00:00:00.001 Started by upstream project "autotest-nightly" build number 4279 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3642 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.082 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.082 The recommended git tool is: git 00:00:00.082 using credential 00000000-0000-0000-0000-000000000002 00:00:00.084 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.123 Fetching changes from the remote Git repository 00:00:00.124 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.182 Using shallow fetch with depth 1 00:00:00.182 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.182 > git --version # timeout=10 00:00:00.225 > git --version # 'git version 2.39.2' 00:00:00.225 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.256 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.256 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.556 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.568 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.580 Checking out Revision 2fb890043673bc2650cdb1a52838125c51a12f85 (FETCH_HEAD) 00:00:06.580 > git config core.sparsecheckout # timeout=10 00:00:06.590 > git read-tree -mu HEAD # timeout=10 00:00:06.607 > git checkout -f 2fb890043673bc2650cdb1a52838125c51a12f85 # timeout=5 00:00:06.627 
Commit message: "jenkins: update TLS certificates" 00:00:06.627 > git rev-list --no-walk 2fb890043673bc2650cdb1a52838125c51a12f85 # timeout=10 00:00:06.722 [Pipeline] Start of Pipeline 00:00:06.736 [Pipeline] library 00:00:06.738 Loading library shm_lib@master 00:00:06.738 Library shm_lib@master is cached. Copying from home. 00:00:06.754 [Pipeline] node 00:00:06.772 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:06.773 [Pipeline] { 00:00:06.781 [Pipeline] catchError 00:00:06.782 [Pipeline] { 00:00:06.792 [Pipeline] wrap 00:00:06.799 [Pipeline] { 00:00:06.806 [Pipeline] stage 00:00:06.808 [Pipeline] { (Prologue) 00:00:07.040 [Pipeline] sh 00:00:07.323 + logger -p user.info -t JENKINS-CI 00:00:07.343 [Pipeline] echo 00:00:07.344 Node: GP11 00:00:07.354 [Pipeline] sh 00:00:07.653 [Pipeline] setCustomBuildProperty 00:00:07.662 [Pipeline] echo 00:00:07.663 Cleanup processes 00:00:07.668 [Pipeline] sh 00:00:07.949 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.949 2739440 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.962 [Pipeline] sh 00:00:08.244 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.244 ++ grep -v 'sudo pgrep' 00:00:08.244 ++ awk '{print $1}' 00:00:08.244 + sudo kill -9 00:00:08.244 + true 00:00:08.256 [Pipeline] cleanWs 00:00:08.264 [WS-CLEANUP] Deleting project workspace... 00:00:08.264 [WS-CLEANUP] Deferred wipeout is used... 
00:00:08.270 [WS-CLEANUP] done 00:00:08.275 [Pipeline] setCustomBuildProperty 00:00:08.287 [Pipeline] sh 00:00:08.566 + sudo git config --global --replace-all safe.directory '*' 00:00:08.669 [Pipeline] httpRequest 00:00:09.258 [Pipeline] echo 00:00:09.260 Sorcerer 10.211.164.20 is alive 00:00:09.271 [Pipeline] retry 00:00:09.273 [Pipeline] { 00:00:09.289 [Pipeline] httpRequest 00:00:09.293 HttpMethod: GET 00:00:09.294 URL: http://10.211.164.20/packages/jbp_2fb890043673bc2650cdb1a52838125c51a12f85.tar.gz 00:00:09.294 Sending request to url: http://10.211.164.20/packages/jbp_2fb890043673bc2650cdb1a52838125c51a12f85.tar.gz 00:00:09.311 Response Code: HTTP/1.1 200 OK 00:00:09.312 Success: Status code 200 is in the accepted range: 200,404 00:00:09.312 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_2fb890043673bc2650cdb1a52838125c51a12f85.tar.gz 00:00:15.491 [Pipeline] } 00:00:15.508 [Pipeline] // retry 00:00:15.516 [Pipeline] sh 00:00:15.803 + tar --no-same-owner -xf jbp_2fb890043673bc2650cdb1a52838125c51a12f85.tar.gz 00:00:15.822 [Pipeline] httpRequest 00:00:16.212 [Pipeline] echo 00:00:16.214 Sorcerer 10.211.164.20 is alive 00:00:16.225 [Pipeline] retry 00:00:16.227 [Pipeline] { 00:00:16.244 [Pipeline] httpRequest 00:00:16.249 HttpMethod: GET 00:00:16.249 URL: http://10.211.164.20/packages/spdk_83e8405e4c25408c010ba2b9e02ce45e2347370c.tar.gz 00:00:16.250 Sending request to url: http://10.211.164.20/packages/spdk_83e8405e4c25408c010ba2b9e02ce45e2347370c.tar.gz 00:00:16.271 Response Code: HTTP/1.1 200 OK 00:00:16.272 Success: Status code 200 is in the accepted range: 200,404 00:00:16.272 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_83e8405e4c25408c010ba2b9e02ce45e2347370c.tar.gz 00:01:17.370 [Pipeline] } 00:01:17.388 [Pipeline] // retry 00:01:17.396 [Pipeline] sh 00:01:17.682 + tar --no-same-owner -xf spdk_83e8405e4c25408c010ba2b9e02ce45e2347370c.tar.gz 00:01:20.992 [Pipeline] sh 00:01:21.279 + git -C spdk log 
--oneline -n5 00:01:21.279 83e8405e4 nvmf/fc: Qpair disconnect callback: Serialize FC delete connection & close qpair process 00:01:21.279 0eab4c6fb nvmf/fc: Validate the ctrlr pointer inside nvmf_fc_req_bdev_abort() 00:01:21.279 4bcab9fb9 correct kick for CQ full case 00:01:21.279 8531656d3 test/nvmf: Interrupt test for local pcie nvme device 00:01:21.279 318515b44 nvme/perf: interrupt mode support for pcie controller 00:01:21.291 [Pipeline] } 00:01:21.305 [Pipeline] // stage 00:01:21.314 [Pipeline] stage 00:01:21.317 [Pipeline] { (Prepare) 00:01:21.333 [Pipeline] writeFile 00:01:21.348 [Pipeline] sh 00:01:21.633 + logger -p user.info -t JENKINS-CI 00:01:21.648 [Pipeline] sh 00:01:21.936 + logger -p user.info -t JENKINS-CI 00:01:21.949 [Pipeline] sh 00:01:22.234 + cat autorun-spdk.conf 00:01:22.234 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:22.234 SPDK_TEST_NVMF=1 00:01:22.234 SPDK_TEST_NVME_CLI=1 00:01:22.234 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:22.234 SPDK_TEST_NVMF_NICS=e810 00:01:22.234 SPDK_RUN_ASAN=1 00:01:22.234 SPDK_RUN_UBSAN=1 00:01:22.234 NET_TYPE=phy 00:01:22.242 RUN_NIGHTLY=1 00:01:22.247 [Pipeline] readFile 00:01:22.271 [Pipeline] withEnv 00:01:22.273 [Pipeline] { 00:01:22.285 [Pipeline] sh 00:01:22.571 + set -ex 00:01:22.571 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:22.571 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:22.571 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:22.571 ++ SPDK_TEST_NVMF=1 00:01:22.571 ++ SPDK_TEST_NVME_CLI=1 00:01:22.571 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:22.571 ++ SPDK_TEST_NVMF_NICS=e810 00:01:22.571 ++ SPDK_RUN_ASAN=1 00:01:22.571 ++ SPDK_RUN_UBSAN=1 00:01:22.571 ++ NET_TYPE=phy 00:01:22.571 ++ RUN_NIGHTLY=1 00:01:22.571 + case $SPDK_TEST_NVMF_NICS in 00:01:22.571 + DRIVERS=ice 00:01:22.571 + [[ tcp == \r\d\m\a ]] 00:01:22.571 + [[ -n ice ]] 00:01:22.571 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:22.571 rmmod: ERROR: Module mlx4_ib is not currently loaded 
00:01:22.571 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:22.571 rmmod: ERROR: Module irdma is not currently loaded 00:01:22.571 rmmod: ERROR: Module i40iw is not currently loaded 00:01:22.571 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:22.571 + true 00:01:22.571 + for D in $DRIVERS 00:01:22.571 + sudo modprobe ice 00:01:22.571 + exit 0 00:01:22.582 [Pipeline] } 00:01:22.598 [Pipeline] // withEnv 00:01:22.604 [Pipeline] } 00:01:22.619 [Pipeline] // stage 00:01:22.629 [Pipeline] catchError 00:01:22.630 [Pipeline] { 00:01:22.644 [Pipeline] timeout 00:01:22.645 Timeout set to expire in 1 hr 0 min 00:01:22.647 [Pipeline] { 00:01:22.661 [Pipeline] stage 00:01:22.663 [Pipeline] { (Tests) 00:01:22.678 [Pipeline] sh 00:01:22.965 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:22.965 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:22.965 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:22.965 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:22.965 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:22.965 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:22.965 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:22.965 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:22.965 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:22.965 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:22.965 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:22.965 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:22.965 + source /etc/os-release 00:01:22.965 ++ NAME='Fedora Linux' 00:01:22.965 ++ VERSION='39 (Cloud Edition)' 00:01:22.965 ++ ID=fedora 00:01:22.965 ++ VERSION_ID=39 00:01:22.965 ++ VERSION_CODENAME= 00:01:22.965 ++ PLATFORM_ID=platform:f39 00:01:22.965 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:22.965 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:22.965 ++ LOGO=fedora-logo-icon 00:01:22.965 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:22.965 ++ HOME_URL=https://fedoraproject.org/ 00:01:22.965 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:22.965 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:22.965 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:22.965 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:22.965 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:22.965 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:22.965 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:22.965 ++ SUPPORT_END=2024-11-12 00:01:22.965 ++ VARIANT='Cloud Edition' 00:01:22.965 ++ VARIANT_ID=cloud 00:01:22.965 + uname -a 00:01:22.965 Linux spdk-gp-11 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:22.965 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:23.921 Hugepages 00:01:23.921 node hugesize free / total 00:01:23.921 node0 1048576kB 0 / 0 00:01:23.921 node0 2048kB 0 / 0 00:01:23.921 node1 1048576kB 0 / 0 00:01:23.921 node1 2048kB 0 / 0 00:01:23.921 00:01:23.921 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:23.921 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:01:23.921 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 
00:01:23.921 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:01:23.921 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:01:23.921 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:01:23.921 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:01:23.921 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:01:23.921 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:01:23.921 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:01:23.921 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:01:23.921 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:01:23.921 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:01:23.921 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:01:23.921 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:01:23.921 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:01:23.921 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:01:23.921 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:01:23.921 + rm -f /tmp/spdk-ld-path 00:01:23.921 + source autorun-spdk.conf 00:01:23.921 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:23.921 ++ SPDK_TEST_NVMF=1 00:01:23.921 ++ SPDK_TEST_NVME_CLI=1 00:01:23.921 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:23.921 ++ SPDK_TEST_NVMF_NICS=e810 00:01:23.921 ++ SPDK_RUN_ASAN=1 00:01:23.921 ++ SPDK_RUN_UBSAN=1 00:01:23.921 ++ NET_TYPE=phy 00:01:23.921 ++ RUN_NIGHTLY=1 00:01:23.921 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:23.921 + [[ -n '' ]] 00:01:23.921 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:24.180 + for M in /var/spdk/build-*-manifest.txt 00:01:24.180 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:24.180 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:24.180 + for M in /var/spdk/build-*-manifest.txt 00:01:24.180 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:24.180 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:24.180 + for M in /var/spdk/build-*-manifest.txt 00:01:24.180 + [[ -f 
/var/spdk/build-repo-manifest.txt ]] 00:01:24.180 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:24.180 ++ uname 00:01:24.180 + [[ Linux == \L\i\n\u\x ]] 00:01:24.180 + sudo dmesg -T 00:01:24.180 + sudo dmesg --clear 00:01:24.180 + dmesg_pid=2740118 00:01:24.180 + [[ Fedora Linux == FreeBSD ]] 00:01:24.180 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:24.180 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:24.180 + sudo dmesg -Tw 00:01:24.180 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:24.180 + [[ -x /usr/src/fio-static/fio ]] 00:01:24.180 + export FIO_BIN=/usr/src/fio-static/fio 00:01:24.180 + FIO_BIN=/usr/src/fio-static/fio 00:01:24.180 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:24.180 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:24.180 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:24.180 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:24.180 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:24.180 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:24.180 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:24.180 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:24.180 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:24.180 11:29:49 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:24.180 11:29:49 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:24.180 11:29:49 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:24.180 11:29:49 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:01:24.180 11:29:49 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:01:24.180 11:29:49 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 
00:01:24.180 11:29:49 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:01:24.180 11:29:49 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_RUN_ASAN=1 00:01:24.180 11:29:49 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:01:24.180 11:29:49 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:01:24.180 11:29:49 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=1 00:01:24.180 11:29:49 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:24.180 11:29:49 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:24.180 11:29:49 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:24.180 11:29:49 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:24.180 11:29:49 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:24.180 11:29:49 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:24.180 11:29:49 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:24.180 11:29:49 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:24.180 11:29:49 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:24.180 11:29:49 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:24.180 11:29:49 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:24.180 11:29:49 -- paths/export.sh@5 -- $ export PATH 00:01:24.180 11:29:49 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:24.180 11:29:49 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:24.180 11:29:49 -- common/autobuild_common.sh@486 -- $ date +%s 00:01:24.180 11:29:49 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1731925789.XXXXXX 00:01:24.180 11:29:49 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1731925789.aDvG59 00:01:24.180 11:29:49 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:01:24.180 11:29:49 -- 
common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:01:24.180 11:29:49 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:24.180 11:29:49 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:24.180 11:29:49 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:24.180 11:29:49 -- common/autobuild_common.sh@502 -- $ get_config_params 00:01:24.180 11:29:49 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:01:24.180 11:29:49 -- common/autotest_common.sh@10 -- $ set +x 00:01:24.180 11:29:49 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk' 00:01:24.180 11:29:49 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:01:24.180 11:29:49 -- pm/common@17 -- $ local monitor 00:01:24.180 11:29:49 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:24.180 11:29:49 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:24.180 11:29:49 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:24.180 11:29:49 -- pm/common@21 -- $ date +%s 00:01:24.180 11:29:49 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:24.180 11:29:49 -- pm/common@21 -- $ date +%s 00:01:24.180 11:29:49 -- pm/common@25 -- $ sleep 1 00:01:24.181 11:29:49 -- pm/common@21 -- $ date +%s 00:01:24.181 11:29:49 -- pm/common@21 -- $ date +%s 00:01:24.181 11:29:49 -- pm/common@21 -- $ 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731925789 00:01:24.181 11:29:49 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731925789 00:01:24.181 11:29:49 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731925789 00:01:24.181 11:29:49 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731925789 00:01:24.181 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731925789_collect-cpu-load.pm.log 00:01:24.181 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731925789_collect-vmstat.pm.log 00:01:24.181 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731925789_collect-cpu-temp.pm.log 00:01:24.181 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731925789_collect-bmc-pm.bmc.pm.log 00:01:25.120 11:29:50 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:01:25.120 11:29:50 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:25.120 11:29:50 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:25.120 11:29:50 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:25.120 11:29:50 -- spdk/autobuild.sh@16 -- $ date -u 00:01:25.120 Mon Nov 18 10:29:50 AM UTC 2024 00:01:25.120 11:29:50 -- spdk/autobuild.sh@17 -- $ git describe --tags 
00:01:25.120 v25.01-pre-189-g83e8405e4 00:01:25.120 11:29:50 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:01:25.120 11:29:50 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:01:25.120 11:29:50 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:25.120 11:29:50 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:25.120 11:29:50 -- common/autotest_common.sh@10 -- $ set +x 00:01:25.378 ************************************ 00:01:25.378 START TEST asan 00:01:25.378 ************************************ 00:01:25.378 11:29:51 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:01:25.378 using asan 00:01:25.378 00:01:25.378 real 0m0.000s 00:01:25.378 user 0m0.000s 00:01:25.378 sys 0m0.000s 00:01:25.378 11:29:51 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:25.378 11:29:51 asan -- common/autotest_common.sh@10 -- $ set +x 00:01:25.378 ************************************ 00:01:25.378 END TEST asan 00:01:25.378 ************************************ 00:01:25.378 11:29:51 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:25.378 11:29:51 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:25.378 11:29:51 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:25.378 11:29:51 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:25.378 11:29:51 -- common/autotest_common.sh@10 -- $ set +x 00:01:25.378 ************************************ 00:01:25.378 START TEST ubsan 00:01:25.378 ************************************ 00:01:25.378 11:29:51 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:01:25.378 using ubsan 00:01:25.378 00:01:25.378 real 0m0.000s 00:01:25.378 user 0m0.000s 00:01:25.378 sys 0m0.000s 00:01:25.378 11:29:51 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:25.378 11:29:51 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:25.378 ************************************ 00:01:25.378 END TEST ubsan 00:01:25.378 
************************************ 00:01:25.378 11:29:51 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:25.378 11:29:51 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:25.378 11:29:51 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:25.378 11:29:51 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:25.378 11:29:51 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:25.378 11:29:51 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:25.378 11:29:51 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:25.378 11:29:51 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:25.378 11:29:51 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-shared 00:01:25.378 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:25.378 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:25.638 Using 'verbs' RDMA provider 00:01:36.186 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:46.171 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:46.171 Creating mk/config.mk...done. 00:01:46.171 Creating mk/cc.flags.mk...done. 00:01:46.171 Type 'make' to build. 
00:01:46.171 11:30:11 -- spdk/autobuild.sh@70 -- $ run_test make make -j48 00:01:46.171 11:30:11 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:46.171 11:30:11 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:46.171 11:30:11 -- common/autotest_common.sh@10 -- $ set +x 00:01:46.171 ************************************ 00:01:46.171 START TEST make 00:01:46.171 ************************************ 00:01:46.171 11:30:11 make -- common/autotest_common.sh@1129 -- $ make -j48 00:01:46.171 make[1]: Nothing to be done for 'all'. 00:01:56.184 The Meson build system 00:01:56.184 Version: 1.5.0 00:01:56.184 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:56.184 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:56.184 Build type: native build 00:01:56.184 Program cat found: YES (/usr/bin/cat) 00:01:56.184 Project name: DPDK 00:01:56.184 Project version: 24.03.0 00:01:56.184 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:56.184 C linker for the host machine: cc ld.bfd 2.40-14 00:01:56.184 Host machine cpu family: x86_64 00:01:56.184 Host machine cpu: x86_64 00:01:56.184 Message: ## Building in Developer Mode ## 00:01:56.184 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:56.184 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:56.184 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:56.184 Program python3 found: YES (/usr/bin/python3) 00:01:56.184 Program cat found: YES (/usr/bin/cat) 00:01:56.184 Compiler for C supports arguments -march=native: YES 00:01:56.184 Checking for size of "void *" : 8 00:01:56.184 Checking for size of "void *" : 8 (cached) 00:01:56.184 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:01:56.184 Library m found: YES 
00:01:56.184 Library numa found: YES 00:01:56.184 Has header "numaif.h" : YES 00:01:56.184 Library fdt found: NO 00:01:56.184 Library execinfo found: NO 00:01:56.184 Has header "execinfo.h" : YES 00:01:56.184 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:56.184 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:56.184 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:56.184 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:56.184 Run-time dependency openssl found: YES 3.1.1 00:01:56.184 Run-time dependency libpcap found: YES 1.10.4 00:01:56.184 Has header "pcap.h" with dependency libpcap: YES 00:01:56.184 Compiler for C supports arguments -Wcast-qual: YES 00:01:56.184 Compiler for C supports arguments -Wdeprecated: YES 00:01:56.184 Compiler for C supports arguments -Wformat: YES 00:01:56.184 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:56.184 Compiler for C supports arguments -Wformat-security: NO 00:01:56.184 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:56.184 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:56.184 Compiler for C supports arguments -Wnested-externs: YES 00:01:56.184 Compiler for C supports arguments -Wold-style-definition: YES 00:01:56.184 Compiler for C supports arguments -Wpointer-arith: YES 00:01:56.184 Compiler for C supports arguments -Wsign-compare: YES 00:01:56.184 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:56.184 Compiler for C supports arguments -Wundef: YES 00:01:56.184 Compiler for C supports arguments -Wwrite-strings: YES 00:01:56.184 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:56.184 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:56.184 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:56.184 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:56.184 Program objdump found: YES (/usr/bin/objdump) 00:01:56.184 
Compiler for C supports arguments -mavx512f: YES 00:01:56.184 Checking if "AVX512 checking" compiles: YES 00:01:56.184 Fetching value of define "__SSE4_2__" : 1 00:01:56.184 Fetching value of define "__AES__" : 1 00:01:56.184 Fetching value of define "__AVX__" : 1 00:01:56.184 Fetching value of define "__AVX2__" : (undefined) 00:01:56.184 Fetching value of define "__AVX512BW__" : (undefined) 00:01:56.184 Fetching value of define "__AVX512CD__" : (undefined) 00:01:56.184 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:56.184 Fetching value of define "__AVX512F__" : (undefined) 00:01:56.184 Fetching value of define "__AVX512VL__" : (undefined) 00:01:56.184 Fetching value of define "__PCLMUL__" : 1 00:01:56.184 Fetching value of define "__RDRND__" : 1 00:01:56.184 Fetching value of define "__RDSEED__" : (undefined) 00:01:56.184 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:56.184 Fetching value of define "__znver1__" : (undefined) 00:01:56.184 Fetching value of define "__znver2__" : (undefined) 00:01:56.184 Fetching value of define "__znver3__" : (undefined) 00:01:56.184 Fetching value of define "__znver4__" : (undefined) 00:01:56.184 Library asan found: YES 00:01:56.184 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:56.184 Message: lib/log: Defining dependency "log" 00:01:56.184 Message: lib/kvargs: Defining dependency "kvargs" 00:01:56.184 Message: lib/telemetry: Defining dependency "telemetry" 00:01:56.184 Library rt found: YES 00:01:56.184 Checking for function "getentropy" : NO 00:01:56.184 Message: lib/eal: Defining dependency "eal" 00:01:56.184 Message: lib/ring: Defining dependency "ring" 00:01:56.184 Message: lib/rcu: Defining dependency "rcu" 00:01:56.184 Message: lib/mempool: Defining dependency "mempool" 00:01:56.184 Message: lib/mbuf: Defining dependency "mbuf" 00:01:56.184 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:56.184 Fetching value of define "__AVX512F__" : (undefined) (cached) 
00:01:56.184 Compiler for C supports arguments -mpclmul: YES 00:01:56.184 Compiler for C supports arguments -maes: YES 00:01:56.184 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:56.184 Compiler for C supports arguments -mavx512bw: YES 00:01:56.184 Compiler for C supports arguments -mavx512dq: YES 00:01:56.184 Compiler for C supports arguments -mavx512vl: YES 00:01:56.184 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:56.184 Compiler for C supports arguments -mavx2: YES 00:01:56.184 Compiler for C supports arguments -mavx: YES 00:01:56.184 Message: lib/net: Defining dependency "net" 00:01:56.184 Message: lib/meter: Defining dependency "meter" 00:01:56.184 Message: lib/ethdev: Defining dependency "ethdev" 00:01:56.184 Message: lib/pci: Defining dependency "pci" 00:01:56.184 Message: lib/cmdline: Defining dependency "cmdline" 00:01:56.184 Message: lib/hash: Defining dependency "hash" 00:01:56.184 Message: lib/timer: Defining dependency "timer" 00:01:56.184 Message: lib/compressdev: Defining dependency "compressdev" 00:01:56.184 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:56.184 Message: lib/dmadev: Defining dependency "dmadev" 00:01:56.184 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:56.184 Message: lib/power: Defining dependency "power" 00:01:56.185 Message: lib/reorder: Defining dependency "reorder" 00:01:56.185 Message: lib/security: Defining dependency "security" 00:01:56.185 Has header "linux/userfaultfd.h" : YES 00:01:56.185 Has header "linux/vduse.h" : YES 00:01:56.185 Message: lib/vhost: Defining dependency "vhost" 00:01:56.185 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:56.185 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:56.185 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:56.185 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:56.185 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 
00:01:56.185 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:56.185 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:56.185 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:56.185 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:56.185 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:56.185 Program doxygen found: YES (/usr/local/bin/doxygen) 00:01:56.185 Configuring doxy-api-html.conf using configuration 00:01:56.185 Configuring doxy-api-man.conf using configuration 00:01:56.185 Program mandb found: YES (/usr/bin/mandb) 00:01:56.185 Program sphinx-build found: NO 00:01:56.185 Configuring rte_build_config.h using configuration 00:01:56.185 Message: 00:01:56.185 ================= 00:01:56.185 Applications Enabled 00:01:56.185 ================= 00:01:56.185 00:01:56.185 apps: 00:01:56.185 00:01:56.185 00:01:56.185 Message: 00:01:56.185 ================= 00:01:56.185 Libraries Enabled 00:01:56.185 ================= 00:01:56.185 00:01:56.185 libs: 00:01:56.185 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:56.185 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:56.185 cryptodev, dmadev, power, reorder, security, vhost, 00:01:56.185 00:01:56.185 Message: 00:01:56.185 =============== 00:01:56.185 Drivers Enabled 00:01:56.185 =============== 00:01:56.185 00:01:56.185 common: 00:01:56.185 00:01:56.185 bus: 00:01:56.185 pci, vdev, 00:01:56.185 mempool: 00:01:56.185 ring, 00:01:56.185 dma: 00:01:56.185 00:01:56.185 net: 00:01:56.185 00:01:56.185 crypto: 00:01:56.185 00:01:56.185 compress: 00:01:56.185 00:01:56.185 vdpa: 00:01:56.185 00:01:56.185 00:01:56.185 Message: 00:01:56.185 ================= 00:01:56.185 Content Skipped 00:01:56.185 ================= 00:01:56.185 00:01:56.185 apps: 00:01:56.185 dumpcap: explicitly disabled via build config 00:01:56.185 graph: explicitly disabled via build 
config 00:01:56.185 pdump: explicitly disabled via build config 00:01:56.185 proc-info: explicitly disabled via build config 00:01:56.185 test-acl: explicitly disabled via build config 00:01:56.185 test-bbdev: explicitly disabled via build config 00:01:56.185 test-cmdline: explicitly disabled via build config 00:01:56.185 test-compress-perf: explicitly disabled via build config 00:01:56.185 test-crypto-perf: explicitly disabled via build config 00:01:56.185 test-dma-perf: explicitly disabled via build config 00:01:56.185 test-eventdev: explicitly disabled via build config 00:01:56.185 test-fib: explicitly disabled via build config 00:01:56.185 test-flow-perf: explicitly disabled via build config 00:01:56.185 test-gpudev: explicitly disabled via build config 00:01:56.185 test-mldev: explicitly disabled via build config 00:01:56.185 test-pipeline: explicitly disabled via build config 00:01:56.185 test-pmd: explicitly disabled via build config 00:01:56.185 test-regex: explicitly disabled via build config 00:01:56.185 test-sad: explicitly disabled via build config 00:01:56.185 test-security-perf: explicitly disabled via build config 00:01:56.185 00:01:56.185 libs: 00:01:56.185 argparse: explicitly disabled via build config 00:01:56.185 metrics: explicitly disabled via build config 00:01:56.185 acl: explicitly disabled via build config 00:01:56.185 bbdev: explicitly disabled via build config 00:01:56.185 bitratestats: explicitly disabled via build config 00:01:56.185 bpf: explicitly disabled via build config 00:01:56.185 cfgfile: explicitly disabled via build config 00:01:56.185 distributor: explicitly disabled via build config 00:01:56.185 efd: explicitly disabled via build config 00:01:56.185 eventdev: explicitly disabled via build config 00:01:56.185 dispatcher: explicitly disabled via build config 00:01:56.185 gpudev: explicitly disabled via build config 00:01:56.185 gro: explicitly disabled via build config 00:01:56.185 gso: explicitly disabled via build config 
00:01:56.185 ip_frag: explicitly disabled via build config 00:01:56.185 jobstats: explicitly disabled via build config 00:01:56.185 latencystats: explicitly disabled via build config 00:01:56.185 lpm: explicitly disabled via build config 00:01:56.185 member: explicitly disabled via build config 00:01:56.185 pcapng: explicitly disabled via build config 00:01:56.185 rawdev: explicitly disabled via build config 00:01:56.185 regexdev: explicitly disabled via build config 00:01:56.185 mldev: explicitly disabled via build config 00:01:56.185 rib: explicitly disabled via build config 00:01:56.185 sched: explicitly disabled via build config 00:01:56.185 stack: explicitly disabled via build config 00:01:56.185 ipsec: explicitly disabled via build config 00:01:56.185 pdcp: explicitly disabled via build config 00:01:56.185 fib: explicitly disabled via build config 00:01:56.185 port: explicitly disabled via build config 00:01:56.185 pdump: explicitly disabled via build config 00:01:56.185 table: explicitly disabled via build config 00:01:56.185 pipeline: explicitly disabled via build config 00:01:56.185 graph: explicitly disabled via build config 00:01:56.185 node: explicitly disabled via build config 00:01:56.185 00:01:56.185 drivers: 00:01:56.185 common/cpt: not in enabled drivers build config 00:01:56.185 common/dpaax: not in enabled drivers build config 00:01:56.185 common/iavf: not in enabled drivers build config 00:01:56.185 common/idpf: not in enabled drivers build config 00:01:56.185 common/ionic: not in enabled drivers build config 00:01:56.185 common/mvep: not in enabled drivers build config 00:01:56.185 common/octeontx: not in enabled drivers build config 00:01:56.185 bus/auxiliary: not in enabled drivers build config 00:01:56.185 bus/cdx: not in enabled drivers build config 00:01:56.185 bus/dpaa: not in enabled drivers build config 00:01:56.185 bus/fslmc: not in enabled drivers build config 00:01:56.185 bus/ifpga: not in enabled drivers build config 00:01:56.185 
bus/platform: not in enabled drivers build config 00:01:56.185 bus/uacce: not in enabled drivers build config 00:01:56.185 bus/vmbus: not in enabled drivers build config 00:01:56.185 common/cnxk: not in enabled drivers build config 00:01:56.185 common/mlx5: not in enabled drivers build config 00:01:56.185 common/nfp: not in enabled drivers build config 00:01:56.185 common/nitrox: not in enabled drivers build config 00:01:56.185 common/qat: not in enabled drivers build config 00:01:56.185 common/sfc_efx: not in enabled drivers build config 00:01:56.185 mempool/bucket: not in enabled drivers build config 00:01:56.185 mempool/cnxk: not in enabled drivers build config 00:01:56.185 mempool/dpaa: not in enabled drivers build config 00:01:56.185 mempool/dpaa2: not in enabled drivers build config 00:01:56.185 mempool/octeontx: not in enabled drivers build config 00:01:56.185 mempool/stack: not in enabled drivers build config 00:01:56.185 dma/cnxk: not in enabled drivers build config 00:01:56.185 dma/dpaa: not in enabled drivers build config 00:01:56.185 dma/dpaa2: not in enabled drivers build config 00:01:56.185 dma/hisilicon: not in enabled drivers build config 00:01:56.185 dma/idxd: not in enabled drivers build config 00:01:56.185 dma/ioat: not in enabled drivers build config 00:01:56.185 dma/skeleton: not in enabled drivers build config 00:01:56.185 net/af_packet: not in enabled drivers build config 00:01:56.185 net/af_xdp: not in enabled drivers build config 00:01:56.185 net/ark: not in enabled drivers build config 00:01:56.185 net/atlantic: not in enabled drivers build config 00:01:56.185 net/avp: not in enabled drivers build config 00:01:56.185 net/axgbe: not in enabled drivers build config 00:01:56.185 net/bnx2x: not in enabled drivers build config 00:01:56.185 net/bnxt: not in enabled drivers build config 00:01:56.185 net/bonding: not in enabled drivers build config 00:01:56.185 net/cnxk: not in enabled drivers build config 00:01:56.185 net/cpfl: not in enabled 
drivers build config 00:01:56.185 net/cxgbe: not in enabled drivers build config 00:01:56.185 net/dpaa: not in enabled drivers build config 00:01:56.185 net/dpaa2: not in enabled drivers build config 00:01:56.185 net/e1000: not in enabled drivers build config 00:01:56.185 net/ena: not in enabled drivers build config 00:01:56.185 net/enetc: not in enabled drivers build config 00:01:56.185 net/enetfec: not in enabled drivers build config 00:01:56.185 net/enic: not in enabled drivers build config 00:01:56.185 net/failsafe: not in enabled drivers build config 00:01:56.185 net/fm10k: not in enabled drivers build config 00:01:56.185 net/gve: not in enabled drivers build config 00:01:56.185 net/hinic: not in enabled drivers build config 00:01:56.185 net/hns3: not in enabled drivers build config 00:01:56.185 net/i40e: not in enabled drivers build config 00:01:56.185 net/iavf: not in enabled drivers build config 00:01:56.185 net/ice: not in enabled drivers build config 00:01:56.185 net/idpf: not in enabled drivers build config 00:01:56.185 net/igc: not in enabled drivers build config 00:01:56.185 net/ionic: not in enabled drivers build config 00:01:56.185 net/ipn3ke: not in enabled drivers build config 00:01:56.185 net/ixgbe: not in enabled drivers build config 00:01:56.185 net/mana: not in enabled drivers build config 00:01:56.185 net/memif: not in enabled drivers build config 00:01:56.185 net/mlx4: not in enabled drivers build config 00:01:56.185 net/mlx5: not in enabled drivers build config 00:01:56.185 net/mvneta: not in enabled drivers build config 00:01:56.186 net/mvpp2: not in enabled drivers build config 00:01:56.186 net/netvsc: not in enabled drivers build config 00:01:56.186 net/nfb: not in enabled drivers build config 00:01:56.186 net/nfp: not in enabled drivers build config 00:01:56.186 net/ngbe: not in enabled drivers build config 00:01:56.186 net/null: not in enabled drivers build config 00:01:56.186 net/octeontx: not in enabled drivers build config 
00:01:56.186 net/octeon_ep: not in enabled drivers build config 00:01:56.186 net/pcap: not in enabled drivers build config 00:01:56.186 net/pfe: not in enabled drivers build config 00:01:56.186 net/qede: not in enabled drivers build config 00:01:56.186 net/ring: not in enabled drivers build config 00:01:56.186 net/sfc: not in enabled drivers build config 00:01:56.186 net/softnic: not in enabled drivers build config 00:01:56.186 net/tap: not in enabled drivers build config 00:01:56.186 net/thunderx: not in enabled drivers build config 00:01:56.186 net/txgbe: not in enabled drivers build config 00:01:56.186 net/vdev_netvsc: not in enabled drivers build config 00:01:56.186 net/vhost: not in enabled drivers build config 00:01:56.186 net/virtio: not in enabled drivers build config 00:01:56.186 net/vmxnet3: not in enabled drivers build config 00:01:56.186 raw/*: missing internal dependency, "rawdev" 00:01:56.186 crypto/armv8: not in enabled drivers build config 00:01:56.186 crypto/bcmfs: not in enabled drivers build config 00:01:56.186 crypto/caam_jr: not in enabled drivers build config 00:01:56.186 crypto/ccp: not in enabled drivers build config 00:01:56.186 crypto/cnxk: not in enabled drivers build config 00:01:56.186 crypto/dpaa_sec: not in enabled drivers build config 00:01:56.186 crypto/dpaa2_sec: not in enabled drivers build config 00:01:56.186 crypto/ipsec_mb: not in enabled drivers build config 00:01:56.186 crypto/mlx5: not in enabled drivers build config 00:01:56.186 crypto/mvsam: not in enabled drivers build config 00:01:56.186 crypto/nitrox: not in enabled drivers build config 00:01:56.186 crypto/null: not in enabled drivers build config 00:01:56.186 crypto/octeontx: not in enabled drivers build config 00:01:56.186 crypto/openssl: not in enabled drivers build config 00:01:56.186 crypto/scheduler: not in enabled drivers build config 00:01:56.186 crypto/uadk: not in enabled drivers build config 00:01:56.186 crypto/virtio: not in enabled drivers build config 
00:01:56.186 compress/isal: not in enabled drivers build config 00:01:56.186 compress/mlx5: not in enabled drivers build config 00:01:56.186 compress/nitrox: not in enabled drivers build config 00:01:56.186 compress/octeontx: not in enabled drivers build config 00:01:56.186 compress/zlib: not in enabled drivers build config 00:01:56.186 regex/*: missing internal dependency, "regexdev" 00:01:56.186 ml/*: missing internal dependency, "mldev" 00:01:56.186 vdpa/ifc: not in enabled drivers build config 00:01:56.186 vdpa/mlx5: not in enabled drivers build config 00:01:56.186 vdpa/nfp: not in enabled drivers build config 00:01:56.186 vdpa/sfc: not in enabled drivers build config 00:01:56.186 event/*: missing internal dependency, "eventdev" 00:01:56.186 baseband/*: missing internal dependency, "bbdev" 00:01:56.186 gpu/*: missing internal dependency, "gpudev" 00:01:56.186 00:01:56.186 00:01:56.186 Build targets in project: 85 00:01:56.186 00:01:56.186 DPDK 24.03.0 00:01:56.186 00:01:56.186 User defined options 00:01:56.186 buildtype : debug 00:01:56.186 default_library : shared 00:01:56.186 libdir : lib 00:01:56.186 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:56.186 b_sanitize : address 00:01:56.186 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:56.186 c_link_args : 00:01:56.186 cpu_instruction_set: native 00:01:56.186 disable_apps : test-acl,graph,test-dma-perf,test-gpudev,test-crypto-perf,test,test-security-perf,test-mldev,proc-info,test-pmd,test-pipeline,test-eventdev,test-cmdline,test-fib,pdump,test-flow-perf,test-bbdev,test-regex,test-sad,dumpcap,test-compress-perf 00:01:56.186 disable_libs : acl,bitratestats,graph,bbdev,jobstats,ipsec,gso,table,rib,node,mldev,sched,ip_frag,cfgfile,port,pcapng,pdcp,argparse,stack,eventdev,regexdev,distributor,gro,efd,pipeline,bpf,dispatcher,lpm,metrics,latencystats,pdump,gpudev,member,fib,rawdev 00:01:56.186 enable_docs : false 00:01:56.186 
enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:56.186 enable_kmods : false 00:01:56.186 max_lcores : 128 00:01:56.186 tests : false 00:01:56.186 00:01:56.186 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:56.186 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:56.186 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:56.186 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:56.186 [3/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:56.186 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:56.186 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:56.186 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:56.186 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:56.186 [8/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:56.186 [9/268] Linking static target lib/librte_kvargs.a 00:01:56.186 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:56.186 [11/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:56.186 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:56.186 [13/268] Linking static target lib/librte_log.a 00:01:56.186 [14/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:56.186 [15/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:56.186 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:56.761 [17/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.024 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:57.024 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 
00:01:57.024 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:57.024 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:57.024 [22/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:57.025 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:57.025 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:57.025 [25/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:57.025 [26/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:57.025 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:57.025 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:57.025 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:57.025 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:57.025 [31/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:57.025 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:57.025 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:57.025 [34/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:57.025 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:57.025 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:57.025 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:57.025 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:57.025 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:57.025 [40/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:57.025 [41/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 
00:01:57.025 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:57.025 [43/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:57.025 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:57.025 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:57.025 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:57.025 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:57.025 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:57.025 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:57.025 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:57.025 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:57.025 [52/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:57.284 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:57.284 [54/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:57.284 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:57.284 [56/268] Linking static target lib/librte_telemetry.a 00:01:57.284 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:57.284 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:57.284 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:57.284 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:57.284 [61/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.573 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:57.573 [63/268] Linking target lib/librte_log.so.24.1 00:01:57.573 [64/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:57.573 [65/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:57.573 [66/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:57.573 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:57.847 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:57.847 [69/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:57.847 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:57.847 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:57.847 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:57.847 [73/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:57.847 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:57.847 [75/268] Linking static target lib/librte_pci.a 00:01:57.847 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:57.847 [77/268] Linking target lib/librte_kvargs.so.24.1 00:01:57.847 [78/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:58.136 [79/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:58.136 [80/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:58.136 [81/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:58.136 [82/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:58.136 [83/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:58.136 [84/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:58.136 [85/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:58.136 [86/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:58.136 [87/268] Linking static target lib/librte_meter.a 00:01:58.136 [88/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:58.136 [89/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:58.136 [90/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:58.137 [91/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:58.137 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:58.137 [93/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:58.137 [94/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:58.137 [95/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:58.137 [96/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:58.137 [97/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:58.137 [98/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:58.137 [99/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:58.137 [100/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:58.137 [101/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:58.137 [102/268] Linking static target lib/librte_ring.a 00:01:58.137 [103/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:58.137 [104/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:58.137 [105/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:58.137 [106/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:58.137 [107/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:58.137 [108/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.410 [109/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:58.410 [110/268] Compiling C object 
lib/librte_power.a.p/power_guest_channel.c.o 00:01:58.410 [111/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:58.410 [112/268] Linking target lib/librte_telemetry.so.24.1 00:01:58.410 [113/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:58.410 [114/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:58.410 [115/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:58.410 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:58.410 [117/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.410 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:58.410 [119/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:58.410 [120/268] Linking static target lib/librte_mempool.a 00:01:58.669 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:58.669 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:58.669 [123/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.669 [124/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:58.669 [125/268] Linking static target lib/librte_rcu.a 00:01:58.669 [126/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:58.669 [127/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:58.669 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:58.669 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:58.669 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:58.931 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:58.931 [132/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.931 
[133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:58.931 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:58.931 [135/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:58.931 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:58.931 [137/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:58.931 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:58.931 [139/268] Linking static target lib/librte_cmdline.a 00:01:58.931 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:58.931 [141/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:59.192 [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:59.192 [143/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:59.192 [144/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:59.192 [145/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:59.192 [146/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:59.192 [147/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:59.192 [148/268] Linking static target lib/librte_eal.a 00:01:59.192 [149/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:59.192 [150/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:59.192 [151/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:59.192 [152/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:59.192 [153/268] Linking static target lib/librte_timer.a 00:01:59.192 [154/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.451 [155/268] Compiling C object 
lib/librte_power.a.p/power_rte_power.c.o 00:01:59.451 [156/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:59.451 [157/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:59.451 [158/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:59.451 [159/268] Linking static target lib/librte_dmadev.a 00:01:59.711 [160/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:59.711 [161/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.711 [162/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:59.711 [163/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.711 [164/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:59.970 [165/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:59.970 [166/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:59.970 [167/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:59.970 [168/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:59.970 [169/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:59.970 [170/268] Linking static target lib/librte_net.a 00:01:59.970 [171/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:59.970 [172/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:59.970 [173/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:59.970 [174/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:59.970 [175/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:59.970 [176/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.970 [177/268] Generating 
lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.970 [178/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:00.229 [179/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:00.229 [180/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:00.229 [181/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:00.229 [182/268] Linking static target lib/librte_power.a 00:02:00.229 [183/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:00.229 [184/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:00.229 [185/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:00.229 [186/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:00.229 [187/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.229 [188/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:00.229 [189/268] Linking static target lib/librte_compressdev.a 00:02:00.488 [190/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:00.488 [191/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:00.488 [192/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:00.488 [193/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:00.488 [194/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:00.488 [195/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:00.488 [196/268] Linking static target drivers/librte_bus_vdev.a 00:02:00.488 [197/268] Linking static target drivers/librte_bus_pci.a 00:02:00.488 [198/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:00.488 [199/268] Linking static target 
lib/librte_hash.a 00:02:00.488 [200/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:00.488 [201/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:00.488 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:00.747 [203/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:00.747 [204/268] Linking static target lib/librte_reorder.a 00:02:00.747 [205/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.747 [206/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.747 [207/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:00.747 [208/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.747 [209/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:00.747 [210/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:00.747 [211/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:00.747 [212/268] Linking static target drivers/librte_mempool_ring.a 00:02:00.747 [213/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.005 [214/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.005 [215/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.263 [216/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:01.263 [217/268] Linking static target lib/librte_security.a 00:02:01.521 [218/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.779 [219/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:02.346 
[220/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:02.346 [221/268] Linking static target lib/librte_mbuf.a 00:02:02.912 [222/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.912 [223/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:02.912 [224/268] Linking static target lib/librte_cryptodev.a 00:02:03.847 [225/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.847 [226/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:03.847 [227/268] Linking static target lib/librte_ethdev.a 00:02:05.223 [228/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.223 [229/268] Linking target lib/librte_eal.so.24.1 00:02:05.223 [230/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:05.223 [231/268] Linking target lib/librte_meter.so.24.1 00:02:05.223 [232/268] Linking target lib/librte_ring.so.24.1 00:02:05.223 [233/268] Linking target lib/librte_pci.so.24.1 00:02:05.223 [234/268] Linking target lib/librte_timer.so.24.1 00:02:05.223 [235/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:05.223 [236/268] Linking target lib/librte_dmadev.so.24.1 00:02:05.223 [237/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:05.223 [238/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:05.223 [239/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:05.223 [240/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:05.223 [241/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:05.481 [242/268] Linking target lib/librte_rcu.so.24.1 00:02:05.481 [243/268] Linking target lib/librte_mempool.so.24.1 00:02:05.481 [244/268] Linking target 
drivers/librte_bus_pci.so.24.1 00:02:05.481 [245/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:05.481 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:05.481 [247/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:05.481 [248/268] Linking target lib/librte_mbuf.so.24.1 00:02:05.740 [249/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:05.740 [250/268] Linking target lib/librte_compressdev.so.24.1 00:02:05.740 [251/268] Linking target lib/librte_reorder.so.24.1 00:02:05.740 [252/268] Linking target lib/librte_net.so.24.1 00:02:05.740 [253/268] Linking target lib/librte_cryptodev.so.24.1 00:02:05.740 [254/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:05.740 [255/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:05.999 [256/268] Linking target lib/librte_cmdline.so.24.1 00:02:05.999 [257/268] Linking target lib/librte_security.so.24.1 00:02:05.999 [258/268] Linking target lib/librte_hash.so.24.1 00:02:05.999 [259/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:06.566 [260/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:07.948 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.948 [262/268] Linking target lib/librte_ethdev.so.24.1 00:02:07.948 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:08.207 [264/268] Linking target lib/librte_power.so.24.1 00:02:34.744 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:34.744 [266/268] Linking static target lib/librte_vhost.a 00:02:34.744 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.744 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:34.744 
INFO: autodetecting backend as ninja 00:02:34.744 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:02:34.744 CC lib/ut/ut.o 00:02:34.744 CC lib/log/log.o 00:02:34.744 CC lib/log/log_flags.o 00:02:34.744 CC lib/log/log_deprecated.o 00:02:34.744 CC lib/ut_mock/mock.o 00:02:34.744 LIB libspdk_ut.a 00:02:34.744 LIB libspdk_log.a 00:02:34.744 LIB libspdk_ut_mock.a 00:02:34.744 SO libspdk_ut.so.2.0 00:02:34.744 SO libspdk_log.so.7.1 00:02:34.744 SO libspdk_ut_mock.so.6.0 00:02:34.744 SYMLINK libspdk_ut.so 00:02:34.744 SYMLINK libspdk_log.so 00:02:34.744 SYMLINK libspdk_ut_mock.so 00:02:35.002 CC lib/dma/dma.o 00:02:35.002 CXX lib/trace_parser/trace.o 00:02:35.002 CC lib/util/base64.o 00:02:35.002 CC lib/ioat/ioat.o 00:02:35.002 CC lib/util/bit_array.o 00:02:35.002 CC lib/util/cpuset.o 00:02:35.002 CC lib/util/crc16.o 00:02:35.002 CC lib/util/crc32.o 00:02:35.002 CC lib/util/crc32c.o 00:02:35.002 CC lib/util/crc32_ieee.o 00:02:35.002 CC lib/util/crc64.o 00:02:35.002 CC lib/util/dif.o 00:02:35.002 CC lib/util/fd.o 00:02:35.002 CC lib/util/fd_group.o 00:02:35.002 CC lib/util/file.o 00:02:35.002 CC lib/util/hexlify.o 00:02:35.002 CC lib/util/iov.o 00:02:35.002 CC lib/util/math.o 00:02:35.002 CC lib/util/net.o 00:02:35.002 CC lib/util/pipe.o 00:02:35.002 CC lib/util/strerror_tls.o 00:02:35.002 CC lib/util/string.o 00:02:35.002 CC lib/util/xor.o 00:02:35.002 CC lib/util/uuid.o 00:02:35.002 CC lib/util/zipf.o 00:02:35.002 CC lib/util/md5.o 00:02:35.002 CC lib/vfio_user/host/vfio_user_pci.o 00:02:35.002 CC lib/vfio_user/host/vfio_user.o 00:02:35.260 LIB libspdk_dma.a 00:02:35.260 SO libspdk_dma.so.5.0 00:02:35.260 SYMLINK libspdk_dma.so 00:02:35.260 LIB libspdk_ioat.a 00:02:35.260 SO libspdk_ioat.so.7.0 00:02:35.260 SYMLINK libspdk_ioat.so 00:02:35.260 LIB libspdk_vfio_user.a 00:02:35.518 SO libspdk_vfio_user.so.5.0 00:02:35.518 SYMLINK libspdk_vfio_user.so 00:02:35.776 LIB libspdk_util.a 
00:02:35.776 SO libspdk_util.so.10.1 00:02:35.776 SYMLINK libspdk_util.so 00:02:36.037 LIB libspdk_trace_parser.a 00:02:36.037 SO libspdk_trace_parser.so.6.0 00:02:36.037 CC lib/json/json_parse.o 00:02:36.037 CC lib/vmd/vmd.o 00:02:36.037 CC lib/json/json_util.o 00:02:36.037 CC lib/rdma_utils/rdma_utils.o 00:02:36.037 CC lib/vmd/led.o 00:02:36.037 CC lib/json/json_write.o 00:02:36.037 CC lib/idxd/idxd.o 00:02:36.037 CC lib/env_dpdk/env.o 00:02:36.037 CC lib/idxd/idxd_user.o 00:02:36.037 CC lib/env_dpdk/memory.o 00:02:36.037 CC lib/idxd/idxd_kernel.o 00:02:36.037 CC lib/env_dpdk/pci.o 00:02:36.037 CC lib/conf/conf.o 00:02:36.037 CC lib/env_dpdk/init.o 00:02:36.037 CC lib/env_dpdk/threads.o 00:02:36.037 CC lib/env_dpdk/pci_ioat.o 00:02:36.037 CC lib/env_dpdk/pci_virtio.o 00:02:36.037 CC lib/env_dpdk/pci_vmd.o 00:02:36.037 CC lib/env_dpdk/pci_idxd.o 00:02:36.037 CC lib/env_dpdk/pci_event.o 00:02:36.037 CC lib/env_dpdk/pci_dpdk.o 00:02:36.037 CC lib/env_dpdk/sigbus_handler.o 00:02:36.037 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:36.037 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:36.037 SYMLINK libspdk_trace_parser.so 00:02:36.295 LIB libspdk_conf.a 00:02:36.295 SO libspdk_conf.so.6.0 00:02:36.295 LIB libspdk_rdma_utils.a 00:02:36.553 SYMLINK libspdk_conf.so 00:02:36.553 SO libspdk_rdma_utils.so.1.0 00:02:36.553 LIB libspdk_json.a 00:02:36.553 SO libspdk_json.so.6.0 00:02:36.553 SYMLINK libspdk_rdma_utils.so 00:02:36.553 SYMLINK libspdk_json.so 00:02:36.553 CC lib/rdma_provider/common.o 00:02:36.553 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:36.812 CC lib/jsonrpc/jsonrpc_server.o 00:02:36.812 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:36.812 CC lib/jsonrpc/jsonrpc_client.o 00:02:36.812 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:36.812 LIB libspdk_rdma_provider.a 00:02:36.812 LIB libspdk_idxd.a 00:02:36.812 SO libspdk_rdma_provider.so.7.0 00:02:37.070 SO libspdk_idxd.so.12.1 00:02:37.070 SYMLINK libspdk_rdma_provider.so 00:02:37.070 LIB libspdk_vmd.a 00:02:37.070 SYMLINK 
libspdk_idxd.so 00:02:37.070 LIB libspdk_jsonrpc.a 00:02:37.070 SO libspdk_vmd.so.6.0 00:02:37.070 SO libspdk_jsonrpc.so.6.0 00:02:37.070 SYMLINK libspdk_vmd.so 00:02:37.070 SYMLINK libspdk_jsonrpc.so 00:02:37.328 CC lib/rpc/rpc.o 00:02:37.586 LIB libspdk_rpc.a 00:02:37.586 SO libspdk_rpc.so.6.0 00:02:37.586 SYMLINK libspdk_rpc.so 00:02:37.844 CC lib/notify/notify.o 00:02:37.844 CC lib/trace/trace.o 00:02:37.844 CC lib/trace/trace_flags.o 00:02:37.844 CC lib/notify/notify_rpc.o 00:02:37.844 CC lib/trace/trace_rpc.o 00:02:37.844 CC lib/keyring/keyring.o 00:02:37.844 CC lib/keyring/keyring_rpc.o 00:02:37.844 LIB libspdk_notify.a 00:02:37.844 SO libspdk_notify.so.6.0 00:02:38.103 SYMLINK libspdk_notify.so 00:02:38.103 LIB libspdk_keyring.a 00:02:38.103 SO libspdk_keyring.so.2.0 00:02:38.103 LIB libspdk_trace.a 00:02:38.103 SO libspdk_trace.so.11.0 00:02:38.103 SYMLINK libspdk_keyring.so 00:02:38.103 SYMLINK libspdk_trace.so 00:02:38.361 CC lib/sock/sock.o 00:02:38.361 CC lib/sock/sock_rpc.o 00:02:38.361 CC lib/thread/thread.o 00:02:38.361 CC lib/thread/iobuf.o 00:02:38.928 LIB libspdk_sock.a 00:02:38.928 SO libspdk_sock.so.10.0 00:02:38.928 SYMLINK libspdk_sock.so 00:02:38.928 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:38.928 CC lib/nvme/nvme_ctrlr.o 00:02:38.928 CC lib/nvme/nvme_fabric.o 00:02:38.928 CC lib/nvme/nvme_ns_cmd.o 00:02:38.928 CC lib/nvme/nvme_ns.o 00:02:38.928 CC lib/nvme/nvme_pcie_common.o 00:02:38.928 CC lib/nvme/nvme_pcie.o 00:02:38.928 CC lib/nvme/nvme_qpair.o 00:02:38.928 CC lib/nvme/nvme.o 00:02:38.928 CC lib/nvme/nvme_quirks.o 00:02:38.928 CC lib/nvme/nvme_transport.o 00:02:38.928 CC lib/nvme/nvme_discovery.o 00:02:38.928 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:38.928 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:38.928 CC lib/nvme/nvme_tcp.o 00:02:38.928 CC lib/nvme/nvme_opal.o 00:02:38.928 CC lib/nvme/nvme_io_msg.o 00:02:38.928 CC lib/nvme/nvme_poll_group.o 00:02:38.928 CC lib/nvme/nvme_zns.o 00:02:38.928 CC lib/nvme/nvme_stubs.o 00:02:38.928 CC 
lib/nvme/nvme_auth.o 00:02:38.928 CC lib/nvme/nvme_rdma.o 00:02:38.928 CC lib/nvme/nvme_cuse.o 00:02:39.186 LIB libspdk_env_dpdk.a 00:02:39.186 SO libspdk_env_dpdk.so.15.1 00:02:39.186 SYMLINK libspdk_env_dpdk.so 00:02:40.563 LIB libspdk_thread.a 00:02:40.563 SO libspdk_thread.so.11.0 00:02:40.563 SYMLINK libspdk_thread.so 00:02:40.563 CC lib/blob/blobstore.o 00:02:40.563 CC lib/virtio/virtio.o 00:02:40.563 CC lib/fsdev/fsdev.o 00:02:40.563 CC lib/init/json_config.o 00:02:40.563 CC lib/blob/request.o 00:02:40.563 CC lib/accel/accel.o 00:02:40.563 CC lib/virtio/virtio_vhost_user.o 00:02:40.563 CC lib/fsdev/fsdev_io.o 00:02:40.563 CC lib/blob/zeroes.o 00:02:40.563 CC lib/init/subsystem.o 00:02:40.563 CC lib/blob/blob_bs_dev.o 00:02:40.563 CC lib/fsdev/fsdev_rpc.o 00:02:40.563 CC lib/virtio/virtio_vfio_user.o 00:02:40.563 CC lib/accel/accel_rpc.o 00:02:40.563 CC lib/init/subsystem_rpc.o 00:02:40.563 CC lib/init/rpc.o 00:02:40.563 CC lib/virtio/virtio_pci.o 00:02:40.563 CC lib/accel/accel_sw.o 00:02:40.821 LIB libspdk_init.a 00:02:41.079 SO libspdk_init.so.6.0 00:02:41.079 SYMLINK libspdk_init.so 00:02:41.079 LIB libspdk_virtio.a 00:02:41.079 SO libspdk_virtio.so.7.0 00:02:41.079 SYMLINK libspdk_virtio.so 00:02:41.079 CC lib/event/app.o 00:02:41.079 CC lib/event/reactor.o 00:02:41.079 CC lib/event/log_rpc.o 00:02:41.079 CC lib/event/app_rpc.o 00:02:41.079 CC lib/event/scheduler_static.o 00:02:41.646 LIB libspdk_fsdev.a 00:02:41.646 SO libspdk_fsdev.so.2.0 00:02:41.646 SYMLINK libspdk_fsdev.so 00:02:41.646 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:41.904 LIB libspdk_event.a 00:02:41.904 SO libspdk_event.so.14.0 00:02:41.904 SYMLINK libspdk_event.so 00:02:42.163 LIB libspdk_nvme.a 00:02:42.163 LIB libspdk_accel.a 00:02:42.163 SO libspdk_accel.so.16.0 00:02:42.163 SO libspdk_nvme.so.15.0 00:02:42.163 SYMLINK libspdk_accel.so 00:02:42.422 CC lib/bdev/bdev.o 00:02:42.422 CC lib/bdev/bdev_rpc.o 00:02:42.422 CC lib/bdev/bdev_zone.o 00:02:42.422 CC lib/bdev/part.o 
00:02:42.422 CC lib/bdev/scsi_nvme.o 00:02:42.422 SYMLINK libspdk_nvme.so 00:02:42.680 LIB libspdk_fuse_dispatcher.a 00:02:42.680 SO libspdk_fuse_dispatcher.so.1.0 00:02:42.680 SYMLINK libspdk_fuse_dispatcher.so 00:02:45.208 LIB libspdk_blob.a 00:02:45.208 SO libspdk_blob.so.11.0 00:02:45.208 SYMLINK libspdk_blob.so 00:02:45.466 CC lib/blobfs/blobfs.o 00:02:45.466 CC lib/blobfs/tree.o 00:02:45.466 CC lib/lvol/lvol.o 00:02:45.725 LIB libspdk_bdev.a 00:02:45.725 SO libspdk_bdev.so.17.0 00:02:45.983 SYMLINK libspdk_bdev.so 00:02:46.247 CC lib/scsi/dev.o 00:02:46.247 CC lib/nvmf/ctrlr.o 00:02:46.247 CC lib/scsi/lun.o 00:02:46.247 CC lib/ublk/ublk.o 00:02:46.247 CC lib/nbd/nbd.o 00:02:46.247 CC lib/nvmf/ctrlr_discovery.o 00:02:46.247 CC lib/ftl/ftl_core.o 00:02:46.247 CC lib/scsi/port.o 00:02:46.247 CC lib/ublk/ublk_rpc.o 00:02:46.247 CC lib/nvmf/ctrlr_bdev.o 00:02:46.247 CC lib/ftl/ftl_init.o 00:02:46.247 CC lib/scsi/scsi.o 00:02:46.247 CC lib/nbd/nbd_rpc.o 00:02:46.247 CC lib/scsi/scsi_bdev.o 00:02:46.247 CC lib/nvmf/subsystem.o 00:02:46.247 CC lib/ftl/ftl_layout.o 00:02:46.247 CC lib/scsi/scsi_pr.o 00:02:46.247 CC lib/nvmf/nvmf.o 00:02:46.247 CC lib/nvmf/nvmf_rpc.o 00:02:46.247 CC lib/ftl/ftl_io.o 00:02:46.247 CC lib/ftl/ftl_debug.o 00:02:46.247 CC lib/scsi/scsi_rpc.o 00:02:46.247 CC lib/ftl/ftl_sb.o 00:02:46.247 CC lib/scsi/task.o 00:02:46.247 CC lib/nvmf/transport.o 00:02:46.247 CC lib/nvmf/stubs.o 00:02:46.247 CC lib/nvmf/tcp.o 00:02:46.247 CC lib/ftl/ftl_l2p.o 00:02:46.247 CC lib/ftl/ftl_l2p_flat.o 00:02:46.247 CC lib/nvmf/rdma.o 00:02:46.247 CC lib/nvmf/mdns_server.o 00:02:46.247 CC lib/nvmf/auth.o 00:02:46.247 CC lib/ftl/ftl_nv_cache.o 00:02:46.247 CC lib/ftl/ftl_band.o 00:02:46.247 CC lib/ftl/ftl_writer.o 00:02:46.247 CC lib/ftl/ftl_band_ops.o 00:02:46.247 CC lib/ftl/ftl_rq.o 00:02:46.247 CC lib/ftl/ftl_reloc.o 00:02:46.247 CC lib/ftl/ftl_l2p_cache.o 00:02:46.247 CC lib/ftl/ftl_p2l.o 00:02:46.247 CC lib/ftl/ftl_p2l_log.o 00:02:46.247 CC lib/ftl/mngt/ftl_mngt.o 
00:02:46.247 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:46.247 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:46.247 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:46.247 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:46.506 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:46.506 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:46.506 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:46.506 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:46.506 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:46.506 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:46.506 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:46.506 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:46.506 CC lib/ftl/utils/ftl_conf.o 00:02:46.506 CC lib/ftl/utils/ftl_md.o 00:02:46.506 CC lib/ftl/utils/ftl_mempool.o 00:02:46.506 CC lib/ftl/utils/ftl_bitmap.o 00:02:46.506 CC lib/ftl/utils/ftl_property.o 00:02:46.768 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:46.768 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:46.768 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:46.768 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:46.768 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:46.768 LIB libspdk_blobfs.a 00:02:46.768 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:46.768 SO libspdk_blobfs.so.10.0 00:02:46.768 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:47.028 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:47.028 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:47.028 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:47.028 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:47.028 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:47.028 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:47.028 SYMLINK libspdk_blobfs.so 00:02:47.028 CC lib/ftl/base/ftl_base_dev.o 00:02:47.028 CC lib/ftl/base/ftl_base_bdev.o 00:02:47.028 CC lib/ftl/ftl_trace.o 00:02:47.028 LIB libspdk_lvol.a 00:02:47.028 LIB libspdk_nbd.a 00:02:47.028 SO libspdk_lvol.so.10.0 00:02:47.028 SO libspdk_nbd.so.7.0 00:02:47.287 SYMLINK libspdk_lvol.so 00:02:47.287 SYMLINK libspdk_nbd.so 00:02:47.287 LIB libspdk_scsi.a 00:02:47.287 SO libspdk_scsi.so.9.0 00:02:47.545 SYMLINK libspdk_scsi.so 00:02:47.545 LIB libspdk_ublk.a 
00:02:47.545 SO libspdk_ublk.so.3.0 00:02:47.545 SYMLINK libspdk_ublk.so 00:02:47.545 CC lib/vhost/vhost.o 00:02:47.545 CC lib/iscsi/conn.o 00:02:47.545 CC lib/vhost/vhost_rpc.o 00:02:47.545 CC lib/vhost/vhost_scsi.o 00:02:47.545 CC lib/iscsi/init_grp.o 00:02:47.545 CC lib/iscsi/iscsi.o 00:02:47.545 CC lib/vhost/vhost_blk.o 00:02:47.545 CC lib/vhost/rte_vhost_user.o 00:02:47.545 CC lib/iscsi/portal_grp.o 00:02:47.545 CC lib/iscsi/param.o 00:02:47.545 CC lib/iscsi/tgt_node.o 00:02:47.545 CC lib/iscsi/iscsi_subsystem.o 00:02:47.545 CC lib/iscsi/iscsi_rpc.o 00:02:47.545 CC lib/iscsi/task.o 00:02:48.112 LIB libspdk_ftl.a 00:02:48.112 SO libspdk_ftl.so.9.0 00:02:48.371 SYMLINK libspdk_ftl.so 00:02:48.937 LIB libspdk_vhost.a 00:02:49.196 SO libspdk_vhost.so.8.0 00:02:49.196 SYMLINK libspdk_vhost.so 00:02:49.456 LIB libspdk_iscsi.a 00:02:49.715 SO libspdk_iscsi.so.8.0 00:02:49.715 LIB libspdk_nvmf.a 00:02:49.715 SO libspdk_nvmf.so.20.0 00:02:49.715 SYMLINK libspdk_iscsi.so 00:02:50.025 SYMLINK libspdk_nvmf.so 00:02:50.322 CC module/env_dpdk/env_dpdk_rpc.o 00:02:50.322 CC module/scheduler/gscheduler/gscheduler.o 00:02:50.322 CC module/accel/error/accel_error.o 00:02:50.322 CC module/sock/posix/posix.o 00:02:50.322 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:50.322 CC module/accel/error/accel_error_rpc.o 00:02:50.322 CC module/keyring/file/keyring.o 00:02:50.322 CC module/accel/ioat/accel_ioat.o 00:02:50.322 CC module/blob/bdev/blob_bdev.o 00:02:50.322 CC module/accel/ioat/accel_ioat_rpc.o 00:02:50.322 CC module/keyring/file/keyring_rpc.o 00:02:50.322 CC module/keyring/linux/keyring.o 00:02:50.322 CC module/fsdev/aio/fsdev_aio.o 00:02:50.322 CC module/accel/dsa/accel_dsa.o 00:02:50.322 CC module/keyring/linux/keyring_rpc.o 00:02:50.322 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:50.322 CC module/accel/iaa/accel_iaa.o 00:02:50.322 CC module/accel/dsa/accel_dsa_rpc.o 00:02:50.322 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:50.322 CC 
module/fsdev/aio/linux_aio_mgr.o 00:02:50.322 CC module/accel/iaa/accel_iaa_rpc.o 00:02:50.322 LIB libspdk_env_dpdk_rpc.a 00:02:50.322 SO libspdk_env_dpdk_rpc.so.6.0 00:02:50.580 SYMLINK libspdk_env_dpdk_rpc.so 00:02:50.580 LIB libspdk_keyring_linux.a 00:02:50.580 LIB libspdk_keyring_file.a 00:02:50.580 LIB libspdk_scheduler_gscheduler.a 00:02:50.580 LIB libspdk_scheduler_dpdk_governor.a 00:02:50.580 SO libspdk_keyring_linux.so.1.0 00:02:50.580 SO libspdk_keyring_file.so.2.0 00:02:50.580 SO libspdk_scheduler_gscheduler.so.4.0 00:02:50.580 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:50.580 LIB libspdk_accel_ioat.a 00:02:50.580 LIB libspdk_scheduler_dynamic.a 00:02:50.580 LIB libspdk_accel_error.a 00:02:50.580 SYMLINK libspdk_keyring_linux.so 00:02:50.580 SO libspdk_accel_ioat.so.6.0 00:02:50.580 SYMLINK libspdk_keyring_file.so 00:02:50.580 LIB libspdk_accel_iaa.a 00:02:50.580 SYMLINK libspdk_scheduler_gscheduler.so 00:02:50.580 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:50.580 SO libspdk_scheduler_dynamic.so.4.0 00:02:50.580 SO libspdk_accel_error.so.2.0 00:02:50.580 SO libspdk_accel_iaa.so.3.0 00:02:50.580 SYMLINK libspdk_accel_ioat.so 00:02:50.580 SYMLINK libspdk_scheduler_dynamic.so 00:02:50.580 SYMLINK libspdk_accel_error.so 00:02:50.580 SYMLINK libspdk_accel_iaa.so 00:02:50.580 LIB libspdk_blob_bdev.a 00:02:50.580 LIB libspdk_accel_dsa.a 00:02:50.580 SO libspdk_blob_bdev.so.11.0 00:02:50.839 SO libspdk_accel_dsa.so.5.0 00:02:50.839 SYMLINK libspdk_blob_bdev.so 00:02:50.839 SYMLINK libspdk_accel_dsa.so 00:02:51.098 CC module/bdev/null/bdev_null.o 00:02:51.098 CC module/bdev/error/vbdev_error.o 00:02:51.098 CC module/bdev/malloc/bdev_malloc.o 00:02:51.098 CC module/bdev/raid/bdev_raid.o 00:02:51.098 CC module/bdev/error/vbdev_error_rpc.o 00:02:51.098 CC module/bdev/null/bdev_null_rpc.o 00:02:51.098 CC module/bdev/lvol/vbdev_lvol.o 00:02:51.098 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:51.098 CC module/bdev/raid/bdev_raid_rpc.o 00:02:51.098 CC 
module/bdev/gpt/gpt.o 00:02:51.098 CC module/bdev/raid/bdev_raid_sb.o 00:02:51.098 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:51.098 CC module/bdev/nvme/bdev_nvme.o 00:02:51.098 CC module/bdev/split/vbdev_split.o 00:02:51.098 CC module/bdev/passthru/vbdev_passthru.o 00:02:51.098 CC module/bdev/gpt/vbdev_gpt.o 00:02:51.098 CC module/bdev/split/vbdev_split_rpc.o 00:02:51.098 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:51.098 CC module/bdev/raid/raid0.o 00:02:51.098 CC module/bdev/raid/raid1.o 00:02:51.098 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:51.098 CC module/bdev/nvme/nvme_rpc.o 00:02:51.098 CC module/bdev/delay/vbdev_delay.o 00:02:51.098 CC module/bdev/nvme/bdev_mdns_client.o 00:02:51.098 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:51.098 CC module/bdev/raid/concat.o 00:02:51.098 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:51.098 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:51.098 CC module/bdev/nvme/vbdev_opal.o 00:02:51.098 CC module/blobfs/bdev/blobfs_bdev.o 00:02:51.098 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:51.098 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:51.098 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:51.098 CC module/bdev/ftl/bdev_ftl.o 00:02:51.098 CC module/bdev/aio/bdev_aio.o 00:02:51.098 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:51.098 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:51.098 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:51.098 CC module/bdev/aio/bdev_aio_rpc.o 00:02:51.098 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:51.098 CC module/bdev/iscsi/bdev_iscsi.o 00:02:51.098 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:51.356 LIB libspdk_bdev_split.a 00:02:51.356 LIB libspdk_blobfs_bdev.a 00:02:51.356 SO libspdk_blobfs_bdev.so.6.0 00:02:51.356 LIB libspdk_fsdev_aio.a 00:02:51.356 SO libspdk_bdev_split.so.6.0 00:02:51.356 SO libspdk_fsdev_aio.so.1.0 00:02:51.614 SYMLINK libspdk_blobfs_bdev.so 00:02:51.614 LIB libspdk_bdev_passthru.a 00:02:51.614 SYMLINK libspdk_bdev_split.so 00:02:51.614 LIB 
libspdk_bdev_ftl.a 00:02:51.614 LIB libspdk_sock_posix.a 00:02:51.615 SO libspdk_bdev_passthru.so.6.0 00:02:51.615 LIB libspdk_bdev_error.a 00:02:51.615 SO libspdk_bdev_ftl.so.6.0 00:02:51.615 SO libspdk_sock_posix.so.6.0 00:02:51.615 SO libspdk_bdev_error.so.6.0 00:02:51.615 SYMLINK libspdk_fsdev_aio.so 00:02:51.615 SYMLINK libspdk_bdev_passthru.so 00:02:51.615 LIB libspdk_bdev_null.a 00:02:51.615 LIB libspdk_bdev_zone_block.a 00:02:51.615 LIB libspdk_bdev_gpt.a 00:02:51.615 SYMLINK libspdk_bdev_ftl.so 00:02:51.615 SYMLINK libspdk_bdev_error.so 00:02:51.615 LIB libspdk_bdev_aio.a 00:02:51.615 SO libspdk_bdev_null.so.6.0 00:02:51.615 SYMLINK libspdk_sock_posix.so 00:02:51.615 SO libspdk_bdev_zone_block.so.6.0 00:02:51.615 SO libspdk_bdev_gpt.so.6.0 00:02:51.615 SO libspdk_bdev_aio.so.6.0 00:02:51.615 LIB libspdk_bdev_iscsi.a 00:02:51.615 SYMLINK libspdk_bdev_gpt.so 00:02:51.615 SYMLINK libspdk_bdev_null.so 00:02:51.615 SYMLINK libspdk_bdev_zone_block.so 00:02:51.615 LIB libspdk_bdev_malloc.a 00:02:51.615 SO libspdk_bdev_iscsi.so.6.0 00:02:51.615 SYMLINK libspdk_bdev_aio.so 00:02:51.615 SO libspdk_bdev_malloc.so.6.0 00:02:51.615 LIB libspdk_bdev_delay.a 00:02:51.873 SO libspdk_bdev_delay.so.6.0 00:02:51.873 SYMLINK libspdk_bdev_iscsi.so 00:02:51.873 SYMLINK libspdk_bdev_malloc.so 00:02:51.873 SYMLINK libspdk_bdev_delay.so 00:02:51.873 LIB libspdk_bdev_lvol.a 00:02:51.873 LIB libspdk_bdev_virtio.a 00:02:51.873 SO libspdk_bdev_lvol.so.6.0 00:02:51.873 SO libspdk_bdev_virtio.so.6.0 00:02:51.873 SYMLINK libspdk_bdev_lvol.so 00:02:51.873 SYMLINK libspdk_bdev_virtio.so 00:02:52.438 LIB libspdk_bdev_raid.a 00:02:52.696 SO libspdk_bdev_raid.so.6.0 00:02:52.696 SYMLINK libspdk_bdev_raid.so 00:02:54.594 LIB libspdk_bdev_nvme.a 00:02:54.594 SO libspdk_bdev_nvme.so.7.1 00:02:54.594 SYMLINK libspdk_bdev_nvme.so 00:02:55.161 CC module/event/subsystems/iobuf/iobuf.o 00:02:55.161 CC module/event/subsystems/sock/sock.o 00:02:55.161 CC module/event/subsystems/iobuf/iobuf_rpc.o 
00:02:55.161 CC module/event/subsystems/vmd/vmd.o 00:02:55.161 CC module/event/subsystems/scheduler/scheduler.o 00:02:55.161 CC module/event/subsystems/keyring/keyring.o 00:02:55.161 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:55.161 CC module/event/subsystems/fsdev/fsdev.o 00:02:55.161 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:55.161 LIB libspdk_event_keyring.a 00:02:55.161 LIB libspdk_event_vhost_blk.a 00:02:55.161 LIB libspdk_event_fsdev.a 00:02:55.161 LIB libspdk_event_scheduler.a 00:02:55.161 LIB libspdk_event_sock.a 00:02:55.161 LIB libspdk_event_vmd.a 00:02:55.161 SO libspdk_event_keyring.so.1.0 00:02:55.161 SO libspdk_event_fsdev.so.1.0 00:02:55.161 SO libspdk_event_vhost_blk.so.3.0 00:02:55.161 SO libspdk_event_scheduler.so.4.0 00:02:55.161 LIB libspdk_event_iobuf.a 00:02:55.161 SO libspdk_event_sock.so.5.0 00:02:55.161 SO libspdk_event_vmd.so.6.0 00:02:55.161 SO libspdk_event_iobuf.so.3.0 00:02:55.161 SYMLINK libspdk_event_keyring.so 00:02:55.161 SYMLINK libspdk_event_fsdev.so 00:02:55.161 SYMLINK libspdk_event_vhost_blk.so 00:02:55.161 SYMLINK libspdk_event_scheduler.so 00:02:55.161 SYMLINK libspdk_event_sock.so 00:02:55.161 SYMLINK libspdk_event_vmd.so 00:02:55.161 SYMLINK libspdk_event_iobuf.so 00:02:55.419 CC module/event/subsystems/accel/accel.o 00:02:55.677 LIB libspdk_event_accel.a 00:02:55.677 SO libspdk_event_accel.so.6.0 00:02:55.677 SYMLINK libspdk_event_accel.so 00:02:55.935 CC module/event/subsystems/bdev/bdev.o 00:02:55.935 LIB libspdk_event_bdev.a 00:02:55.935 SO libspdk_event_bdev.so.6.0 00:02:56.194 SYMLINK libspdk_event_bdev.so 00:02:56.194 CC module/event/subsystems/ublk/ublk.o 00:02:56.194 CC module/event/subsystems/scsi/scsi.o 00:02:56.194 CC module/event/subsystems/nbd/nbd.o 00:02:56.194 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:56.194 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:56.452 LIB libspdk_event_ublk.a 00:02:56.452 LIB libspdk_event_nbd.a 00:02:56.452 SO libspdk_event_ublk.so.3.0 00:02:56.452 LIB 
libspdk_event_scsi.a 00:02:56.452 SO libspdk_event_nbd.so.6.0 00:02:56.452 SO libspdk_event_scsi.so.6.0 00:02:56.452 SYMLINK libspdk_event_ublk.so 00:02:56.452 SYMLINK libspdk_event_nbd.so 00:02:56.452 SYMLINK libspdk_event_scsi.so 00:02:56.452 LIB libspdk_event_nvmf.a 00:02:56.452 SO libspdk_event_nvmf.so.6.0 00:02:56.710 SYMLINK libspdk_event_nvmf.so 00:02:56.710 CC module/event/subsystems/iscsi/iscsi.o 00:02:56.710 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:56.710 LIB libspdk_event_vhost_scsi.a 00:02:56.710 LIB libspdk_event_iscsi.a 00:02:56.710 SO libspdk_event_vhost_scsi.so.3.0 00:02:56.969 SO libspdk_event_iscsi.so.6.0 00:02:56.969 SYMLINK libspdk_event_vhost_scsi.so 00:02:56.969 SYMLINK libspdk_event_iscsi.so 00:02:56.969 SO libspdk.so.6.0 00:02:56.969 SYMLINK libspdk.so 00:02:57.231 CXX app/trace/trace.o 00:02:57.231 CC app/spdk_nvme_identify/identify.o 00:02:57.231 CC app/spdk_lspci/spdk_lspci.o 00:02:57.231 CC app/spdk_nvme_perf/perf.o 00:02:57.231 CC app/trace_record/trace_record.o 00:02:57.231 CC app/spdk_top/spdk_top.o 00:02:57.231 CC app/spdk_nvme_discover/discovery_aer.o 00:02:57.231 CC test/rpc_client/rpc_client_test.o 00:02:57.231 TEST_HEADER include/spdk/accel.h 00:02:57.231 TEST_HEADER include/spdk/accel_module.h 00:02:57.231 TEST_HEADER include/spdk/assert.h 00:02:57.231 TEST_HEADER include/spdk/barrier.h 00:02:57.231 TEST_HEADER include/spdk/base64.h 00:02:57.231 TEST_HEADER include/spdk/bdev.h 00:02:57.231 TEST_HEADER include/spdk/bdev_module.h 00:02:57.231 TEST_HEADER include/spdk/bdev_zone.h 00:02:57.231 TEST_HEADER include/spdk/bit_array.h 00:02:57.231 TEST_HEADER include/spdk/bit_pool.h 00:02:57.231 TEST_HEADER include/spdk/blob_bdev.h 00:02:57.231 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:57.231 TEST_HEADER include/spdk/blobfs.h 00:02:57.231 TEST_HEADER include/spdk/blob.h 00:02:57.231 TEST_HEADER include/spdk/conf.h 00:02:57.231 TEST_HEADER include/spdk/config.h 00:02:57.231 TEST_HEADER include/spdk/crc16.h 00:02:57.231 
TEST_HEADER include/spdk/cpuset.h 00:02:57.231 TEST_HEADER include/spdk/crc32.h 00:02:57.231 TEST_HEADER include/spdk/crc64.h 00:02:57.231 TEST_HEADER include/spdk/dif.h 00:02:57.231 TEST_HEADER include/spdk/dma.h 00:02:57.231 TEST_HEADER include/spdk/env_dpdk.h 00:02:57.231 TEST_HEADER include/spdk/endian.h 00:02:57.231 TEST_HEADER include/spdk/env.h 00:02:57.231 TEST_HEADER include/spdk/event.h 00:02:57.231 TEST_HEADER include/spdk/fd_group.h 00:02:57.231 TEST_HEADER include/spdk/file.h 00:02:57.231 TEST_HEADER include/spdk/fd.h 00:02:57.231 TEST_HEADER include/spdk/fsdev.h 00:02:57.231 TEST_HEADER include/spdk/fsdev_module.h 00:02:57.231 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:57.231 TEST_HEADER include/spdk/ftl.h 00:02:57.231 TEST_HEADER include/spdk/gpt_spec.h 00:02:57.231 TEST_HEADER include/spdk/hexlify.h 00:02:57.231 TEST_HEADER include/spdk/histogram_data.h 00:02:57.231 TEST_HEADER include/spdk/idxd.h 00:02:57.231 TEST_HEADER include/spdk/idxd_spec.h 00:02:57.231 TEST_HEADER include/spdk/init.h 00:02:57.231 TEST_HEADER include/spdk/ioat.h 00:02:57.231 TEST_HEADER include/spdk/ioat_spec.h 00:02:57.231 TEST_HEADER include/spdk/iscsi_spec.h 00:02:57.231 TEST_HEADER include/spdk/json.h 00:02:57.231 TEST_HEADER include/spdk/jsonrpc.h 00:02:57.231 TEST_HEADER include/spdk/keyring.h 00:02:57.231 TEST_HEADER include/spdk/keyring_module.h 00:02:57.231 TEST_HEADER include/spdk/likely.h 00:02:57.231 TEST_HEADER include/spdk/log.h 00:02:57.231 TEST_HEADER include/spdk/lvol.h 00:02:57.231 TEST_HEADER include/spdk/md5.h 00:02:57.231 TEST_HEADER include/spdk/memory.h 00:02:57.231 TEST_HEADER include/spdk/mmio.h 00:02:57.231 TEST_HEADER include/spdk/nbd.h 00:02:57.231 TEST_HEADER include/spdk/net.h 00:02:57.231 TEST_HEADER include/spdk/notify.h 00:02:57.231 TEST_HEADER include/spdk/nvme.h 00:02:57.231 TEST_HEADER include/spdk/nvme_intel.h 00:02:57.231 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:57.231 TEST_HEADER include/spdk/nvme_spec.h 00:02:57.231 TEST_HEADER 
include/spdk/nvme_ocssd_spec.h 00:02:57.231 TEST_HEADER include/spdk/nvme_zns.h 00:02:57.231 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:57.231 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:57.231 TEST_HEADER include/spdk/nvmf.h 00:02:57.231 TEST_HEADER include/spdk/nvmf_spec.h 00:02:57.231 TEST_HEADER include/spdk/nvmf_transport.h 00:02:57.231 TEST_HEADER include/spdk/opal.h 00:02:57.231 TEST_HEADER include/spdk/opal_spec.h 00:02:57.231 TEST_HEADER include/spdk/pci_ids.h 00:02:57.231 TEST_HEADER include/spdk/pipe.h 00:02:57.231 TEST_HEADER include/spdk/queue.h 00:02:57.231 TEST_HEADER include/spdk/reduce.h 00:02:57.231 TEST_HEADER include/spdk/scheduler.h 00:02:57.231 TEST_HEADER include/spdk/rpc.h 00:02:57.231 TEST_HEADER include/spdk/scsi.h 00:02:57.231 TEST_HEADER include/spdk/scsi_spec.h 00:02:57.231 TEST_HEADER include/spdk/sock.h 00:02:57.231 TEST_HEADER include/spdk/stdinc.h 00:02:57.231 TEST_HEADER include/spdk/string.h 00:02:57.231 TEST_HEADER include/spdk/thread.h 00:02:57.231 TEST_HEADER include/spdk/trace.h 00:02:57.231 TEST_HEADER include/spdk/trace_parser.h 00:02:57.231 TEST_HEADER include/spdk/tree.h 00:02:57.231 TEST_HEADER include/spdk/ublk.h 00:02:57.231 TEST_HEADER include/spdk/util.h 00:02:57.231 TEST_HEADER include/spdk/uuid.h 00:02:57.231 TEST_HEADER include/spdk/version.h 00:02:57.231 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:57.231 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:57.231 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:57.231 TEST_HEADER include/spdk/vhost.h 00:02:57.232 TEST_HEADER include/spdk/vmd.h 00:02:57.232 TEST_HEADER include/spdk/xor.h 00:02:57.232 TEST_HEADER include/spdk/zipf.h 00:02:57.232 CC app/spdk_dd/spdk_dd.o 00:02:57.232 CXX test/cpp_headers/accel.o 00:02:57.232 CXX test/cpp_headers/accel_module.o 00:02:57.232 CXX test/cpp_headers/assert.o 00:02:57.232 CC app/iscsi_tgt/iscsi_tgt.o 00:02:57.232 CXX test/cpp_headers/barrier.o 00:02:57.232 CXX test/cpp_headers/base64.o 00:02:57.232 CXX test/cpp_headers/bdev.o 
00:02:57.232 CXX test/cpp_headers/bdev_module.o
00:02:57.232 CXX test/cpp_headers/bdev_zone.o
00:02:57.232 CXX test/cpp_headers/bit_array.o
00:02:57.232 CXX test/cpp_headers/bit_pool.o
00:02:57.232 CXX test/cpp_headers/blob_bdev.o
00:02:57.232 CXX test/cpp_headers/blobfs_bdev.o
00:02:57.232 CXX test/cpp_headers/blobfs.o
00:02:57.232 CXX test/cpp_headers/blob.o
00:02:57.232 CXX test/cpp_headers/conf.o
00:02:57.232 CXX test/cpp_headers/config.o
00:02:57.232 CXX test/cpp_headers/cpuset.o
00:02:57.232 CXX test/cpp_headers/crc16.o
00:02:57.232 CC app/nvmf_tgt/nvmf_main.o
00:02:57.232 CXX test/cpp_headers/crc32.o
00:02:57.232 CC examples/ioat/verify/verify.o
00:02:57.232 CC examples/util/zipf/zipf.o
00:02:57.232 CC test/thread/poller_perf/poller_perf.o
00:02:57.232 CC test/app/jsoncat/jsoncat.o
00:02:57.232 CC examples/ioat/perf/perf.o
00:02:57.232 CC app/spdk_tgt/spdk_tgt.o
00:02:57.232 CC test/env/memory/memory_ut.o
00:02:57.232 CC app/fio/nvme/fio_plugin.o
00:02:57.232 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o
00:02:57.232 CC test/app/histogram_perf/histogram_perf.o
00:02:57.232 CC test/env/pci/pci_ut.o
00:02:57.232 CC test/env/vtophys/vtophys.o
00:02:57.232 CC test/app/stub/stub.o
00:02:57.495 CC test/dma/test_dma/test_dma.o
00:02:57.495 CC app/fio/bdev/fio_plugin.o
00:02:57.495 CC test/app/bdev_svc/bdev_svc.o
00:02:57.495 LINK spdk_lspci
00:02:57.495 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o
00:02:57.495 CC test/env/mem_callbacks/mem_callbacks.o
00:02:57.758 LINK rpc_client_test
00:02:57.758 LINK jsoncat
00:02:57.758 LINK spdk_nvme_discover
00:02:57.758 LINK poller_perf
00:02:57.758 LINK histogram_perf
00:02:57.758 LINK zipf
00:02:57.758 LINK interrupt_tgt
00:02:57.758 LINK vtophys
00:02:57.758 LINK env_dpdk_post_init
00:02:57.758 CXX test/cpp_headers/crc64.o
00:02:57.758 CXX test/cpp_headers/dif.o
00:02:57.758 CXX test/cpp_headers/dma.o
00:02:57.758 CXX test/cpp_headers/env_dpdk.o
00:02:57.758 CXX test/cpp_headers/endian.o
00:02:57.758 CXX test/cpp_headers/env.o
00:02:57.758 LINK nvmf_tgt
00:02:57.758 CXX test/cpp_headers/event.o
00:02:57.758 CXX test/cpp_headers/fd_group.o
00:02:57.758 CXX test/cpp_headers/fd.o
00:02:57.758 CXX test/cpp_headers/file.o
00:02:57.758 LINK iscsi_tgt
00:02:57.758 CXX test/cpp_headers/fsdev.o
00:02:57.758 CXX test/cpp_headers/fsdev_module.o
00:02:57.758 LINK stub
00:02:57.758 CXX test/cpp_headers/ftl.o
00:02:57.758 LINK spdk_trace_record
00:02:57.758 LINK bdev_svc
00:02:57.758 CXX test/cpp_headers/fuse_dispatcher.o
00:02:57.758 LINK spdk_tgt
00:02:57.758 CXX test/cpp_headers/gpt_spec.o
00:02:57.758 CXX test/cpp_headers/hexlify.o
00:02:57.758 LINK verify
00:02:57.758 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o
00:02:57.758 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o
00:02:57.758 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o
00:02:57.758 LINK ioat_perf
00:02:58.024 CXX test/cpp_headers/histogram_data.o
00:02:58.024 CXX test/cpp_headers/idxd.o
00:02:58.024 CXX test/cpp_headers/idxd_spec.o
00:02:58.024 CXX test/cpp_headers/init.o
00:02:58.024 CXX test/cpp_headers/ioat.o
00:02:58.024 CXX test/cpp_headers/ioat_spec.o
00:02:58.024 CXX test/cpp_headers/iscsi_spec.o
00:02:58.024 LINK spdk_dd
00:02:58.024 CXX test/cpp_headers/json.o
00:02:58.024 LINK spdk_trace
00:02:58.024 CXX test/cpp_headers/jsonrpc.o
00:02:58.286 CXX test/cpp_headers/keyring.o
00:02:58.286 CXX test/cpp_headers/keyring_module.o
00:02:58.286 CXX test/cpp_headers/likely.o
00:02:58.286 CXX test/cpp_headers/log.o
00:02:58.286 CXX test/cpp_headers/lvol.o
00:02:58.286 CXX test/cpp_headers/md5.o
00:02:58.286 CXX test/cpp_headers/memory.o
00:02:58.286 CXX test/cpp_headers/mmio.o
00:02:58.286 CXX test/cpp_headers/nbd.o
00:02:58.286 CXX test/cpp_headers/net.o
00:02:58.286 CXX test/cpp_headers/notify.o
00:02:58.286 CXX test/cpp_headers/nvme.o
00:02:58.286 CXX test/cpp_headers/nvme_intel.o
00:02:58.286 CXX test/cpp_headers/nvme_ocssd.o
00:02:58.286 CXX test/cpp_headers/nvme_ocssd_spec.o
00:02:58.286 CXX test/cpp_headers/nvme_spec.o
00:02:58.286 CC test/event/reactor/reactor.o
00:02:58.286 CC test/event/event_perf/event_perf.o
00:02:58.286 CC test/event/reactor_perf/reactor_perf.o
00:02:58.286 CXX test/cpp_headers/nvme_zns.o
00:02:58.286 LINK pci_ut
00:02:58.286 CC test/event/app_repeat/app_repeat.o
00:02:58.286 CXX test/cpp_headers/nvmf_cmd.o
00:02:58.286 CC examples/sock/hello_world/hello_sock.o
00:02:58.286 CXX test/cpp_headers/nvmf_fc_spec.o
00:02:58.286 CXX test/cpp_headers/nvmf.o
00:02:58.286 CXX test/cpp_headers/nvmf_spec.o
00:02:58.286 CXX test/cpp_headers/nvmf_transport.o
00:02:58.286 CC test/event/scheduler/scheduler.o
00:02:58.550 CC examples/thread/thread/thread_ex.o
00:02:58.550 CC examples/idxd/perf/perf.o
00:02:58.550 CC examples/vmd/lsvmd/lsvmd.o
00:02:58.550 CXX test/cpp_headers/opal.o
00:02:58.550 LINK nvme_fuzz
00:02:58.550 CXX test/cpp_headers/opal_spec.o
00:02:58.550 CC examples/vmd/led/led.o
00:02:58.550 CXX test/cpp_headers/pci_ids.o
00:02:58.550 CXX test/cpp_headers/pipe.o
00:02:58.550 LINK test_dma
00:02:58.550 CXX test/cpp_headers/queue.o
00:02:58.550 CXX test/cpp_headers/reduce.o
00:02:58.550 CXX test/cpp_headers/rpc.o
00:02:58.550 CXX test/cpp_headers/scheduler.o
00:02:58.550 CXX test/cpp_headers/scsi.o
00:02:58.550 CXX test/cpp_headers/scsi_spec.o
00:02:58.550 CXX test/cpp_headers/sock.o
00:02:58.550 CXX test/cpp_headers/stdinc.o
00:02:58.550 LINK reactor
00:02:58.550 CXX test/cpp_headers/string.o
00:02:58.550 LINK reactor_perf
00:02:58.550 LINK spdk_bdev
00:02:58.550 LINK event_perf
00:02:58.550 CXX test/cpp_headers/thread.o
00:02:58.809 LINK app_repeat
00:02:58.809 CXX test/cpp_headers/trace.o
00:02:58.809 CXX test/cpp_headers/trace_parser.o
00:02:58.809 LINK spdk_nvme
00:02:58.809 CXX test/cpp_headers/tree.o
00:02:58.809 CXX test/cpp_headers/ublk.o
00:02:58.809 LINK lsvmd
00:02:58.809 CXX test/cpp_headers/util.o
00:02:58.809 LINK mem_callbacks
00:02:58.809 CXX test/cpp_headers/uuid.o
00:02:58.809 CXX test/cpp_headers/version.o
00:02:58.809 CXX test/cpp_headers/vfio_user_pci.o
00:02:58.809 CC app/vhost/vhost.o
00:02:58.809 CXX test/cpp_headers/vfio_user_spec.o
00:02:58.809 CXX test/cpp_headers/vhost.o
00:02:58.809 CXX test/cpp_headers/vmd.o
00:02:58.809 CXX test/cpp_headers/xor.o
00:02:58.809 CXX test/cpp_headers/zipf.o
00:02:58.809 LINK vhost_fuzz
00:02:58.809 LINK led
00:02:58.809 LINK scheduler
00:02:59.069 LINK thread
00:02:59.069 LINK hello_sock
00:02:59.069 LINK spdk_nvme_perf
00:02:59.069 CC test/nvme/sgl/sgl.o
00:02:59.069 CC test/nvme/reserve/reserve.o
00:02:59.069 CC test/nvme/e2edp/nvme_dp.o
00:02:59.069 CC test/nvme/connect_stress/connect_stress.o
00:02:59.069 CC test/nvme/startup/startup.o
00:02:59.069 CC test/nvme/aer/aer.o
00:02:59.069 CC test/nvme/simple_copy/simple_copy.o
00:02:59.069 CC test/nvme/reset/reset.o
00:02:59.069 CC test/nvme/overhead/overhead.o
00:02:59.069 LINK vhost
00:02:59.069 CC test/nvme/doorbell_aers/doorbell_aers.o
00:02:59.069 CC test/nvme/err_injection/err_injection.o
00:02:59.069 CC test/nvme/compliance/nvme_compliance.o
00:02:59.069 CC test/nvme/cuse/cuse.o
00:02:59.069 CC test/nvme/fdp/fdp.o
00:02:59.069 CC test/nvme/boot_partition/boot_partition.o
00:02:59.069 CC test/nvme/fused_ordering/fused_ordering.o
00:02:59.327 LINK idxd_perf
00:02:59.327 CC test/blobfs/mkfs/mkfs.o
00:02:59.327 CC test/accel/dif/dif.o
00:02:59.327 LINK spdk_nvme_identify
00:02:59.327 CC test/lvol/esnap/esnap.o
00:02:59.327 LINK spdk_top
00:02:59.327 LINK boot_partition
00:02:59.584 CC examples/nvme/pmr_persistence/pmr_persistence.o
00:02:59.584 CC examples/nvme/cmb_copy/cmb_copy.o
00:02:59.584 CC examples/nvme/arbitration/arbitration.o
00:02:59.584 CC examples/nvme/abort/abort.o
00:02:59.584 CC examples/nvme/hotplug/hotplug.o
00:02:59.584 CC examples/nvme/hello_world/hello_world.o
00:02:59.584 CC examples/nvme/reconnect/reconnect.o
00:02:59.584 CC examples/nvme/nvme_manage/nvme_manage.o
00:02:59.584 LINK doorbell_aers
00:02:59.584 LINK connect_stress
00:02:59.584 LINK mkfs
00:02:59.584 LINK reserve
00:02:59.584 LINK fused_ordering
00:02:59.584 CC examples/accel/perf/accel_perf.o
00:02:59.584 LINK simple_copy
00:02:59.584 LINK startup
00:02:59.584 LINK err_injection
00:02:59.584 CC examples/blob/hello_world/hello_blob.o
00:02:59.584 CC examples/fsdev/hello_world/hello_fsdev.o
00:02:59.584 CC examples/blob/cli/blobcli.o
00:02:59.584 LINK nvme_dp
00:02:59.842 LINK fdp
00:02:59.842 LINK reset
00:02:59.842 LINK pmr_persistence
00:02:59.842 LINK sgl
00:02:59.842 LINK aer
00:02:59.842 LINK hotplug
00:02:59.842 LINK overhead
00:02:59.842 LINK cmb_copy
00:02:59.842 LINK nvme_compliance
00:02:59.842 LINK hello_blob
00:02:59.842 LINK hello_world
00:02:59.842 LINK memory_ut
00:02:59.842 LINK hello_fsdev
00:03:00.099 LINK abort
00:03:00.099 LINK arbitration
00:03:00.099 LINK reconnect
00:03:00.099 LINK nvme_manage
00:03:00.356 LINK dif
00:03:00.356 LINK accel_perf
00:03:00.356 LINK blobcli
00:03:00.613 CC test/bdev/bdevio/bdevio.o
00:03:00.613 CC examples/bdev/hello_world/hello_bdev.o
00:03:00.613 CC examples/bdev/bdevperf/bdevperf.o
00:03:00.870 LINK hello_bdev
00:03:00.870 LINK iscsi_fuzz
00:03:01.128 LINK bdevio
00:03:01.128 LINK cuse
00:03:01.694 LINK bdevperf
00:03:01.952 CC examples/nvmf/nvmf/nvmf.o
00:03:02.519 LINK nvmf
00:03:06.719 LINK esnap
00:03:06.719
00:03:06.719 real 1m20.462s
00:03:06.719 user 13m9.794s
00:03:06.719 sys 2m33.770s
00:03:06.719 11:31:32 make -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:03:06.719 11:31:32 make -- common/autotest_common.sh@10 -- $ set +x
00:03:06.719 ************************************
00:03:06.719 END TEST make
00:03:06.719 ************************************
00:03:06.719 11:31:32 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources
00:03:06.719 11:31:32 -- pm/common@29 -- $ signal_monitor_resources TERM
00:03:06.719 11:31:32 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:03:06.719 11:31:32 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:06.719 11:31:32 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:03:06.719 11:31:32 -- pm/common@44 -- $ pid=2740160
00:03:06.719 11:31:32 -- pm/common@50 -- $ kill -TERM 2740160
00:03:06.719 11:31:32 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:06.719 11:31:32 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:03:06.719 11:31:32 -- pm/common@44 -- $ pid=2740162
00:03:06.719 11:31:32 -- pm/common@50 -- $ kill -TERM 2740162
00:03:06.719 11:31:32 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:06.719 11:31:32 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:03:06.719 11:31:32 -- pm/common@44 -- $ pid=2740164
00:03:06.719 11:31:32 -- pm/common@50 -- $ kill -TERM 2740164
00:03:06.719 11:31:32 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:06.719 11:31:32 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:03:06.719 11:31:32 -- pm/common@44 -- $ pid=2740197
00:03:06.719 11:31:32 -- pm/common@50 -- $ sudo -E kill -TERM 2740197
00:03:06.719 11:31:32 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 ))
00:03:06.719 11:31:32 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:03:06.719 11:31:32 -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:03:06.719 11:31:32 -- common/autotest_common.sh@1693 -- # lcov --version
00:03:06.719 11:31:32 -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:03:06.719 11:31:32 -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:03:06.719 11:31:32 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:03:06.719 11:31:32 -- scripts/common.sh@333 -- # local ver1 ver1_l
00:03:06.719 11:31:32 -- scripts/common.sh@334 -- # local ver2 ver2_l
00:03:06.719 11:31:32 -- scripts/common.sh@336 -- # IFS=.-:
00:03:06.719 11:31:32 -- scripts/common.sh@336 -- # read -ra ver1
00:03:06.719 11:31:32 -- scripts/common.sh@337 -- # IFS=.-:
00:03:06.719 11:31:32 -- scripts/common.sh@337 -- # read -ra ver2
00:03:06.719 11:31:32 -- scripts/common.sh@338 -- # local 'op=<'
00:03:06.719 11:31:32 -- scripts/common.sh@340 -- # ver1_l=2
00:03:06.719 11:31:32 -- scripts/common.sh@341 -- # ver2_l=1
00:03:06.719 11:31:32 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:03:06.719 11:31:32 -- scripts/common.sh@344 -- # case "$op" in
00:03:06.719 11:31:32 -- scripts/common.sh@345 -- # : 1
00:03:06.719 11:31:32 -- scripts/common.sh@364 -- # (( v = 0 ))
00:03:06.719 11:31:32 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:03:06.719 11:31:32 -- scripts/common.sh@365 -- # decimal 1
00:03:06.719 11:31:32 -- scripts/common.sh@353 -- # local d=1
00:03:06.719 11:31:32 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:03:06.719 11:31:32 -- scripts/common.sh@355 -- # echo 1
00:03:06.719 11:31:32 -- scripts/common.sh@365 -- # ver1[v]=1
00:03:06.719 11:31:32 -- scripts/common.sh@366 -- # decimal 2
00:03:06.719 11:31:32 -- scripts/common.sh@353 -- # local d=2
00:03:06.719 11:31:32 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:03:06.719 11:31:32 -- scripts/common.sh@355 -- # echo 2
00:03:06.719 11:31:32 -- scripts/common.sh@366 -- # ver2[v]=2
00:03:06.719 11:31:32 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:03:06.719 11:31:32 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:03:06.719 11:31:32 -- scripts/common.sh@368 -- # return 0
00:03:06.719 11:31:32 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:03:06.719 11:31:32 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:03:06.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:06.719 --rc genhtml_branch_coverage=1
00:03:06.719 --rc genhtml_function_coverage=1
00:03:06.719 --rc genhtml_legend=1
00:03:06.719 --rc geninfo_all_blocks=1
00:03:06.719 --rc geninfo_unexecuted_blocks=1
00:03:06.719
00:03:06.719 '
00:03:06.719 11:31:32 -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:03:06.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:06.719 --rc genhtml_branch_coverage=1
00:03:06.719 --rc genhtml_function_coverage=1
00:03:06.719 --rc genhtml_legend=1
00:03:06.719 --rc geninfo_all_blocks=1
00:03:06.719 --rc geninfo_unexecuted_blocks=1
00:03:06.719
00:03:06.719 '
00:03:06.719 11:31:32 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:03:06.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:06.719 --rc genhtml_branch_coverage=1
00:03:06.719 --rc genhtml_function_coverage=1
00:03:06.719 --rc genhtml_legend=1
00:03:06.719 --rc geninfo_all_blocks=1
00:03:06.719 --rc geninfo_unexecuted_blocks=1
00:03:06.719
00:03:06.719 '
00:03:06.719 11:31:32 -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:03:06.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:06.719 --rc genhtml_branch_coverage=1
00:03:06.719 --rc genhtml_function_coverage=1
00:03:06.719 --rc genhtml_legend=1
00:03:06.719 --rc geninfo_all_blocks=1
00:03:06.719 --rc geninfo_unexecuted_blocks=1
00:03:06.719
00:03:06.719 '
00:03:06.719 11:31:32 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:03:06.719 11:31:32 -- nvmf/common.sh@7 -- # uname -s
00:03:06.719 11:31:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:03:06.719 11:31:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:03:06.719 11:31:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:03:06.719 11:31:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:03:06.719 11:31:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:03:06.719 11:31:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:03:06.719 11:31:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:03:06.719 11:31:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:03:06.719 11:31:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:03:06.719 11:31:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:03:06.719 11:31:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:03:06.719 11:31:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:03:06.719 11:31:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:03:06.719 11:31:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:03:06.719 11:31:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:03:06.719 11:31:32 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:03:06.719 11:31:32 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:03:06.719 11:31:32 -- scripts/common.sh@15 -- # shopt -s extglob
00:03:06.719 11:31:32 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:03:06.719 11:31:32 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:03:06.719 11:31:32 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:03:06.719 11:31:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:06.719 11:31:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:06.719 11:31:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:06.719 11:31:32 -- paths/export.sh@5 -- # export PATH
00:03:06.719 11:31:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:06.720 11:31:32 -- nvmf/common.sh@51 -- # : 0
00:03:06.720 11:31:32 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:03:06.720 11:31:32 -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:03:06.720 11:31:32 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:03:06.720 11:31:32 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:03:06.720 11:31:32 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:03:06.720 11:31:32 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:03:06.720 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:03:06.720 11:31:32 -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:03:06.720 11:31:32 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:03:06.720 11:31:32 -- nvmf/common.sh@55 -- # have_pci_nics=0
00:03:06.720 11:31:32 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']'
00:03:06.720 11:31:32 -- spdk/autotest.sh@32 -- # uname -s
00:03:06.720 11:31:32 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']'
00:03:06.720 11:31:32 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h'
00:03:06.720 11:31:32 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps
00:03:06.720 11:31:32 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t'
00:03:06.720 11:31:32 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps
00:03:06.720 11:31:32 -- spdk/autotest.sh@44 -- # modprobe nbd
00:03:06.720 11:31:32 -- spdk/autotest.sh@46 -- # type -P udevadm
00:03:06.720 11:31:32 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm
00:03:06.720 11:31:32 -- spdk/autotest.sh@48 -- # udevadm_pid=2800979
00:03:06.720 11:31:32 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property
00:03:06.720 11:31:32 -- spdk/autotest.sh@53 -- # start_monitor_resources
00:03:06.720 11:31:32 -- pm/common@17 -- # local monitor
00:03:06.720 11:31:32 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:03:06.720 11:31:32 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:03:06.720 11:31:32 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:03:06.720 11:31:32 -- pm/common@21 -- # date +%s
00:03:06.720 11:31:32 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:03:06.720 11:31:32 -- pm/common@21 -- # date +%s
00:03:06.720 11:31:32 -- pm/common@25 -- # sleep 1
00:03:06.720 11:31:32 -- pm/common@21 -- # date +%s
00:03:06.720 11:31:32 -- pm/common@21 -- # date +%s
00:03:06.720 11:31:32 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731925892
00:03:06.720 11:31:32 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731925892
00:03:06.720 11:31:32 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731925892
00:03:06.720 11:31:32 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731925892
00:03:06.720 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731925892_collect-vmstat.pm.log
00:03:06.720 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731925892_collect-cpu-load.pm.log
00:03:06.720 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731925892_collect-cpu-temp.pm.log
00:03:06.720 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731925892_collect-bmc-pm.bmc.pm.log
00:03:07.656 11:31:33 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT
00:03:07.656 11:31:33 -- spdk/autotest.sh@57 -- # timing_enter autotest
00:03:07.656 11:31:33 -- common/autotest_common.sh@726 -- # xtrace_disable
00:03:07.656 11:31:33 -- common/autotest_common.sh@10 -- # set +x
00:03:07.656 11:31:33 -- spdk/autotest.sh@59 -- # create_test_list
00:03:07.656 11:31:33 -- common/autotest_common.sh@752 -- # xtrace_disable
00:03:07.656 11:31:33 -- common/autotest_common.sh@10 -- # set +x
00:03:07.656 11:31:33 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh
00:03:07.656 11:31:33 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:03:07.656 11:31:33 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:03:07.656 11:31:33 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:03:07.656 11:31:33 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:03:07.656 11:31:33 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod
00:03:07.656 11:31:33 -- common/autotest_common.sh@1457 -- # uname
00:03:07.656 11:31:33 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']'
00:03:07.656 11:31:33 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf
00:03:07.656 11:31:33 -- common/autotest_common.sh@1477 -- # uname
00:03:07.656 11:31:33 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]]
00:03:07.656 11:31:33 -- spdk/autotest.sh@68 -- # [[ y == y ]]
00:03:07.656 11:31:33 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version
00:03:07.913 lcov: LCOV version 1.15
00:03:07.913 11:31:33 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info
00:03:46.613 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found
00:03:46.613 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno
00:03:53.169 11:32:18 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup
00:03:53.169 11:32:18 -- common/autotest_common.sh@726 -- # xtrace_disable
00:03:53.169 11:32:18 -- common/autotest_common.sh@10 -- # set +x
00:03:53.169 11:32:18 -- spdk/autotest.sh@78 -- # rm -f
00:03:53.169 11:32:18 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:54.104 0000:88:00.0 (8086 0a54): Already using the nvme driver
00:03:54.105 0000:00:04.7 (8086 0e27): Already using the ioatdma driver
00:03:54.105 0000:00:04.6 (8086 0e26): Already using the ioatdma driver
00:03:54.363 0000:00:04.5 (8086 0e25): Already using the ioatdma driver
00:03:54.363 0000:00:04.4 (8086 0e24): Already using the ioatdma driver
00:03:54.363 0000:00:04.3 (8086 0e23): Already using the ioatdma driver
00:03:54.363 0000:00:04.2 (8086 0e22): Already using the ioatdma driver
00:03:54.363 0000:00:04.1 (8086 0e21): Already using the ioatdma driver
00:03:54.363 0000:00:04.0 (8086 0e20): Already using the ioatdma driver
00:03:54.363 0000:80:04.7 (8086 0e27): Already using the ioatdma driver
00:03:54.363 0000:80:04.6 (8086 0e26): Already using the ioatdma driver
00:03:54.363 0000:80:04.5 (8086 0e25): Already using the ioatdma driver
00:03:54.363 0000:80:04.4 (8086 0e24): Already using the ioatdma driver
00:03:54.363 0000:80:04.3 (8086 0e23): Already using the ioatdma driver
00:03:54.363 0000:80:04.2 (8086 0e22): Already using the ioatdma driver
00:03:54.363 0000:80:04.1 (8086 0e21): Already using the ioatdma driver
00:03:54.363 0000:80:04.0 (8086 0e20): Already using the ioatdma driver
00:03:54.621 11:32:20 -- spdk/autotest.sh@83 -- # get_zoned_devs
00:03:54.621 11:32:20 -- common/autotest_common.sh@1657 -- # zoned_devs=()
00:03:54.621 11:32:20 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs
00:03:54.621 11:32:20 -- common/autotest_common.sh@1658 -- # local nvme bdf
00:03:54.621 11:32:20 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme*
00:03:54.621 11:32:20 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1
00:03:54.622 11:32:20 -- common/autotest_common.sh@1650 -- # local device=nvme0n1
00:03:54.622 11:32:20 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:03:54.622 11:32:20 -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:03:54.622 11:32:20 -- spdk/autotest.sh@85 -- # (( 0 > 0 ))
00:03:54.622 11:32:20 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:03:54.622 11:32:20 -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:03:54.622 11:32:20 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1
00:03:54.622 11:32:20 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt
00:03:54.622 11:32:20 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:03:54.622 No valid GPT data, bailing
00:03:54.622 11:32:20 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:03:54.622 11:32:20 -- scripts/common.sh@394 -- # pt=
00:03:54.622 11:32:20 -- scripts/common.sh@395 -- # return 1
00:03:54.622 11:32:20 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:03:54.622 1+0 records in
00:03:54.622 1+0 records out
00:03:54.622 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00304537 s, 344 MB/s
00:03:54.622 11:32:20 -- spdk/autotest.sh@105 -- # sync
00:03:54.622 11:32:20 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes
00:03:54.622 11:32:20 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:03:54.622 11:32:20 -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:03:56.523 11:32:22 -- spdk/autotest.sh@111 -- # uname -s
00:03:56.523 11:32:22 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]]
00:03:56.523 11:32:22 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]]
00:03:56.523 11:32:22 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:03:57.897 Hugepages
00:03:57.897 node hugesize free / total
00:03:57.897 node0 1048576kB 0 / 0
00:03:57.897 node0 2048kB 0 / 0
00:03:57.897 node1 1048576kB 0 / 0
00:03:57.897 node1 2048kB 0 / 0
00:03:57.897
00:03:57.897 Type BDF Vendor Device NUMA Driver Device Block devices
00:03:57.897 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - -
00:03:57.897 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - -
00:03:57.897 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - -
00:03:57.897 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - -
00:03:57.897 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - -
00:03:57.897 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - -
00:03:57.897 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - -
00:03:57.897 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - -
00:03:57.897 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - -
00:03:57.897 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - -
00:03:57.897 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - -
00:03:57.897 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - -
00:03:57.897 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - -
00:03:57.898 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - -
00:03:57.898 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - -
00:03:57.898 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - -
00:03:57.898 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:03:57.898 11:32:23 -- spdk/autotest.sh@117 -- # uname -s
00:03:57.898 11:32:23 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]]
00:03:57.898 11:32:23 -- spdk/autotest.sh@119 -- # nvme_namespace_revert
00:03:57.898 11:32:23 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:59.323 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci
00:03:59.323 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci
00:03:59.323 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci
00:03:59.323 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci
00:03:59.323 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci
00:03:59.323 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci
00:03:59.323 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci
00:03:59.323 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci
00:03:59.323 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci
00:03:59.323 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci
00:03:59.324 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci
00:03:59.324 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci
00:03:59.324 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci
00:03:59.324 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci
00:03:59.324 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci
00:03:59.324 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci
00:04:00.287 0000:88:00.0 (8086 0a54): nvme -> vfio-pci
00:04:00.287 11:32:25 -- common/autotest_common.sh@1517 -- # sleep 1
00:04:01.224 11:32:26 -- common/autotest_common.sh@1518 -- # bdfs=()
00:04:01.224 11:32:26 -- common/autotest_common.sh@1518 -- # local bdfs
00:04:01.224 11:32:26 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs))
00:04:01.224 11:32:26 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs
00:04:01.224 11:32:26 -- common/autotest_common.sh@1498 -- # bdfs=()
00:04:01.224 11:32:26 -- common/autotest_common.sh@1498 -- # local bdfs
00:04:01.224 11:32:27 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:04:01.224 11:32:27 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
00:04:01.224 11:32:27 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:04:01.224 11:32:27 -- common/autotest_common.sh@1500 -- # (( 1 == 0 ))
00:04:01.224 11:32:27 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0
00:04:01.224 11:32:27 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:04:02.600 Waiting for block devices as requested
00:04:02.600 0000:88:00.0 (8086 0a54): vfio-pci -> nvme
00:04:02.600 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma
00:04:02.859 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma
00:04:02.859 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma
00:04:02.859 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma
00:04:02.859 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma
00:04:03.117 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma
00:04:03.117 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma
00:04:03.117 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma
00:04:03.117 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma
00:04:03.375 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma
00:04:03.375 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma
00:04:03.375 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma
00:04:03.375 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma
00:04:03.633 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma
00:04:03.633 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma
00:04:03.633 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma
00:04:03.891 11:32:29 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}"
00:04:03.891 11:32:29 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0
00:04:03.891 11:32:29 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0
00:04:03.891 11:32:29 -- common/autotest_common.sh@1487 -- # grep 0000:88:00.0/nvme/nvme
00:04:03.891 11:32:29 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0
00:04:03.891 11:32:29 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]]
00:04:03.891 11:32:29 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0
00:04:03.891 11:32:29 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0
00:04:03.891 11:32:29 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0
00:04:03.891 11:32:29 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]]
00:04:03.891 11:32:29 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0
00:04:03.891 11:32:29 -- common/autotest_common.sh@1531 -- # grep oacs
00:04:03.891 11:32:29 -- common/autotest_common.sh@1531 -- # cut -d: -f2
00:04:03.891 11:32:29 -- common/autotest_common.sh@1531 -- # oacs=' 0xf'
00:04:03.891 11:32:29 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8
00:04:03.891 11:32:29 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]]
00:04:03.891 11:32:29 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0
00:04:03.891 11:32:29 -- common/autotest_common.sh@1540 -- # grep unvmcap
00:04:03.891 11:32:29 -- common/autotest_common.sh@1540 -- # cut -d: -f2
00:04:03.891 11:32:29 -- common/autotest_common.sh@1540 -- # unvmcap=' 0'
00:04:03.892 11:32:29 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]]
00:04:03.892 11:32:29 -- common/autotest_common.sh@1543 -- # continue
00:04:03.892 11:32:29 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup
00:04:03.892 11:32:29 -- common/autotest_common.sh@732 -- # xtrace_disable
00:04:03.892 11:32:29 -- common/autotest_common.sh@10 -- # set +x
00:04:03.892 11:32:29 -- spdk/autotest.sh@125 -- # timing_enter afterboot
00:04:03.892 11:32:29 -- common/autotest_common.sh@726 -- # xtrace_disable
00:04:03.892 11:32:29 -- common/autotest_common.sh@10 -- # set +x
00:04:03.892 11:32:29 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:05.268 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci
00:04:05.268 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci
00:04:05.268 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci
00:04:05.268 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci
00:04:05.268 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci
00:04:05.268 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci
00:04:05.268 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci
00:04:05.268 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci
00:04:05.268 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci
00:04:05.268 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci
00:04:05.268 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci
00:04:05.268 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci
00:04:05.268 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci
00:04:05.268 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci
00:04:05.268 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci
00:04:05.268 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci
00:04:06.207 0000:88:00.0 (8086 0a54): nvme -> vfio-pci
00:04:06.207 11:32:31 -- spdk/autotest.sh@127 -- # timing_exit afterboot
00:04:06.207 11:32:31 -- common/autotest_common.sh@732 -- # xtrace_disable
00:04:06.207 11:32:31 -- common/autotest_common.sh@10 -- # set +x
00:04:06.207 11:32:31 -- spdk/autotest.sh@131 -- # opal_revert_cleanup
00:04:06.207 11:32:31 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs
00:04:06.207 11:32:31 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54
00:04:06.207 11:32:31 -- common/autotest_common.sh@1563 -- # bdfs=()
00:04:06.207 11:32:31 -- common/autotest_common.sh@1563 -- # _bdfs=()
00:04:06.207 11:32:31 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs
00:04:06.207 11:32:31 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs))
00:04:06.207 11:32:31 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs
00:04:06.207 11:32:31 -- common/autotest_common.sh@1498 -- # bdfs=()
00:04:06.207 11:32:31 -- common/autotest_common.sh@1498 -- # local bdfs
00:04:06.207 11:32:31 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:04:06.207 11:32:31 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
00:04:06.207 11:32:31 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:04:06.207 11:32:32 -- common/autotest_common.sh@1500 -- # (( 1 == 0 ))
00:04:06.207 11:32:32 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0
00:04:06.207 11:32:32 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}"
00:04:06.207 11:32:32 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:88:00.0/device
00:04:06.207 11:32:32 -- common/autotest_common.sh@1566 -- # device=0x0a54
00:04:06.207 11:32:32 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]]
00:04:06.207 11:32:32 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf)
00:04:06.207 11:32:32 -- common/autotest_common.sh@1572 -- # (( 1 > 0 ))
00:04:06.207 11:32:32 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:88:00.0
00:04:06.207 11:32:32 -- common/autotest_common.sh@1579 -- # [[ -z 0000:88:00.0 ]]
00:04:06.207 11:32:32 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=2812478
00:04:06.207 11:32:32 -- common/autotest_common.sh@1583 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:06.207 11:32:32 -- common/autotest_common.sh@1585 -- # waitforlisten 2812478 00:04:06.207 11:32:32 -- common/autotest_common.sh@835 -- # '[' -z 2812478 ']' 00:04:06.207 11:32:32 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:06.207 11:32:32 -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:06.207 11:32:32 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:06.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:06.207 11:32:32 -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:06.207 11:32:32 -- common/autotest_common.sh@10 -- # set +x 00:04:06.465 [2024-11-18 11:32:32.177214] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:04:06.465 [2024-11-18 11:32:32.177364] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2812478 ] 00:04:06.465 [2024-11-18 11:32:32.321502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:06.723 [2024-11-18 11:32:32.459408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:07.658 11:32:33 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:07.658 11:32:33 -- common/autotest_common.sh@868 -- # return 0 00:04:07.658 11:32:33 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:04:07.658 11:32:33 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:04:07.658 11:32:33 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:04:10.941 nvme0n1 00:04:10.941 11:32:36 -- common/autotest_common.sh@1591 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:10.941 [2024-11-18 11:32:36.819045] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:04:10.941 [2024-11-18 11:32:36.819120] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:04:10.941 request: 00:04:10.941 { 00:04:10.941 "nvme_ctrlr_name": "nvme0", 00:04:10.941 "password": "test", 00:04:10.941 "method": "bdev_nvme_opal_revert", 00:04:10.941 "req_id": 1 00:04:10.941 } 00:04:10.941 Got JSON-RPC error response 00:04:10.941 response: 00:04:10.941 { 00:04:10.941 "code": -32603, 00:04:10.941 "message": "Internal error" 00:04:10.941 } 00:04:11.199 11:32:36 -- common/autotest_common.sh@1591 -- # true 00:04:11.199 11:32:36 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:04:11.199 11:32:36 -- common/autotest_common.sh@1595 -- # killprocess 2812478 00:04:11.199 11:32:36 -- common/autotest_common.sh@954 -- # '[' -z 2812478 ']' 00:04:11.199 11:32:36 -- common/autotest_common.sh@958 -- # kill -0 2812478 00:04:11.199 11:32:36 -- common/autotest_common.sh@959 -- # uname 00:04:11.199 11:32:36 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:11.199 11:32:36 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2812478 00:04:11.199 11:32:36 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:11.199 11:32:36 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:11.199 11:32:36 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2812478' 00:04:11.199 killing process with pid 2812478 00:04:11.199 11:32:36 -- common/autotest_common.sh@973 -- # kill 2812478 00:04:11.199 11:32:36 -- common/autotest_common.sh@978 -- # wait 2812478 00:04:15.385 11:32:40 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:15.385 11:32:40 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:15.385 11:32:40 -- spdk/autotest.sh@142 -- # 
[[ 0 -eq 1 ]] 00:04:15.385 11:32:40 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:15.385 11:32:40 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:15.385 11:32:40 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:15.385 11:32:40 -- common/autotest_common.sh@10 -- # set +x 00:04:15.385 11:32:40 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:15.385 11:32:40 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:15.385 11:32:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:15.385 11:32:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:15.385 11:32:40 -- common/autotest_common.sh@10 -- # set +x 00:04:15.385 ************************************ 00:04:15.385 START TEST env 00:04:15.385 ************************************ 00:04:15.385 11:32:40 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:15.385 * Looking for test storage... 
00:04:15.385 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:15.385 11:32:40 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:15.385 11:32:40 env -- common/autotest_common.sh@1693 -- # lcov --version 00:04:15.385 11:32:40 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:15.385 11:32:40 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:15.385 11:32:40 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:15.385 11:32:40 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:15.385 11:32:40 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:15.385 11:32:40 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:15.385 11:32:40 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:15.385 11:32:40 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:15.385 11:32:40 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:15.385 11:32:40 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:15.385 11:32:40 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:15.385 11:32:40 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:15.385 11:32:40 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:15.385 11:32:40 env -- scripts/common.sh@344 -- # case "$op" in 00:04:15.385 11:32:40 env -- scripts/common.sh@345 -- # : 1 00:04:15.385 11:32:40 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:15.385 11:32:40 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:15.385 11:32:40 env -- scripts/common.sh@365 -- # decimal 1 00:04:15.385 11:32:40 env -- scripts/common.sh@353 -- # local d=1 00:04:15.385 11:32:40 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:15.385 11:32:40 env -- scripts/common.sh@355 -- # echo 1 00:04:15.385 11:32:40 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:15.385 11:32:40 env -- scripts/common.sh@366 -- # decimal 2 00:04:15.385 11:32:40 env -- scripts/common.sh@353 -- # local d=2 00:04:15.385 11:32:40 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:15.385 11:32:40 env -- scripts/common.sh@355 -- # echo 2 00:04:15.385 11:32:40 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:15.385 11:32:40 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:15.385 11:32:40 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:15.385 11:32:40 env -- scripts/common.sh@368 -- # return 0 00:04:15.385 11:32:40 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:15.385 11:32:40 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:15.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.385 --rc genhtml_branch_coverage=1 00:04:15.385 --rc genhtml_function_coverage=1 00:04:15.385 --rc genhtml_legend=1 00:04:15.385 --rc geninfo_all_blocks=1 00:04:15.385 --rc geninfo_unexecuted_blocks=1 00:04:15.385 00:04:15.385 ' 00:04:15.385 11:32:40 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:15.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.385 --rc genhtml_branch_coverage=1 00:04:15.385 --rc genhtml_function_coverage=1 00:04:15.385 --rc genhtml_legend=1 00:04:15.385 --rc geninfo_all_blocks=1 00:04:15.385 --rc geninfo_unexecuted_blocks=1 00:04:15.385 00:04:15.385 ' 00:04:15.385 11:32:40 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:15.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:15.385 --rc genhtml_branch_coverage=1 00:04:15.385 --rc genhtml_function_coverage=1 00:04:15.385 --rc genhtml_legend=1 00:04:15.385 --rc geninfo_all_blocks=1 00:04:15.385 --rc geninfo_unexecuted_blocks=1 00:04:15.385 00:04:15.385 ' 00:04:15.385 11:32:40 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:15.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.385 --rc genhtml_branch_coverage=1 00:04:15.385 --rc genhtml_function_coverage=1 00:04:15.385 --rc genhtml_legend=1 00:04:15.385 --rc geninfo_all_blocks=1 00:04:15.385 --rc geninfo_unexecuted_blocks=1 00:04:15.385 00:04:15.385 ' 00:04:15.385 11:32:40 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:15.385 11:32:40 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:15.385 11:32:40 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:15.385 11:32:40 env -- common/autotest_common.sh@10 -- # set +x 00:04:15.385 ************************************ 00:04:15.385 START TEST env_memory 00:04:15.385 ************************************ 00:04:15.385 11:32:40 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:15.385 00:04:15.385 00:04:15.385 CUnit - A unit testing framework for C - Version 2.1-3 00:04:15.385 http://cunit.sourceforge.net/ 00:04:15.385 00:04:15.385 00:04:15.385 Suite: memory 00:04:15.385 Test: alloc and free memory map ...[2024-11-18 11:32:40.798279] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:15.385 passed 00:04:15.385 Test: mem map translation ...[2024-11-18 11:32:40.842434] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:15.385 [2024-11-18 
11:32:40.842497] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:15.386 [2024-11-18 11:32:40.842597] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:15.386 [2024-11-18 11:32:40.842630] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:15.386 passed 00:04:15.386 Test: mem map registration ...[2024-11-18 11:32:40.913653] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:15.386 [2024-11-18 11:32:40.913702] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:15.386 passed 00:04:15.386 Test: mem map adjacent registrations ...passed 00:04:15.386 00:04:15.386 Run Summary: Type Total Ran Passed Failed Inactive 00:04:15.386 suites 1 1 n/a 0 0 00:04:15.386 tests 4 4 4 0 0 00:04:15.386 asserts 152 152 152 0 n/a 00:04:15.386 00:04:15.386 Elapsed time = 0.246 seconds 00:04:15.386 00:04:15.386 real 0m0.267s 00:04:15.386 user 0m0.251s 00:04:15.386 sys 0m0.015s 00:04:15.386 11:32:41 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:15.386 11:32:41 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:15.386 ************************************ 00:04:15.386 END TEST env_memory 00:04:15.386 ************************************ 00:04:15.386 11:32:41 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:15.386 11:32:41 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 
']' 00:04:15.386 11:32:41 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:15.386 11:32:41 env -- common/autotest_common.sh@10 -- # set +x 00:04:15.386 ************************************ 00:04:15.386 START TEST env_vtophys 00:04:15.386 ************************************ 00:04:15.386 11:32:41 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:15.386 EAL: lib.eal log level changed from notice to debug 00:04:15.386 EAL: Detected lcore 0 as core 0 on socket 0 00:04:15.386 EAL: Detected lcore 1 as core 1 on socket 0 00:04:15.386 EAL: Detected lcore 2 as core 2 on socket 0 00:04:15.386 EAL: Detected lcore 3 as core 3 on socket 0 00:04:15.386 EAL: Detected lcore 4 as core 4 on socket 0 00:04:15.386 EAL: Detected lcore 5 as core 5 on socket 0 00:04:15.386 EAL: Detected lcore 6 as core 8 on socket 0 00:04:15.386 EAL: Detected lcore 7 as core 9 on socket 0 00:04:15.386 EAL: Detected lcore 8 as core 10 on socket 0 00:04:15.386 EAL: Detected lcore 9 as core 11 on socket 0 00:04:15.386 EAL: Detected lcore 10 as core 12 on socket 0 00:04:15.386 EAL: Detected lcore 11 as core 13 on socket 0 00:04:15.386 EAL: Detected lcore 12 as core 0 on socket 1 00:04:15.386 EAL: Detected lcore 13 as core 1 on socket 1 00:04:15.386 EAL: Detected lcore 14 as core 2 on socket 1 00:04:15.386 EAL: Detected lcore 15 as core 3 on socket 1 00:04:15.386 EAL: Detected lcore 16 as core 4 on socket 1 00:04:15.386 EAL: Detected lcore 17 as core 5 on socket 1 00:04:15.386 EAL: Detected lcore 18 as core 8 on socket 1 00:04:15.386 EAL: Detected lcore 19 as core 9 on socket 1 00:04:15.386 EAL: Detected lcore 20 as core 10 on socket 1 00:04:15.386 EAL: Detected lcore 21 as core 11 on socket 1 00:04:15.386 EAL: Detected lcore 22 as core 12 on socket 1 00:04:15.386 EAL: Detected lcore 23 as core 13 on socket 1 00:04:15.386 EAL: Detected lcore 24 as core 0 on socket 0 00:04:15.386 EAL: Detected lcore 25 as core 
1 on socket 0 00:04:15.386 EAL: Detected lcore 26 as core 2 on socket 0 00:04:15.386 EAL: Detected lcore 27 as core 3 on socket 0 00:04:15.386 EAL: Detected lcore 28 as core 4 on socket 0 00:04:15.386 EAL: Detected lcore 29 as core 5 on socket 0 00:04:15.386 EAL: Detected lcore 30 as core 8 on socket 0 00:04:15.386 EAL: Detected lcore 31 as core 9 on socket 0 00:04:15.386 EAL: Detected lcore 32 as core 10 on socket 0 00:04:15.386 EAL: Detected lcore 33 as core 11 on socket 0 00:04:15.386 EAL: Detected lcore 34 as core 12 on socket 0 00:04:15.386 EAL: Detected lcore 35 as core 13 on socket 0 00:04:15.386 EAL: Detected lcore 36 as core 0 on socket 1 00:04:15.386 EAL: Detected lcore 37 as core 1 on socket 1 00:04:15.386 EAL: Detected lcore 38 as core 2 on socket 1 00:04:15.386 EAL: Detected lcore 39 as core 3 on socket 1 00:04:15.386 EAL: Detected lcore 40 as core 4 on socket 1 00:04:15.386 EAL: Detected lcore 41 as core 5 on socket 1 00:04:15.386 EAL: Detected lcore 42 as core 8 on socket 1 00:04:15.386 EAL: Detected lcore 43 as core 9 on socket 1 00:04:15.386 EAL: Detected lcore 44 as core 10 on socket 1 00:04:15.386 EAL: Detected lcore 45 as core 11 on socket 1 00:04:15.386 EAL: Detected lcore 46 as core 12 on socket 1 00:04:15.386 EAL: Detected lcore 47 as core 13 on socket 1 00:04:15.386 EAL: Maximum logical cores by configuration: 128 00:04:15.386 EAL: Detected CPU lcores: 48 00:04:15.386 EAL: Detected NUMA nodes: 2 00:04:15.386 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:15.386 EAL: Detected shared linkage of DPDK 00:04:15.386 EAL: No shared files mode enabled, IPC will be disabled 00:04:15.386 EAL: Bus pci wants IOVA as 'DC' 00:04:15.386 EAL: Buses did not request a specific IOVA mode. 00:04:15.386 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:15.386 EAL: Selected IOVA mode 'VA' 00:04:15.386 EAL: Probing VFIO support... 
00:04:15.386 EAL: IOMMU type 1 (Type 1) is supported 00:04:15.386 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:15.386 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:15.386 EAL: VFIO support initialized 00:04:15.386 EAL: Ask a virtual area of 0x2e000 bytes 00:04:15.386 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:15.386 EAL: Setting up physically contiguous memory... 00:04:15.386 EAL: Setting maximum number of open files to 524288 00:04:15.386 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:15.386 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:15.386 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:15.386 EAL: Ask a virtual area of 0x61000 bytes 00:04:15.386 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:15.386 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:15.386 EAL: Ask a virtual area of 0x400000000 bytes 00:04:15.386 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:15.386 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:15.386 EAL: Ask a virtual area of 0x61000 bytes 00:04:15.386 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:15.386 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:15.386 EAL: Ask a virtual area of 0x400000000 bytes 00:04:15.386 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:15.386 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:15.386 EAL: Ask a virtual area of 0x61000 bytes 00:04:15.386 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:15.386 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:15.386 EAL: Ask a virtual area of 0x400000000 bytes 00:04:15.386 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:15.386 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:15.386 EAL: Ask a virtual area of 0x61000 bytes 00:04:15.386 EAL: 
Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:15.386 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:15.386 EAL: Ask a virtual area of 0x400000000 bytes 00:04:15.386 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:15.386 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:15.386 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:15.386 EAL: Ask a virtual area of 0x61000 bytes 00:04:15.386 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:15.386 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:15.386 EAL: Ask a virtual area of 0x400000000 bytes 00:04:15.386 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:15.386 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:15.386 EAL: Ask a virtual area of 0x61000 bytes 00:04:15.386 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:15.386 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:15.386 EAL: Ask a virtual area of 0x400000000 bytes 00:04:15.386 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:15.386 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:15.386 EAL: Ask a virtual area of 0x61000 bytes 00:04:15.386 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:15.386 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:15.386 EAL: Ask a virtual area of 0x400000000 bytes 00:04:15.386 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:15.386 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:15.386 EAL: Ask a virtual area of 0x61000 bytes 00:04:15.386 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:15.386 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:15.386 EAL: Ask a virtual area of 0x400000000 bytes 00:04:15.386 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 
00:04:15.386 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:15.386 EAL: Hugepages will be freed exactly as allocated. 00:04:15.386 EAL: No shared files mode enabled, IPC is disabled 00:04:15.386 EAL: No shared files mode enabled, IPC is disabled 00:04:15.386 EAL: TSC frequency is ~2700000 KHz 00:04:15.386 EAL: Main lcore 0 is ready (tid=7fb0a3a2ba40;cpuset=[0]) 00:04:15.386 EAL: Trying to obtain current memory policy. 00:04:15.386 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:15.386 EAL: Restoring previous memory policy: 0 00:04:15.386 EAL: request: mp_malloc_sync 00:04:15.386 EAL: No shared files mode enabled, IPC is disabled 00:04:15.386 EAL: Heap on socket 0 was expanded by 2MB 00:04:15.386 EAL: No shared files mode enabled, IPC is disabled 00:04:15.386 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:15.386 EAL: Mem event callback 'spdk:(nil)' registered 00:04:15.386 00:04:15.386 00:04:15.386 CUnit - A unit testing framework for C - Version 2.1-3 00:04:15.386 http://cunit.sourceforge.net/ 00:04:15.386 00:04:15.386 00:04:15.386 Suite: components_suite 00:04:15.953 Test: vtophys_malloc_test ...passed 00:04:15.953 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:15.953 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:15.953 EAL: Restoring previous memory policy: 4 00:04:15.953 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.953 EAL: request: mp_malloc_sync 00:04:15.953 EAL: No shared files mode enabled, IPC is disabled 00:04:15.953 EAL: Heap on socket 0 was expanded by 4MB 00:04:15.953 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.953 EAL: request: mp_malloc_sync 00:04:15.953 EAL: No shared files mode enabled, IPC is disabled 00:04:15.953 EAL: Heap on socket 0 was shrunk by 4MB 00:04:15.953 EAL: Trying to obtain current memory policy. 
00:04:15.953 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:15.953 EAL: Restoring previous memory policy: 4 00:04:15.953 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.953 EAL: request: mp_malloc_sync 00:04:15.953 EAL: No shared files mode enabled, IPC is disabled 00:04:15.953 EAL: Heap on socket 0 was expanded by 6MB 00:04:15.953 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.953 EAL: request: mp_malloc_sync 00:04:15.953 EAL: No shared files mode enabled, IPC is disabled 00:04:15.953 EAL: Heap on socket 0 was shrunk by 6MB 00:04:15.953 EAL: Trying to obtain current memory policy. 00:04:15.953 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:15.953 EAL: Restoring previous memory policy: 4 00:04:15.953 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.953 EAL: request: mp_malloc_sync 00:04:15.953 EAL: No shared files mode enabled, IPC is disabled 00:04:15.953 EAL: Heap on socket 0 was expanded by 10MB 00:04:15.953 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.953 EAL: request: mp_malloc_sync 00:04:15.953 EAL: No shared files mode enabled, IPC is disabled 00:04:15.953 EAL: Heap on socket 0 was shrunk by 10MB 00:04:15.953 EAL: Trying to obtain current memory policy. 00:04:15.953 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:15.953 EAL: Restoring previous memory policy: 4 00:04:15.953 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.953 EAL: request: mp_malloc_sync 00:04:15.953 EAL: No shared files mode enabled, IPC is disabled 00:04:15.953 EAL: Heap on socket 0 was expanded by 18MB 00:04:15.953 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.953 EAL: request: mp_malloc_sync 00:04:15.953 EAL: No shared files mode enabled, IPC is disabled 00:04:15.953 EAL: Heap on socket 0 was shrunk by 18MB 00:04:15.953 EAL: Trying to obtain current memory policy. 
00:04:15.953 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:15.953 EAL: Restoring previous memory policy: 4 00:04:15.953 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.953 EAL: request: mp_malloc_sync 00:04:15.953 EAL: No shared files mode enabled, IPC is disabled 00:04:15.953 EAL: Heap on socket 0 was expanded by 34MB 00:04:15.953 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.953 EAL: request: mp_malloc_sync 00:04:15.953 EAL: No shared files mode enabled, IPC is disabled 00:04:15.953 EAL: Heap on socket 0 was shrunk by 34MB 00:04:16.211 EAL: Trying to obtain current memory policy. 00:04:16.211 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:16.211 EAL: Restoring previous memory policy: 4 00:04:16.211 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.211 EAL: request: mp_malloc_sync 00:04:16.211 EAL: No shared files mode enabled, IPC is disabled 00:04:16.211 EAL: Heap on socket 0 was expanded by 66MB 00:04:16.211 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.211 EAL: request: mp_malloc_sync 00:04:16.211 EAL: No shared files mode enabled, IPC is disabled 00:04:16.211 EAL: Heap on socket 0 was shrunk by 66MB 00:04:16.469 EAL: Trying to obtain current memory policy. 00:04:16.469 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:16.469 EAL: Restoring previous memory policy: 4 00:04:16.469 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.469 EAL: request: mp_malloc_sync 00:04:16.469 EAL: No shared files mode enabled, IPC is disabled 00:04:16.469 EAL: Heap on socket 0 was expanded by 130MB 00:04:16.727 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.727 EAL: request: mp_malloc_sync 00:04:16.727 EAL: No shared files mode enabled, IPC is disabled 00:04:16.727 EAL: Heap on socket 0 was shrunk by 130MB 00:04:16.985 EAL: Trying to obtain current memory policy. 
00:04:16.985 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:16.985 EAL: Restoring previous memory policy: 4 00:04:16.985 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.986 EAL: request: mp_malloc_sync 00:04:16.986 EAL: No shared files mode enabled, IPC is disabled 00:04:16.986 EAL: Heap on socket 0 was expanded by 258MB 00:04:17.552 EAL: Calling mem event callback 'spdk:(nil)' 00:04:17.552 EAL: request: mp_malloc_sync 00:04:17.552 EAL: No shared files mode enabled, IPC is disabled 00:04:17.552 EAL: Heap on socket 0 was shrunk by 258MB 00:04:17.810 EAL: Trying to obtain current memory policy. 00:04:17.810 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:18.068 EAL: Restoring previous memory policy: 4 00:04:18.068 EAL: Calling mem event callback 'spdk:(nil)' 00:04:18.068 EAL: request: mp_malloc_sync 00:04:18.068 EAL: No shared files mode enabled, IPC is disabled 00:04:18.068 EAL: Heap on socket 0 was expanded by 514MB 00:04:19.002 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.002 EAL: request: mp_malloc_sync 00:04:19.002 EAL: No shared files mode enabled, IPC is disabled 00:04:19.002 EAL: Heap on socket 0 was shrunk by 514MB 00:04:19.937 EAL: Trying to obtain current memory policy. 
00:04:19.937 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:20.196 EAL: Restoring previous memory policy: 4 00:04:20.196 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.196 EAL: request: mp_malloc_sync 00:04:20.196 EAL: No shared files mode enabled, IPC is disabled 00:04:20.196 EAL: Heap on socket 0 was expanded by 1026MB 00:04:22.098 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.098 EAL: request: mp_malloc_sync 00:04:22.098 EAL: No shared files mode enabled, IPC is disabled 00:04:22.098 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:24.025 passed 00:04:24.025 00:04:24.025 Run Summary: Type Total Ran Passed Failed Inactive 00:04:24.025 suites 1 1 n/a 0 0 00:04:24.025 tests 2 2 2 0 0 00:04:24.025 asserts 497 497 497 0 n/a 00:04:24.025 00:04:24.025 Elapsed time = 8.226 seconds 00:04:24.025 EAL: Calling mem event callback 'spdk:(nil)' 00:04:24.025 EAL: request: mp_malloc_sync 00:04:24.025 EAL: No shared files mode enabled, IPC is disabled 00:04:24.025 EAL: Heap on socket 0 was shrunk by 2MB 00:04:24.025 EAL: No shared files mode enabled, IPC is disabled 00:04:24.025 EAL: No shared files mode enabled, IPC is disabled 00:04:24.025 EAL: No shared files mode enabled, IPC is disabled 00:04:24.025 00:04:24.025 real 0m8.506s 00:04:24.025 user 0m7.387s 00:04:24.025 sys 0m1.059s 00:04:24.025 11:32:49 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:24.025 11:32:49 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:24.025 ************************************ 00:04:24.025 END TEST env_vtophys 00:04:24.025 ************************************ 00:04:24.025 11:32:49 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:24.025 11:32:49 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:24.025 11:32:49 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:24.025 11:32:49 env -- common/autotest_common.sh@10 -- # set +x 00:04:24.025 
************************************ 00:04:24.025 START TEST env_pci 00:04:24.025 ************************************ 00:04:24.025 11:32:49 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:24.025 00:04:24.025 00:04:24.025 CUnit - A unit testing framework for C - Version 2.1-3 00:04:24.025 http://cunit.sourceforge.net/ 00:04:24.025 00:04:24.025 00:04:24.025 Suite: pci 00:04:24.025 Test: pci_hook ...[2024-11-18 11:32:49.647418] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2814578 has claimed it 00:04:24.025 EAL: Cannot find device (10000:00:01.0) 00:04:24.025 EAL: Failed to attach device on primary process 00:04:24.025 passed 00:04:24.025 00:04:24.025 Run Summary: Type Total Ran Passed Failed Inactive 00:04:24.025 suites 1 1 n/a 0 0 00:04:24.025 tests 1 1 1 0 0 00:04:24.025 asserts 25 25 25 0 n/a 00:04:24.025 00:04:24.025 Elapsed time = 0.051 seconds 00:04:24.025 00:04:24.025 real 0m0.104s 00:04:24.025 user 0m0.045s 00:04:24.025 sys 0m0.058s 00:04:24.025 11:32:49 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:24.025 11:32:49 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:24.025 ************************************ 00:04:24.025 END TEST env_pci 00:04:24.025 ************************************ 00:04:24.025 11:32:49 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:24.025 11:32:49 env -- env/env.sh@15 -- # uname 00:04:24.025 11:32:49 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:24.025 11:32:49 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:24.025 11:32:49 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:24.025 11:32:49 env -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:24.025 11:32:49 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:24.025 11:32:49 env -- common/autotest_common.sh@10 -- # set +x 00:04:24.025 ************************************ 00:04:24.026 START TEST env_dpdk_post_init 00:04:24.026 ************************************ 00:04:24.026 11:32:49 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:24.026 EAL: Detected CPU lcores: 48 00:04:24.026 EAL: Detected NUMA nodes: 2 00:04:24.026 EAL: Detected shared linkage of DPDK 00:04:24.026 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:24.026 EAL: Selected IOVA mode 'VA' 00:04:24.026 EAL: VFIO support initialized 00:04:24.026 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:24.285 EAL: Using IOMMU type 1 (Type 1) 00:04:24.285 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:04:24.285 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:04:24.285 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:04:24.285 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:04:24.285 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:04:24.285 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:04:24.285 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:04:24.285 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:04:24.285 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:04:24.285 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:04:24.285 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:04:24.285 EAL: Probe PCI driver: 
spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:04:24.544 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:04:24.544 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:04:24.544 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:04:24.544 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:04:25.111 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 00:04:28.391 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:04:28.391 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:04:28.650 Starting DPDK initialization... 00:04:28.650 Starting SPDK post initialization... 00:04:28.650 SPDK NVMe probe 00:04:28.650 Attaching to 0000:88:00.0 00:04:28.650 Attached to 0000:88:00.0 00:04:28.650 Cleaning up... 00:04:28.650 00:04:28.650 real 0m4.575s 00:04:28.650 user 0m3.132s 00:04:28.650 sys 0m0.496s 00:04:28.650 11:32:54 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:28.650 11:32:54 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:28.650 ************************************ 00:04:28.650 END TEST env_dpdk_post_init 00:04:28.650 ************************************ 00:04:28.650 11:32:54 env -- env/env.sh@26 -- # uname 00:04:28.650 11:32:54 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:28.650 11:32:54 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:28.650 11:32:54 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:28.650 11:32:54 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:28.650 11:32:54 env -- common/autotest_common.sh@10 -- # set +x 00:04:28.650 ************************************ 00:04:28.650 START TEST env_mem_callbacks 00:04:28.650 ************************************ 00:04:28.650 11:32:54 
env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:28.650 EAL: Detected CPU lcores: 48 00:04:28.650 EAL: Detected NUMA nodes: 2 00:04:28.650 EAL: Detected shared linkage of DPDK 00:04:28.650 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:28.650 EAL: Selected IOVA mode 'VA' 00:04:28.650 EAL: VFIO support initialized 00:04:28.650 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:28.650 00:04:28.650 00:04:28.650 CUnit - A unit testing framework for C - Version 2.1-3 00:04:28.650 http://cunit.sourceforge.net/ 00:04:28.650 00:04:28.650 00:04:28.650 Suite: memory 00:04:28.650 Test: test ... 00:04:28.650 register 0x200000200000 2097152 00:04:28.650 malloc 3145728 00:04:28.650 register 0x200000400000 4194304 00:04:28.650 buf 0x2000004fffc0 len 3145728 PASSED 00:04:28.650 malloc 64 00:04:28.650 buf 0x2000004ffec0 len 64 PASSED 00:04:28.650 malloc 4194304 00:04:28.650 register 0x200000800000 6291456 00:04:28.650 buf 0x2000009fffc0 len 4194304 PASSED 00:04:28.650 free 0x2000004fffc0 3145728 00:04:28.650 free 0x2000004ffec0 64 00:04:28.650 unregister 0x200000400000 4194304 PASSED 00:04:28.650 free 0x2000009fffc0 4194304 00:04:28.909 unregister 0x200000800000 6291456 PASSED 00:04:28.909 malloc 8388608 00:04:28.909 register 0x200000400000 10485760 00:04:28.909 buf 0x2000005fffc0 len 8388608 PASSED 00:04:28.909 free 0x2000005fffc0 8388608 00:04:28.909 unregister 0x200000400000 10485760 PASSED 00:04:28.909 passed 00:04:28.909 00:04:28.909 Run Summary: Type Total Ran Passed Failed Inactive 00:04:28.909 suites 1 1 n/a 0 0 00:04:28.909 tests 1 1 1 0 0 00:04:28.909 asserts 15 15 15 0 n/a 00:04:28.909 00:04:28.909 Elapsed time = 0.060 seconds 00:04:28.909 00:04:28.909 real 0m0.192s 00:04:28.909 user 0m0.110s 00:04:28.909 sys 0m0.081s 00:04:28.909 11:32:54 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:28.909 11:32:54 
env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:28.909 ************************************ 00:04:28.909 END TEST env_mem_callbacks 00:04:28.909 ************************************ 00:04:28.909 00:04:28.909 real 0m14.039s 00:04:28.909 user 0m11.122s 00:04:28.909 sys 0m1.928s 00:04:28.909 11:32:54 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:28.909 11:32:54 env -- common/autotest_common.sh@10 -- # set +x 00:04:28.909 ************************************ 00:04:28.909 END TEST env 00:04:28.909 ************************************ 00:04:28.909 11:32:54 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:28.909 11:32:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:28.909 11:32:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:28.909 11:32:54 -- common/autotest_common.sh@10 -- # set +x 00:04:28.909 ************************************ 00:04:28.909 START TEST rpc 00:04:28.909 ************************************ 00:04:28.909 11:32:54 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:28.909 * Looking for test storage... 
00:04:28.909 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:28.909 11:32:54 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:28.909 11:32:54 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:28.909 11:32:54 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:28.909 11:32:54 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:28.909 11:32:54 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:28.909 11:32:54 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:29.167 11:32:54 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:29.167 11:32:54 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:29.167 11:32:54 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:29.167 11:32:54 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:29.167 11:32:54 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:29.167 11:32:54 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:29.168 11:32:54 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:29.168 11:32:54 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:29.168 11:32:54 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:29.168 11:32:54 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:29.168 11:32:54 rpc -- scripts/common.sh@345 -- # : 1 00:04:29.168 11:32:54 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:29.168 11:32:54 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:29.168 11:32:54 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:29.168 11:32:54 rpc -- scripts/common.sh@353 -- # local d=1 00:04:29.168 11:32:54 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:29.168 11:32:54 rpc -- scripts/common.sh@355 -- # echo 1 00:04:29.168 11:32:54 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:29.168 11:32:54 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:29.168 11:32:54 rpc -- scripts/common.sh@353 -- # local d=2 00:04:29.168 11:32:54 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:29.168 11:32:54 rpc -- scripts/common.sh@355 -- # echo 2 00:04:29.168 11:32:54 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:29.168 11:32:54 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:29.168 11:32:54 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:29.168 11:32:54 rpc -- scripts/common.sh@368 -- # return 0 00:04:29.168 11:32:54 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:29.168 11:32:54 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:29.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.168 --rc genhtml_branch_coverage=1 00:04:29.168 --rc genhtml_function_coverage=1 00:04:29.168 --rc genhtml_legend=1 00:04:29.168 --rc geninfo_all_blocks=1 00:04:29.168 --rc geninfo_unexecuted_blocks=1 00:04:29.168 00:04:29.168 ' 00:04:29.168 11:32:54 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:29.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.168 --rc genhtml_branch_coverage=1 00:04:29.168 --rc genhtml_function_coverage=1 00:04:29.168 --rc genhtml_legend=1 00:04:29.168 --rc geninfo_all_blocks=1 00:04:29.168 --rc geninfo_unexecuted_blocks=1 00:04:29.168 00:04:29.168 ' 00:04:29.168 11:32:54 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:29.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:29.168 --rc genhtml_branch_coverage=1 00:04:29.168 --rc genhtml_function_coverage=1 00:04:29.168 --rc genhtml_legend=1 00:04:29.168 --rc geninfo_all_blocks=1 00:04:29.168 --rc geninfo_unexecuted_blocks=1 00:04:29.168 00:04:29.168 ' 00:04:29.168 11:32:54 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:29.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.168 --rc genhtml_branch_coverage=1 00:04:29.168 --rc genhtml_function_coverage=1 00:04:29.168 --rc genhtml_legend=1 00:04:29.168 --rc geninfo_all_blocks=1 00:04:29.168 --rc geninfo_unexecuted_blocks=1 00:04:29.168 00:04:29.168 ' 00:04:29.168 11:32:54 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2815369 00:04:29.168 11:32:54 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:29.168 11:32:54 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:29.168 11:32:54 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2815369 00:04:29.168 11:32:54 rpc -- common/autotest_common.sh@835 -- # '[' -z 2815369 ']' 00:04:29.168 11:32:54 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:29.168 11:32:54 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:29.168 11:32:54 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:29.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:29.168 11:32:54 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:29.168 11:32:54 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:29.168 [2024-11-18 11:32:54.907445] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:04:29.168 [2024-11-18 11:32:54.907641] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2815369 ] 00:04:29.168 [2024-11-18 11:32:55.043034] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:29.427 [2024-11-18 11:32:55.173556] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:29.427 [2024-11-18 11:32:55.173652] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2815369' to capture a snapshot of events at runtime. 00:04:29.427 [2024-11-18 11:32:55.173680] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:29.427 [2024-11-18 11:32:55.173701] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:29.427 [2024-11-18 11:32:55.173741] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2815369 for offline analysis/debug. 
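The xtrace above (the `lt 1.15 2` / `cmp_versions` calls from scripts/common.sh) decides whether the installed lcov predates version 2 by comparing dotted version strings field by field. A simplified, hypothetical re-creation of that comparison follows; the real helper also handles the other comparison operators:

```shell
#!/usr/bin/env bash
# Hypothetical simplification of the lt/cmp_versions helpers traced above:
# split both versions on dots/dashes and compare numerically, padding the
# shorter one with zeros. Returns 0 (true) when $1 < $2.
lt() {
  local -a a b
  IFS=.- read -ra a <<<"$1"
  IFS=.- read -ra b <<<"$2"
  local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
  for ((i = 0; i < n; i++)); do
    local x=${a[i]:-0} y=${b[i]:-0}
    (( x < y )) && return 0
    (( x > y )) && return 1
  done
  return 1  # equal versions are not "less than"
}

lt 1.15 2 && echo "lcov older than 2"
```

This is why the log then exports the lcov 1.x-compatible `--rc lcov_branch_coverage=1` option strings rather than the 2.x spellings.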
00:04:29.427 [2024-11-18 11:32:55.175262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.360 11:32:56 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:30.360 11:32:56 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:30.360 11:32:56 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:30.361 11:32:56 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:30.361 11:32:56 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:30.361 11:32:56 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:30.361 11:32:56 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:30.361 11:32:56 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:30.361 11:32:56 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.361 ************************************ 00:04:30.361 START TEST rpc_integrity 00:04:30.361 ************************************ 00:04:30.361 11:32:56 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:30.361 11:32:56 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:30.361 11:32:56 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.361 11:32:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:30.361 11:32:56 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.361 11:32:56 rpc.rpc_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:04:30.361 11:32:56 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:30.361 11:32:56 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:30.361 11:32:56 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:30.361 11:32:56 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.361 11:32:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:30.361 11:32:56 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.361 11:32:56 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:30.361 11:32:56 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:30.361 11:32:56 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.361 11:32:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:30.361 11:32:56 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.361 11:32:56 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:30.361 { 00:04:30.361 "name": "Malloc0", 00:04:30.361 "aliases": [ 00:04:30.361 "940f0ab5-3c5e-473b-bb4a-c01f93b9a3c9" 00:04:30.361 ], 00:04:30.361 "product_name": "Malloc disk", 00:04:30.361 "block_size": 512, 00:04:30.361 "num_blocks": 16384, 00:04:30.361 "uuid": "940f0ab5-3c5e-473b-bb4a-c01f93b9a3c9", 00:04:30.361 "assigned_rate_limits": { 00:04:30.361 "rw_ios_per_sec": 0, 00:04:30.361 "rw_mbytes_per_sec": 0, 00:04:30.361 "r_mbytes_per_sec": 0, 00:04:30.361 "w_mbytes_per_sec": 0 00:04:30.361 }, 00:04:30.361 "claimed": false, 00:04:30.361 "zoned": false, 00:04:30.361 "supported_io_types": { 00:04:30.361 "read": true, 00:04:30.361 "write": true, 00:04:30.361 "unmap": true, 00:04:30.361 "flush": true, 00:04:30.361 "reset": true, 00:04:30.361 "nvme_admin": false, 00:04:30.361 "nvme_io": false, 00:04:30.361 "nvme_io_md": false, 00:04:30.361 "write_zeroes": true, 00:04:30.361 "zcopy": true, 00:04:30.361 "get_zone_info": false, 00:04:30.361 
"zone_management": false, 00:04:30.361 "zone_append": false, 00:04:30.361 "compare": false, 00:04:30.361 "compare_and_write": false, 00:04:30.361 "abort": true, 00:04:30.361 "seek_hole": false, 00:04:30.361 "seek_data": false, 00:04:30.361 "copy": true, 00:04:30.361 "nvme_iov_md": false 00:04:30.361 }, 00:04:30.361 "memory_domains": [ 00:04:30.361 { 00:04:30.361 "dma_device_id": "system", 00:04:30.361 "dma_device_type": 1 00:04:30.361 }, 00:04:30.361 { 00:04:30.361 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:30.361 "dma_device_type": 2 00:04:30.361 } 00:04:30.361 ], 00:04:30.361 "driver_specific": {} 00:04:30.361 } 00:04:30.361 ]' 00:04:30.361 11:32:56 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:30.619 11:32:56 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:30.619 11:32:56 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:30.619 11:32:56 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.619 11:32:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:30.619 [2024-11-18 11:32:56.279500] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:30.619 [2024-11-18 11:32:56.279584] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:30.619 [2024-11-18 11:32:56.279629] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000022880 00:04:30.619 [2024-11-18 11:32:56.279652] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:30.619 [2024-11-18 11:32:56.282366] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:30.619 [2024-11-18 11:32:56.282403] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:30.619 Passthru0 00:04:30.619 11:32:56 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.619 11:32:56 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:04:30.619 11:32:56 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.619 11:32:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:30.619 11:32:56 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.619 11:32:56 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:30.619 { 00:04:30.619 "name": "Malloc0", 00:04:30.619 "aliases": [ 00:04:30.619 "940f0ab5-3c5e-473b-bb4a-c01f93b9a3c9" 00:04:30.619 ], 00:04:30.619 "product_name": "Malloc disk", 00:04:30.619 "block_size": 512, 00:04:30.619 "num_blocks": 16384, 00:04:30.619 "uuid": "940f0ab5-3c5e-473b-bb4a-c01f93b9a3c9", 00:04:30.619 "assigned_rate_limits": { 00:04:30.619 "rw_ios_per_sec": 0, 00:04:30.619 "rw_mbytes_per_sec": 0, 00:04:30.619 "r_mbytes_per_sec": 0, 00:04:30.619 "w_mbytes_per_sec": 0 00:04:30.619 }, 00:04:30.619 "claimed": true, 00:04:30.619 "claim_type": "exclusive_write", 00:04:30.619 "zoned": false, 00:04:30.619 "supported_io_types": { 00:04:30.619 "read": true, 00:04:30.619 "write": true, 00:04:30.619 "unmap": true, 00:04:30.619 "flush": true, 00:04:30.619 "reset": true, 00:04:30.619 "nvme_admin": false, 00:04:30.619 "nvme_io": false, 00:04:30.619 "nvme_io_md": false, 00:04:30.619 "write_zeroes": true, 00:04:30.619 "zcopy": true, 00:04:30.619 "get_zone_info": false, 00:04:30.619 "zone_management": false, 00:04:30.619 "zone_append": false, 00:04:30.619 "compare": false, 00:04:30.619 "compare_and_write": false, 00:04:30.619 "abort": true, 00:04:30.619 "seek_hole": false, 00:04:30.620 "seek_data": false, 00:04:30.620 "copy": true, 00:04:30.620 "nvme_iov_md": false 00:04:30.620 }, 00:04:30.620 "memory_domains": [ 00:04:30.620 { 00:04:30.620 "dma_device_id": "system", 00:04:30.620 "dma_device_type": 1 00:04:30.620 }, 00:04:30.620 { 00:04:30.620 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:30.620 "dma_device_type": 2 00:04:30.620 } 00:04:30.620 ], 00:04:30.620 "driver_specific": {} 00:04:30.620 }, 00:04:30.620 { 
00:04:30.620 "name": "Passthru0", 00:04:30.620 "aliases": [ 00:04:30.620 "4020fb0d-d5a4-553b-bbb1-e4ccee0017f0" 00:04:30.620 ], 00:04:30.620 "product_name": "passthru", 00:04:30.620 "block_size": 512, 00:04:30.620 "num_blocks": 16384, 00:04:30.620 "uuid": "4020fb0d-d5a4-553b-bbb1-e4ccee0017f0", 00:04:30.620 "assigned_rate_limits": { 00:04:30.620 "rw_ios_per_sec": 0, 00:04:30.620 "rw_mbytes_per_sec": 0, 00:04:30.620 "r_mbytes_per_sec": 0, 00:04:30.620 "w_mbytes_per_sec": 0 00:04:30.620 }, 00:04:30.620 "claimed": false, 00:04:30.620 "zoned": false, 00:04:30.620 "supported_io_types": { 00:04:30.620 "read": true, 00:04:30.620 "write": true, 00:04:30.620 "unmap": true, 00:04:30.620 "flush": true, 00:04:30.620 "reset": true, 00:04:30.620 "nvme_admin": false, 00:04:30.620 "nvme_io": false, 00:04:30.620 "nvme_io_md": false, 00:04:30.620 "write_zeroes": true, 00:04:30.620 "zcopy": true, 00:04:30.620 "get_zone_info": false, 00:04:30.620 "zone_management": false, 00:04:30.620 "zone_append": false, 00:04:30.620 "compare": false, 00:04:30.620 "compare_and_write": false, 00:04:30.620 "abort": true, 00:04:30.620 "seek_hole": false, 00:04:30.620 "seek_data": false, 00:04:30.620 "copy": true, 00:04:30.620 "nvme_iov_md": false 00:04:30.620 }, 00:04:30.620 "memory_domains": [ 00:04:30.620 { 00:04:30.620 "dma_device_id": "system", 00:04:30.620 "dma_device_type": 1 00:04:30.620 }, 00:04:30.620 { 00:04:30.620 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:30.620 "dma_device_type": 2 00:04:30.620 } 00:04:30.620 ], 00:04:30.620 "driver_specific": { 00:04:30.620 "passthru": { 00:04:30.620 "name": "Passthru0", 00:04:30.620 "base_bdev_name": "Malloc0" 00:04:30.620 } 00:04:30.620 } 00:04:30.620 } 00:04:30.620 ]' 00:04:30.620 11:32:56 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:30.620 11:32:56 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:30.620 11:32:56 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:30.620 11:32:56 
rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.620 11:32:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:30.620 11:32:56 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.620 11:32:56 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:30.620 11:32:56 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.620 11:32:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:30.620 11:32:56 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.620 11:32:56 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:30.620 11:32:56 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.620 11:32:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:30.620 11:32:56 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.620 11:32:56 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:30.620 11:32:56 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:30.620 11:32:56 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:30.620 00:04:30.620 real 0m0.259s 00:04:30.620 user 0m0.152s 00:04:30.620 sys 0m0.021s 00:04:30.620 11:32:56 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:30.620 11:32:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:30.620 ************************************ 00:04:30.620 END TEST rpc_integrity 00:04:30.620 ************************************ 00:04:30.620 11:32:56 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:30.620 11:32:56 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:30.620 11:32:56 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:30.620 11:32:56 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.620 ************************************ 00:04:30.620 START TEST rpc_plugins 
00:04:30.620 ************************************ 00:04:30.620 11:32:56 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:30.620 11:32:56 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:30.620 11:32:56 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.620 11:32:56 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:30.620 11:32:56 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.620 11:32:56 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:30.620 11:32:56 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:30.620 11:32:56 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.620 11:32:56 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:30.620 11:32:56 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.620 11:32:56 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:30.620 { 00:04:30.620 "name": "Malloc1", 00:04:30.620 "aliases": [ 00:04:30.620 "a5fa4727-68b0-4fff-a2e3-13e7e029c203" 00:04:30.620 ], 00:04:30.620 "product_name": "Malloc disk", 00:04:30.620 "block_size": 4096, 00:04:30.620 "num_blocks": 256, 00:04:30.620 "uuid": "a5fa4727-68b0-4fff-a2e3-13e7e029c203", 00:04:30.620 "assigned_rate_limits": { 00:04:30.620 "rw_ios_per_sec": 0, 00:04:30.620 "rw_mbytes_per_sec": 0, 00:04:30.620 "r_mbytes_per_sec": 0, 00:04:30.620 "w_mbytes_per_sec": 0 00:04:30.620 }, 00:04:30.620 "claimed": false, 00:04:30.620 "zoned": false, 00:04:30.620 "supported_io_types": { 00:04:30.620 "read": true, 00:04:30.620 "write": true, 00:04:30.620 "unmap": true, 00:04:30.620 "flush": true, 00:04:30.620 "reset": true, 00:04:30.620 "nvme_admin": false, 00:04:30.620 "nvme_io": false, 00:04:30.620 "nvme_io_md": false, 00:04:30.620 "write_zeroes": true, 00:04:30.620 "zcopy": true, 00:04:30.620 "get_zone_info": false, 00:04:30.620 "zone_management": false, 00:04:30.620 
"zone_append": false, 00:04:30.620 "compare": false, 00:04:30.620 "compare_and_write": false, 00:04:30.620 "abort": true, 00:04:30.620 "seek_hole": false, 00:04:30.620 "seek_data": false, 00:04:30.620 "copy": true, 00:04:30.620 "nvme_iov_md": false 00:04:30.620 }, 00:04:30.620 "memory_domains": [ 00:04:30.620 { 00:04:30.620 "dma_device_id": "system", 00:04:30.620 "dma_device_type": 1 00:04:30.620 }, 00:04:30.620 { 00:04:30.620 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:30.620 "dma_device_type": 2 00:04:30.620 } 00:04:30.620 ], 00:04:30.620 "driver_specific": {} 00:04:30.620 } 00:04:30.620 ]' 00:04:30.620 11:32:56 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:30.878 11:32:56 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:30.878 11:32:56 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:30.878 11:32:56 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.878 11:32:56 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:30.878 11:32:56 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.878 11:32:56 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:30.878 11:32:56 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.878 11:32:56 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:30.878 11:32:56 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.878 11:32:56 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:30.878 11:32:56 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:30.878 11:32:56 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:30.878 00:04:30.878 real 0m0.119s 00:04:30.878 user 0m0.075s 00:04:30.878 sys 0m0.010s 00:04:30.878 11:32:56 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:30.878 11:32:56 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:30.878 ************************************ 
00:04:30.878 END TEST rpc_plugins 00:04:30.878 ************************************ 00:04:30.878 11:32:56 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:30.878 11:32:56 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:30.878 11:32:56 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:30.878 11:32:56 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.878 ************************************ 00:04:30.878 START TEST rpc_trace_cmd_test 00:04:30.878 ************************************ 00:04:30.878 11:32:56 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:30.878 11:32:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:30.878 11:32:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:30.878 11:32:56 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.878 11:32:56 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:30.878 11:32:56 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.878 11:32:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:30.878 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2815369", 00:04:30.878 "tpoint_group_mask": "0x8", 00:04:30.878 "iscsi_conn": { 00:04:30.878 "mask": "0x2", 00:04:30.878 "tpoint_mask": "0x0" 00:04:30.878 }, 00:04:30.878 "scsi": { 00:04:30.878 "mask": "0x4", 00:04:30.878 "tpoint_mask": "0x0" 00:04:30.878 }, 00:04:30.878 "bdev": { 00:04:30.878 "mask": "0x8", 00:04:30.878 "tpoint_mask": "0xffffffffffffffff" 00:04:30.878 }, 00:04:30.878 "nvmf_rdma": { 00:04:30.878 "mask": "0x10", 00:04:30.878 "tpoint_mask": "0x0" 00:04:30.878 }, 00:04:30.878 "nvmf_tcp": { 00:04:30.878 "mask": "0x20", 00:04:30.878 "tpoint_mask": "0x0" 00:04:30.878 }, 00:04:30.878 "ftl": { 00:04:30.878 "mask": "0x40", 00:04:30.878 "tpoint_mask": "0x0" 00:04:30.878 }, 00:04:30.878 "blobfs": { 00:04:30.878 "mask": "0x80", 00:04:30.878 
"tpoint_mask": "0x0" 00:04:30.878 }, 00:04:30.878 "dsa": { 00:04:30.878 "mask": "0x200", 00:04:30.878 "tpoint_mask": "0x0" 00:04:30.878 }, 00:04:30.878 "thread": { 00:04:30.878 "mask": "0x400", 00:04:30.878 "tpoint_mask": "0x0" 00:04:30.878 }, 00:04:30.878 "nvme_pcie": { 00:04:30.878 "mask": "0x800", 00:04:30.878 "tpoint_mask": "0x0" 00:04:30.878 }, 00:04:30.878 "iaa": { 00:04:30.878 "mask": "0x1000", 00:04:30.878 "tpoint_mask": "0x0" 00:04:30.878 }, 00:04:30.878 "nvme_tcp": { 00:04:30.878 "mask": "0x2000", 00:04:30.878 "tpoint_mask": "0x0" 00:04:30.878 }, 00:04:30.878 "bdev_nvme": { 00:04:30.878 "mask": "0x4000", 00:04:30.878 "tpoint_mask": "0x0" 00:04:30.878 }, 00:04:30.878 "sock": { 00:04:30.878 "mask": "0x8000", 00:04:30.878 "tpoint_mask": "0x0" 00:04:30.878 }, 00:04:30.878 "blob": { 00:04:30.878 "mask": "0x10000", 00:04:30.878 "tpoint_mask": "0x0" 00:04:30.878 }, 00:04:30.878 "bdev_raid": { 00:04:30.878 "mask": "0x20000", 00:04:30.878 "tpoint_mask": "0x0" 00:04:30.878 }, 00:04:30.878 "scheduler": { 00:04:30.878 "mask": "0x40000", 00:04:30.878 "tpoint_mask": "0x0" 00:04:30.878 } 00:04:30.878 }' 00:04:30.878 11:32:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:30.878 11:32:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:30.878 11:32:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:30.878 11:32:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:30.878 11:32:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:30.878 11:32:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:30.878 11:32:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:31.137 11:32:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:31.137 11:32:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:31.137 11:32:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 
0x0 ']' 00:04:31.137 00:04:31.137 real 0m0.209s 00:04:31.137 user 0m0.187s 00:04:31.137 sys 0m0.015s 00:04:31.137 11:32:56 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:31.137 11:32:56 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:31.137 ************************************ 00:04:31.137 END TEST rpc_trace_cmd_test 00:04:31.137 ************************************ 00:04:31.137 11:32:56 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:31.137 11:32:56 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:31.137 11:32:56 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:31.137 11:32:56 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:31.137 11:32:56 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:31.137 11:32:56 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:31.137 ************************************ 00:04:31.137 START TEST rpc_daemon_integrity 00:04:31.137 ************************************ 00:04:31.137 11:32:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:31.137 11:32:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:31.137 11:32:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:31.137 11:32:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:31.137 11:32:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:31.137 11:32:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:31.137 11:32:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:31.137 11:32:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:31.137 11:32:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:31.137 11:32:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:31.137 11:32:56 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:04:31.137 11:32:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:31.137 11:32:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:31.137 11:32:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:31.137 11:32:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:31.137 11:32:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:31.137 11:32:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:31.137 11:32:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:31.137 { 00:04:31.137 "name": "Malloc2", 00:04:31.137 "aliases": [ 00:04:31.137 "5a961cf7-194e-407b-9a49-d20d638d954a" 00:04:31.137 ], 00:04:31.137 "product_name": "Malloc disk", 00:04:31.137 "block_size": 512, 00:04:31.137 "num_blocks": 16384, 00:04:31.137 "uuid": "5a961cf7-194e-407b-9a49-d20d638d954a", 00:04:31.137 "assigned_rate_limits": { 00:04:31.137 "rw_ios_per_sec": 0, 00:04:31.137 "rw_mbytes_per_sec": 0, 00:04:31.137 "r_mbytes_per_sec": 0, 00:04:31.137 "w_mbytes_per_sec": 0 00:04:31.137 }, 00:04:31.137 "claimed": false, 00:04:31.137 "zoned": false, 00:04:31.137 "supported_io_types": { 00:04:31.137 "read": true, 00:04:31.137 "write": true, 00:04:31.137 "unmap": true, 00:04:31.137 "flush": true, 00:04:31.137 "reset": true, 00:04:31.137 "nvme_admin": false, 00:04:31.137 "nvme_io": false, 00:04:31.137 "nvme_io_md": false, 00:04:31.137 "write_zeroes": true, 00:04:31.137 "zcopy": true, 00:04:31.137 "get_zone_info": false, 00:04:31.137 "zone_management": false, 00:04:31.137 "zone_append": false, 00:04:31.137 "compare": false, 00:04:31.137 "compare_and_write": false, 00:04:31.137 "abort": true, 00:04:31.137 "seek_hole": false, 00:04:31.137 "seek_data": false, 00:04:31.137 "copy": true, 00:04:31.137 "nvme_iov_md": false 00:04:31.137 }, 00:04:31.137 "memory_domains": [ 00:04:31.137 { 
00:04:31.137 "dma_device_id": "system", 00:04:31.137 "dma_device_type": 1 00:04:31.137 }, 00:04:31.137 { 00:04:31.137 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:31.137 "dma_device_type": 2 00:04:31.137 } 00:04:31.137 ], 00:04:31.137 "driver_specific": {} 00:04:31.137 } 00:04:31.137 ]' 00:04:31.137 11:32:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:31.137 11:32:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:31.137 11:32:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:31.137 11:32:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:31.137 11:32:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:31.137 [2024-11-18 11:32:57.009433] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:31.137 [2024-11-18 11:32:57.009505] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:31.137 [2024-11-18 11:32:57.009568] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000023a80 00:04:31.137 [2024-11-18 11:32:57.009591] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:31.137 [2024-11-18 11:32:57.012352] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:31.137 [2024-11-18 11:32:57.012388] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:31.137 Passthru0 00:04:31.137 11:32:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:31.137 11:32:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:31.137 11:32:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:31.137 11:32:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:31.395 11:32:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:04:31.395 11:32:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:31.395 { 00:04:31.395 "name": "Malloc2", 00:04:31.395 "aliases": [ 00:04:31.395 "5a961cf7-194e-407b-9a49-d20d638d954a" 00:04:31.395 ], 00:04:31.395 "product_name": "Malloc disk", 00:04:31.395 "block_size": 512, 00:04:31.395 "num_blocks": 16384, 00:04:31.395 "uuid": "5a961cf7-194e-407b-9a49-d20d638d954a", 00:04:31.395 "assigned_rate_limits": { 00:04:31.395 "rw_ios_per_sec": 0, 00:04:31.395 "rw_mbytes_per_sec": 0, 00:04:31.395 "r_mbytes_per_sec": 0, 00:04:31.395 "w_mbytes_per_sec": 0 00:04:31.395 }, 00:04:31.395 "claimed": true, 00:04:31.395 "claim_type": "exclusive_write", 00:04:31.395 "zoned": false, 00:04:31.395 "supported_io_types": { 00:04:31.395 "read": true, 00:04:31.395 "write": true, 00:04:31.395 "unmap": true, 00:04:31.395 "flush": true, 00:04:31.395 "reset": true, 00:04:31.395 "nvme_admin": false, 00:04:31.395 "nvme_io": false, 00:04:31.395 "nvme_io_md": false, 00:04:31.395 "write_zeroes": true, 00:04:31.395 "zcopy": true, 00:04:31.395 "get_zone_info": false, 00:04:31.395 "zone_management": false, 00:04:31.395 "zone_append": false, 00:04:31.395 "compare": false, 00:04:31.395 "compare_and_write": false, 00:04:31.395 "abort": true, 00:04:31.395 "seek_hole": false, 00:04:31.395 "seek_data": false, 00:04:31.395 "copy": true, 00:04:31.395 "nvme_iov_md": false 00:04:31.395 }, 00:04:31.395 "memory_domains": [ 00:04:31.395 { 00:04:31.395 "dma_device_id": "system", 00:04:31.395 "dma_device_type": 1 00:04:31.395 }, 00:04:31.396 { 00:04:31.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:31.396 "dma_device_type": 2 00:04:31.396 } 00:04:31.396 ], 00:04:31.396 "driver_specific": {} 00:04:31.396 }, 00:04:31.396 { 00:04:31.396 "name": "Passthru0", 00:04:31.396 "aliases": [ 00:04:31.396 "eee479ae-b1b0-5f99-847e-12e503658924" 00:04:31.396 ], 00:04:31.396 "product_name": "passthru", 00:04:31.396 "block_size": 512, 00:04:31.396 "num_blocks": 16384, 00:04:31.396 "uuid": 
"eee479ae-b1b0-5f99-847e-12e503658924", 00:04:31.396 "assigned_rate_limits": { 00:04:31.396 "rw_ios_per_sec": 0, 00:04:31.396 "rw_mbytes_per_sec": 0, 00:04:31.396 "r_mbytes_per_sec": 0, 00:04:31.396 "w_mbytes_per_sec": 0 00:04:31.396 }, 00:04:31.396 "claimed": false, 00:04:31.396 "zoned": false, 00:04:31.396 "supported_io_types": { 00:04:31.396 "read": true, 00:04:31.396 "write": true, 00:04:31.396 "unmap": true, 00:04:31.396 "flush": true, 00:04:31.396 "reset": true, 00:04:31.396 "nvme_admin": false, 00:04:31.396 "nvme_io": false, 00:04:31.396 "nvme_io_md": false, 00:04:31.396 "write_zeroes": true, 00:04:31.396 "zcopy": true, 00:04:31.396 "get_zone_info": false, 00:04:31.396 "zone_management": false, 00:04:31.396 "zone_append": false, 00:04:31.396 "compare": false, 00:04:31.396 "compare_and_write": false, 00:04:31.396 "abort": true, 00:04:31.396 "seek_hole": false, 00:04:31.396 "seek_data": false, 00:04:31.396 "copy": true, 00:04:31.396 "nvme_iov_md": false 00:04:31.396 }, 00:04:31.396 "memory_domains": [ 00:04:31.396 { 00:04:31.396 "dma_device_id": "system", 00:04:31.396 "dma_device_type": 1 00:04:31.396 }, 00:04:31.396 { 00:04:31.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:31.396 "dma_device_type": 2 00:04:31.396 } 00:04:31.396 ], 00:04:31.396 "driver_specific": { 00:04:31.396 "passthru": { 00:04:31.396 "name": "Passthru0", 00:04:31.396 "base_bdev_name": "Malloc2" 00:04:31.396 } 00:04:31.396 } 00:04:31.396 } 00:04:31.396 ]' 00:04:31.396 11:32:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:31.396 11:32:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:31.396 11:32:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:31.396 11:32:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:31.396 11:32:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:31.396 11:32:57 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:31.396 11:32:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:31.396 11:32:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:31.396 11:32:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:31.396 11:32:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:31.396 11:32:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:31.396 11:32:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:31.396 11:32:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:31.396 11:32:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:31.396 11:32:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:31.396 11:32:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:31.396 11:32:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:31.396 00:04:31.396 real 0m0.258s 00:04:31.396 user 0m0.142s 00:04:31.396 sys 0m0.032s 00:04:31.396 11:32:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:31.396 11:32:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:31.396 ************************************ 00:04:31.396 END TEST rpc_daemon_integrity 00:04:31.396 ************************************ 00:04:31.396 11:32:57 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:31.396 11:32:57 rpc -- rpc/rpc.sh@84 -- # killprocess 2815369 00:04:31.396 11:32:57 rpc -- common/autotest_common.sh@954 -- # '[' -z 2815369 ']' 00:04:31.396 11:32:57 rpc -- common/autotest_common.sh@958 -- # kill -0 2815369 00:04:31.396 11:32:57 rpc -- common/autotest_common.sh@959 -- # uname 00:04:31.396 11:32:57 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:31.396 11:32:57 rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2815369 00:04:31.396 11:32:57 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:31.396 11:32:57 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:31.396 11:32:57 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2815369' 00:04:31.396 killing process with pid 2815369 00:04:31.396 11:32:57 rpc -- common/autotest_common.sh@973 -- # kill 2815369 00:04:31.396 11:32:57 rpc -- common/autotest_common.sh@978 -- # wait 2815369 00:04:33.925 00:04:33.925 real 0m4.965s 00:04:33.925 user 0m5.581s 00:04:33.925 sys 0m0.814s 00:04:33.925 11:32:59 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:33.925 11:32:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:33.925 ************************************ 00:04:33.925 END TEST rpc 00:04:33.925 ************************************ 00:04:33.925 11:32:59 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:33.925 11:32:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:33.925 11:32:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:33.925 11:32:59 -- common/autotest_common.sh@10 -- # set +x 00:04:33.925 ************************************ 00:04:33.925 START TEST skip_rpc 00:04:33.925 ************************************ 00:04:33.925 11:32:59 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:33.925 * Looking for test storage... 
00:04:33.925 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:33.925 11:32:59 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:33.925 11:32:59 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:33.925 11:32:59 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:34.183 11:32:59 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:34.183 11:32:59 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:34.183 11:32:59 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:34.183 11:32:59 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:34.183 11:32:59 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:34.183 11:32:59 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:34.183 11:32:59 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:34.183 11:32:59 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:34.183 11:32:59 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:34.183 11:32:59 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:34.183 11:32:59 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:34.183 11:32:59 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:34.183 11:32:59 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:34.183 11:32:59 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:34.183 11:32:59 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:34.183 11:32:59 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:34.183 11:32:59 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:34.183 11:32:59 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:34.183 11:32:59 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:34.183 11:32:59 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:34.183 11:32:59 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:34.183 11:32:59 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:34.183 11:32:59 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:34.183 11:32:59 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:34.184 11:32:59 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:34.184 11:32:59 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:34.184 11:32:59 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:34.184 11:32:59 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:34.184 11:32:59 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:34.184 11:32:59 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:34.184 11:32:59 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:34.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.184 --rc genhtml_branch_coverage=1 00:04:34.184 --rc genhtml_function_coverage=1 00:04:34.184 --rc genhtml_legend=1 00:04:34.184 --rc geninfo_all_blocks=1 00:04:34.184 --rc geninfo_unexecuted_blocks=1 00:04:34.184 00:04:34.184 ' 00:04:34.184 11:32:59 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:34.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.184 --rc genhtml_branch_coverage=1 00:04:34.184 --rc genhtml_function_coverage=1 00:04:34.184 --rc genhtml_legend=1 00:04:34.184 --rc geninfo_all_blocks=1 00:04:34.184 --rc geninfo_unexecuted_blocks=1 00:04:34.184 00:04:34.184 ' 00:04:34.184 11:32:59 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:04:34.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.184 --rc genhtml_branch_coverage=1 00:04:34.184 --rc genhtml_function_coverage=1 00:04:34.184 --rc genhtml_legend=1 00:04:34.184 --rc geninfo_all_blocks=1 00:04:34.184 --rc geninfo_unexecuted_blocks=1 00:04:34.184 00:04:34.184 ' 00:04:34.184 11:32:59 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:34.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.184 --rc genhtml_branch_coverage=1 00:04:34.184 --rc genhtml_function_coverage=1 00:04:34.184 --rc genhtml_legend=1 00:04:34.184 --rc geninfo_all_blocks=1 00:04:34.184 --rc geninfo_unexecuted_blocks=1 00:04:34.184 00:04:34.184 ' 00:04:34.184 11:32:59 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:34.184 11:32:59 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:34.184 11:32:59 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:34.184 11:32:59 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:34.184 11:32:59 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:34.184 11:32:59 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.184 ************************************ 00:04:34.184 START TEST skip_rpc 00:04:34.184 ************************************ 00:04:34.184 11:32:59 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:34.184 11:32:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2816097 00:04:34.184 11:32:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:34.184 11:32:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:34.184 11:32:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 
00:04:34.184 [2024-11-18 11:32:59.953286] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:04:34.184 [2024-11-18 11:32:59.953422] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2816097 ] 00:04:34.442 [2024-11-18 11:33:00.101174] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:34.442 [2024-11-18 11:33:00.244066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.772 11:33:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:39.772 11:33:04 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:39.772 11:33:04 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:39.772 11:33:04 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:39.772 11:33:04 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:39.772 11:33:04 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:39.772 11:33:04 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:39.772 11:33:04 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:39.772 11:33:04 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.772 11:33:04 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.772 11:33:04 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:39.772 11:33:04 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:39.772 11:33:04 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:39.772 11:33:04 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:39.772 11:33:04 
skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:39.772 11:33:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:39.772 11:33:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2816097 00:04:39.772 11:33:04 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 2816097 ']' 00:04:39.772 11:33:04 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 2816097 00:04:39.772 11:33:04 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:39.772 11:33:04 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:39.772 11:33:04 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2816097 00:04:39.772 11:33:04 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:39.772 11:33:04 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:39.772 11:33:04 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2816097' 00:04:39.772 killing process with pid 2816097 00:04:39.772 11:33:04 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 2816097 00:04:39.772 11:33:04 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 2816097 00:04:41.672 00:04:41.672 real 0m7.477s 00:04:41.672 user 0m6.975s 00:04:41.672 sys 0m0.498s 00:04:41.672 11:33:07 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:41.672 11:33:07 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.672 ************************************ 00:04:41.672 END TEST skip_rpc 00:04:41.672 ************************************ 00:04:41.672 11:33:07 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:41.672 11:33:07 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:41.672 11:33:07 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:41.672 11:33:07 
skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.672 ************************************ 00:04:41.672 START TEST skip_rpc_with_json 00:04:41.672 ************************************ 00:04:41.672 11:33:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:41.672 11:33:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:41.672 11:33:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2817332 00:04:41.672 11:33:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:41.672 11:33:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:41.672 11:33:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2817332 00:04:41.672 11:33:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 2817332 ']' 00:04:41.672 11:33:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:41.672 11:33:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:41.672 11:33:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:41.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:41.672 11:33:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:41.672 11:33:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:41.672 [2024-11-18 11:33:07.485161] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:04:41.672 [2024-11-18 11:33:07.485296] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2817332 ] 00:04:41.930 [2024-11-18 11:33:07.627468] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:41.930 [2024-11-18 11:33:07.769220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.864 11:33:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:42.864 11:33:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:42.864 11:33:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:42.864 11:33:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:42.864 11:33:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:42.864 [2024-11-18 11:33:08.732149] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:42.864 request: 00:04:42.864 { 00:04:42.864 "trtype": "tcp", 00:04:42.864 "method": "nvmf_get_transports", 00:04:42.864 "req_id": 1 00:04:42.864 } 00:04:42.864 Got JSON-RPC error response 00:04:42.864 response: 00:04:42.864 { 00:04:42.864 "code": -19, 00:04:42.864 "message": "No such device" 00:04:42.864 } 00:04:42.864 11:33:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:42.864 11:33:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:42.864 11:33:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:42.864 11:33:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:42.864 [2024-11-18 11:33:08.740285] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:42.864 11:33:08 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:42.864 11:33:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:42.864 11:33:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:42.864 11:33:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:43.123 11:33:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.123 11:33:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:43.123 { 00:04:43.123 "subsystems": [ 00:04:43.123 { 00:04:43.123 "subsystem": "fsdev", 00:04:43.123 "config": [ 00:04:43.123 { 00:04:43.123 "method": "fsdev_set_opts", 00:04:43.123 "params": { 00:04:43.123 "fsdev_io_pool_size": 65535, 00:04:43.123 "fsdev_io_cache_size": 256 00:04:43.123 } 00:04:43.123 } 00:04:43.123 ] 00:04:43.123 }, 00:04:43.123 { 00:04:43.123 "subsystem": "keyring", 00:04:43.123 "config": [] 00:04:43.123 }, 00:04:43.123 { 00:04:43.123 "subsystem": "iobuf", 00:04:43.123 "config": [ 00:04:43.123 { 00:04:43.123 "method": "iobuf_set_options", 00:04:43.123 "params": { 00:04:43.123 "small_pool_count": 8192, 00:04:43.123 "large_pool_count": 1024, 00:04:43.123 "small_bufsize": 8192, 00:04:43.123 "large_bufsize": 135168, 00:04:43.123 "enable_numa": false 00:04:43.123 } 00:04:43.123 } 00:04:43.123 ] 00:04:43.123 }, 00:04:43.123 { 00:04:43.123 "subsystem": "sock", 00:04:43.123 "config": [ 00:04:43.123 { 00:04:43.123 "method": "sock_set_default_impl", 00:04:43.123 "params": { 00:04:43.123 "impl_name": "posix" 00:04:43.123 } 00:04:43.123 }, 00:04:43.123 { 00:04:43.123 "method": "sock_impl_set_options", 00:04:43.123 "params": { 00:04:43.123 "impl_name": "ssl", 00:04:43.123 "recv_buf_size": 4096, 00:04:43.123 "send_buf_size": 4096, 00:04:43.123 "enable_recv_pipe": true, 00:04:43.123 "enable_quickack": false, 00:04:43.123 
"enable_placement_id": 0, 00:04:43.123 "enable_zerocopy_send_server": true, 00:04:43.123 "enable_zerocopy_send_client": false, 00:04:43.123 "zerocopy_threshold": 0, 00:04:43.123 "tls_version": 0, 00:04:43.123 "enable_ktls": false 00:04:43.123 } 00:04:43.123 }, 00:04:43.123 { 00:04:43.123 "method": "sock_impl_set_options", 00:04:43.123 "params": { 00:04:43.123 "impl_name": "posix", 00:04:43.123 "recv_buf_size": 2097152, 00:04:43.123 "send_buf_size": 2097152, 00:04:43.123 "enable_recv_pipe": true, 00:04:43.123 "enable_quickack": false, 00:04:43.123 "enable_placement_id": 0, 00:04:43.123 "enable_zerocopy_send_server": true, 00:04:43.123 "enable_zerocopy_send_client": false, 00:04:43.123 "zerocopy_threshold": 0, 00:04:43.123 "tls_version": 0, 00:04:43.123 "enable_ktls": false 00:04:43.123 } 00:04:43.123 } 00:04:43.123 ] 00:04:43.123 }, 00:04:43.123 { 00:04:43.123 "subsystem": "vmd", 00:04:43.123 "config": [] 00:04:43.123 }, 00:04:43.123 { 00:04:43.123 "subsystem": "accel", 00:04:43.123 "config": [ 00:04:43.123 { 00:04:43.123 "method": "accel_set_options", 00:04:43.123 "params": { 00:04:43.123 "small_cache_size": 128, 00:04:43.123 "large_cache_size": 16, 00:04:43.123 "task_count": 2048, 00:04:43.123 "sequence_count": 2048, 00:04:43.123 "buf_count": 2048 00:04:43.123 } 00:04:43.123 } 00:04:43.123 ] 00:04:43.123 }, 00:04:43.123 { 00:04:43.123 "subsystem": "bdev", 00:04:43.123 "config": [ 00:04:43.123 { 00:04:43.123 "method": "bdev_set_options", 00:04:43.123 "params": { 00:04:43.123 "bdev_io_pool_size": 65535, 00:04:43.123 "bdev_io_cache_size": 256, 00:04:43.123 "bdev_auto_examine": true, 00:04:43.123 "iobuf_small_cache_size": 128, 00:04:43.123 "iobuf_large_cache_size": 16 00:04:43.123 } 00:04:43.123 }, 00:04:43.123 { 00:04:43.123 "method": "bdev_raid_set_options", 00:04:43.123 "params": { 00:04:43.123 "process_window_size_kb": 1024, 00:04:43.123 "process_max_bandwidth_mb_sec": 0 00:04:43.123 } 00:04:43.123 }, 00:04:43.123 { 00:04:43.123 "method": "bdev_iscsi_set_options", 
00:04:43.123 "params": { 00:04:43.123 "timeout_sec": 30 00:04:43.123 } 00:04:43.123 }, 00:04:43.123 { 00:04:43.123 "method": "bdev_nvme_set_options", 00:04:43.123 "params": { 00:04:43.123 "action_on_timeout": "none", 00:04:43.123 "timeout_us": 0, 00:04:43.123 "timeout_admin_us": 0, 00:04:43.123 "keep_alive_timeout_ms": 10000, 00:04:43.123 "arbitration_burst": 0, 00:04:43.123 "low_priority_weight": 0, 00:04:43.123 "medium_priority_weight": 0, 00:04:43.123 "high_priority_weight": 0, 00:04:43.123 "nvme_adminq_poll_period_us": 10000, 00:04:43.123 "nvme_ioq_poll_period_us": 0, 00:04:43.123 "io_queue_requests": 0, 00:04:43.123 "delay_cmd_submit": true, 00:04:43.123 "transport_retry_count": 4, 00:04:43.123 "bdev_retry_count": 3, 00:04:43.123 "transport_ack_timeout": 0, 00:04:43.123 "ctrlr_loss_timeout_sec": 0, 00:04:43.123 "reconnect_delay_sec": 0, 00:04:43.123 "fast_io_fail_timeout_sec": 0, 00:04:43.123 "disable_auto_failback": false, 00:04:43.123 "generate_uuids": false, 00:04:43.123 "transport_tos": 0, 00:04:43.123 "nvme_error_stat": false, 00:04:43.124 "rdma_srq_size": 0, 00:04:43.124 "io_path_stat": false, 00:04:43.124 "allow_accel_sequence": false, 00:04:43.124 "rdma_max_cq_size": 0, 00:04:43.124 "rdma_cm_event_timeout_ms": 0, 00:04:43.124 "dhchap_digests": [ 00:04:43.124 "sha256", 00:04:43.124 "sha384", 00:04:43.124 "sha512" 00:04:43.124 ], 00:04:43.124 "dhchap_dhgroups": [ 00:04:43.124 "null", 00:04:43.124 "ffdhe2048", 00:04:43.124 "ffdhe3072", 00:04:43.124 "ffdhe4096", 00:04:43.124 "ffdhe6144", 00:04:43.124 "ffdhe8192" 00:04:43.124 ] 00:04:43.124 } 00:04:43.124 }, 00:04:43.124 { 00:04:43.124 "method": "bdev_nvme_set_hotplug", 00:04:43.124 "params": { 00:04:43.124 "period_us": 100000, 00:04:43.124 "enable": false 00:04:43.124 } 00:04:43.124 }, 00:04:43.124 { 00:04:43.124 "method": "bdev_wait_for_examine" 00:04:43.124 } 00:04:43.124 ] 00:04:43.124 }, 00:04:43.124 { 00:04:43.124 "subsystem": "scsi", 00:04:43.124 "config": null 00:04:43.124 }, 00:04:43.124 { 
00:04:43.124 "subsystem": "scheduler", 00:04:43.124 "config": [ 00:04:43.124 { 00:04:43.124 "method": "framework_set_scheduler", 00:04:43.124 "params": { 00:04:43.124 "name": "static" 00:04:43.124 } 00:04:43.124 } 00:04:43.124 ] 00:04:43.124 }, 00:04:43.124 { 00:04:43.124 "subsystem": "vhost_scsi", 00:04:43.124 "config": [] 00:04:43.124 }, 00:04:43.124 { 00:04:43.124 "subsystem": "vhost_blk", 00:04:43.124 "config": [] 00:04:43.124 }, 00:04:43.124 { 00:04:43.124 "subsystem": "ublk", 00:04:43.124 "config": [] 00:04:43.124 }, 00:04:43.124 { 00:04:43.124 "subsystem": "nbd", 00:04:43.124 "config": [] 00:04:43.124 }, 00:04:43.124 { 00:04:43.124 "subsystem": "nvmf", 00:04:43.124 "config": [ 00:04:43.124 { 00:04:43.124 "method": "nvmf_set_config", 00:04:43.124 "params": { 00:04:43.124 "discovery_filter": "match_any", 00:04:43.124 "admin_cmd_passthru": { 00:04:43.124 "identify_ctrlr": false 00:04:43.124 }, 00:04:43.124 "dhchap_digests": [ 00:04:43.124 "sha256", 00:04:43.124 "sha384", 00:04:43.124 "sha512" 00:04:43.124 ], 00:04:43.124 "dhchap_dhgroups": [ 00:04:43.124 "null", 00:04:43.124 "ffdhe2048", 00:04:43.124 "ffdhe3072", 00:04:43.124 "ffdhe4096", 00:04:43.124 "ffdhe6144", 00:04:43.124 "ffdhe8192" 00:04:43.124 ] 00:04:43.124 } 00:04:43.124 }, 00:04:43.124 { 00:04:43.124 "method": "nvmf_set_max_subsystems", 00:04:43.124 "params": { 00:04:43.124 "max_subsystems": 1024 00:04:43.124 } 00:04:43.124 }, 00:04:43.124 { 00:04:43.124 "method": "nvmf_set_crdt", 00:04:43.124 "params": { 00:04:43.124 "crdt1": 0, 00:04:43.124 "crdt2": 0, 00:04:43.124 "crdt3": 0 00:04:43.124 } 00:04:43.124 }, 00:04:43.124 { 00:04:43.124 "method": "nvmf_create_transport", 00:04:43.124 "params": { 00:04:43.124 "trtype": "TCP", 00:04:43.124 "max_queue_depth": 128, 00:04:43.124 "max_io_qpairs_per_ctrlr": 127, 00:04:43.124 "in_capsule_data_size": 4096, 00:04:43.124 "max_io_size": 131072, 00:04:43.124 "io_unit_size": 131072, 00:04:43.124 "max_aq_depth": 128, 00:04:43.124 "num_shared_buffers": 511, 
00:04:43.124 "buf_cache_size": 4294967295, 00:04:43.124 "dif_insert_or_strip": false, 00:04:43.124 "zcopy": false, 00:04:43.124 "c2h_success": true, 00:04:43.124 "sock_priority": 0, 00:04:43.124 "abort_timeout_sec": 1, 00:04:43.124 "ack_timeout": 0, 00:04:43.124 "data_wr_pool_size": 0 00:04:43.124 } 00:04:43.124 } 00:04:43.124 ] 00:04:43.124 }, 00:04:43.124 { 00:04:43.124 "subsystem": "iscsi", 00:04:43.124 "config": [ 00:04:43.124 { 00:04:43.124 "method": "iscsi_set_options", 00:04:43.124 "params": { 00:04:43.124 "node_base": "iqn.2016-06.io.spdk", 00:04:43.124 "max_sessions": 128, 00:04:43.124 "max_connections_per_session": 2, 00:04:43.124 "max_queue_depth": 64, 00:04:43.124 "default_time2wait": 2, 00:04:43.124 "default_time2retain": 20, 00:04:43.124 "first_burst_length": 8192, 00:04:43.124 "immediate_data": true, 00:04:43.124 "allow_duplicated_isid": false, 00:04:43.124 "error_recovery_level": 0, 00:04:43.124 "nop_timeout": 60, 00:04:43.124 "nop_in_interval": 30, 00:04:43.124 "disable_chap": false, 00:04:43.124 "require_chap": false, 00:04:43.124 "mutual_chap": false, 00:04:43.124 "chap_group": 0, 00:04:43.124 "max_large_datain_per_connection": 64, 00:04:43.124 "max_r2t_per_connection": 4, 00:04:43.124 "pdu_pool_size": 36864, 00:04:43.124 "immediate_data_pool_size": 16384, 00:04:43.124 "data_out_pool_size": 2048 00:04:43.124 } 00:04:43.124 } 00:04:43.124 ] 00:04:43.124 } 00:04:43.124 ] 00:04:43.124 } 00:04:43.124 11:33:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:43.124 11:33:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2817332 00:04:43.124 11:33:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 2817332 ']' 00:04:43.124 11:33:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 2817332 00:04:43.124 11:33:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:43.124 11:33:08 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:43.124 11:33:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2817332 00:04:43.124 11:33:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:43.124 11:33:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:43.124 11:33:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2817332' 00:04:43.124 killing process with pid 2817332 00:04:43.124 11:33:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 2817332 00:04:43.124 11:33:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 2817332 00:04:45.654 11:33:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2818086 00:04:45.654 11:33:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:45.654 11:33:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:50.918 11:33:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2818086 00:04:50.918 11:33:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 2818086 ']' 00:04:50.918 11:33:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 2818086 00:04:50.918 11:33:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:50.918 11:33:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:50.918 11:33:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2818086 00:04:50.918 11:33:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:50.918 11:33:16 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:50.918 11:33:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2818086' 00:04:50.918 killing process with pid 2818086 00:04:50.918 11:33:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 2818086 00:04:50.918 11:33:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 2818086 00:04:53.448 11:33:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:53.448 11:33:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:53.448 00:04:53.448 real 0m11.478s 00:04:53.448 user 0m10.859s 00:04:53.448 sys 0m1.174s 00:04:53.448 11:33:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:53.448 11:33:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:53.448 ************************************ 00:04:53.448 END TEST skip_rpc_with_json 00:04:53.448 ************************************ 00:04:53.448 11:33:18 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:53.448 11:33:18 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:53.448 11:33:18 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:53.448 11:33:18 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.448 ************************************ 00:04:53.448 START TEST skip_rpc_with_delay 00:04:53.448 ************************************ 00:04:53.448 11:33:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:53.448 11:33:18 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 
--no-rpc-server -m 0x1 --wait-for-rpc 00:04:53.448 11:33:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:53.448 11:33:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:53.448 11:33:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:53.448 11:33:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:53.448 11:33:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:53.448 11:33:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:53.448 11:33:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:53.448 11:33:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:53.448 11:33:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:53.448 11:33:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:53.448 11:33:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:53.448 [2024-11-18 11:33:19.013394] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:53.448 11:33:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:53.448 11:33:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:53.448 11:33:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:53.448 11:33:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:53.448 00:04:53.448 real 0m0.151s 00:04:53.448 user 0m0.085s 00:04:53.448 sys 0m0.065s 00:04:53.448 11:33:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:53.448 11:33:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:53.448 ************************************ 00:04:53.448 END TEST skip_rpc_with_delay 00:04:53.448 ************************************ 00:04:53.448 11:33:19 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:53.448 11:33:19 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:53.448 11:33:19 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:53.448 11:33:19 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:53.448 11:33:19 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:53.448 11:33:19 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.448 ************************************ 00:04:53.448 START TEST exit_on_failed_rpc_init 00:04:53.448 ************************************ 00:04:53.448 11:33:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:53.448 11:33:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2819073 00:04:53.448 11:33:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:53.448 11:33:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2819073 
00:04:53.448 11:33:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 2819073 ']' 00:04:53.448 11:33:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:53.448 11:33:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:53.448 11:33:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:53.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:53.448 11:33:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:53.448 11:33:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:53.448 [2024-11-18 11:33:19.212266] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:04:53.448 [2024-11-18 11:33:19.212428] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2819073 ] 00:04:53.706 [2024-11-18 11:33:19.360451] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.706 [2024-11-18 11:33:19.497513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.640 11:33:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:54.640 11:33:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:54.640 11:33:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:54.640 11:33:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:54.640 
11:33:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:54.641 11:33:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:54.641 11:33:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:54.641 11:33:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:54.641 11:33:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:54.641 11:33:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:54.641 11:33:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:54.641 11:33:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:54.641 11:33:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:54.641 11:33:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:54.641 11:33:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:54.899 [2024-11-18 11:33:20.582514] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:04:54.899 [2024-11-18 11:33:20.582693] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2819214 ] 00:04:54.899 [2024-11-18 11:33:20.730967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.158 [2024-11-18 11:33:20.869242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:55.158 [2024-11-18 11:33:20.869407] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:04:55.158 [2024-11-18 11:33:20.869442] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:55.158 [2024-11-18 11:33:20.869461] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:55.416 11:33:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:55.416 11:33:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:55.416 11:33:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:55.416 11:33:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:55.416 11:33:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:55.416 11:33:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:55.416 11:33:21 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:55.416 11:33:21 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2819073 00:04:55.416 11:33:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 2819073 ']' 00:04:55.416 11:33:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 2819073 00:04:55.416 11:33:21 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:55.416 11:33:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:55.416 11:33:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2819073 00:04:55.416 11:33:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:55.416 11:33:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:55.416 11:33:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2819073' 00:04:55.416 killing process with pid 2819073 00:04:55.416 11:33:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 2819073 00:04:55.416 11:33:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 2819073 00:04:57.946 00:04:57.946 real 0m4.473s 00:04:57.946 user 0m4.969s 00:04:57.946 sys 0m0.795s 00:04:57.946 11:33:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:57.946 11:33:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:57.946 ************************************ 00:04:57.946 END TEST exit_on_failed_rpc_init 00:04:57.946 ************************************ 00:04:57.946 11:33:23 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:57.946 00:04:57.946 real 0m23.929s 00:04:57.946 user 0m23.084s 00:04:57.946 sys 0m2.704s 00:04:57.946 11:33:23 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:57.946 11:33:23 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.946 ************************************ 00:04:57.946 END TEST skip_rpc 00:04:57.946 ************************************ 00:04:57.946 11:33:23 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:57.946 11:33:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:57.946 11:33:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:57.946 11:33:23 -- common/autotest_common.sh@10 -- # set +x 00:04:57.946 ************************************ 00:04:57.946 START TEST rpc_client 00:04:57.946 ************************************ 00:04:57.946 11:33:23 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:57.946 * Looking for test storage... 00:04:57.946 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:57.946 11:33:23 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:57.946 11:33:23 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:04:57.946 11:33:23 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:57.946 11:33:23 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:57.946 11:33:23 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:57.946 11:33:23 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:57.946 11:33:23 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:57.946 11:33:23 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:57.946 11:33:23 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:57.946 11:33:23 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:57.946 11:33:23 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:57.946 11:33:23 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:57.946 11:33:23 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:57.946 11:33:23 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:57.946 11:33:23 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:57.946 11:33:23 rpc_client -- scripts/common.sh@344 -- # case 
"$op" in 00:04:57.946 11:33:23 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:57.946 11:33:23 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:57.946 11:33:23 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:57.946 11:33:23 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:57.946 11:33:23 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:57.946 11:33:23 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:57.946 11:33:23 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:57.946 11:33:23 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:57.946 11:33:23 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:57.946 11:33:23 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:57.946 11:33:23 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:57.946 11:33:23 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:57.946 11:33:23 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:57.946 11:33:23 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:57.946 11:33:23 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:57.946 11:33:23 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:57.946 11:33:23 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:57.946 11:33:23 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:57.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.946 --rc genhtml_branch_coverage=1 00:04:57.946 --rc genhtml_function_coverage=1 00:04:57.946 --rc genhtml_legend=1 00:04:57.946 --rc geninfo_all_blocks=1 00:04:57.946 --rc geninfo_unexecuted_blocks=1 00:04:57.946 00:04:57.946 ' 00:04:57.946 11:33:23 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:57.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.946 --rc genhtml_branch_coverage=1 
00:04:57.946 --rc genhtml_function_coverage=1 00:04:57.946 --rc genhtml_legend=1 00:04:57.946 --rc geninfo_all_blocks=1 00:04:57.946 --rc geninfo_unexecuted_blocks=1 00:04:57.946 00:04:57.946 ' 00:04:57.946 11:33:23 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:57.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.946 --rc genhtml_branch_coverage=1 00:04:57.946 --rc genhtml_function_coverage=1 00:04:57.946 --rc genhtml_legend=1 00:04:57.946 --rc geninfo_all_blocks=1 00:04:57.946 --rc geninfo_unexecuted_blocks=1 00:04:57.946 00:04:57.946 ' 00:04:57.946 11:33:23 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:57.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.946 --rc genhtml_branch_coverage=1 00:04:57.946 --rc genhtml_function_coverage=1 00:04:57.946 --rc genhtml_legend=1 00:04:57.946 --rc geninfo_all_blocks=1 00:04:57.946 --rc geninfo_unexecuted_blocks=1 00:04:57.946 00:04:57.946 ' 00:04:57.946 11:33:23 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:57.946 OK 00:04:58.205 11:33:23 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:58.205 00:04:58.205 real 0m0.191s 00:04:58.205 user 0m0.111s 00:04:58.205 sys 0m0.090s 00:04:58.205 11:33:23 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:58.205 11:33:23 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:58.205 ************************************ 00:04:58.205 END TEST rpc_client 00:04:58.205 ************************************ 00:04:58.205 11:33:23 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:58.205 11:33:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:58.205 11:33:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:58.205 11:33:23 -- common/autotest_common.sh@10 
-- # set +x 00:04:58.205 ************************************ 00:04:58.205 START TEST json_config 00:04:58.205 ************************************ 00:04:58.205 11:33:23 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:58.205 11:33:23 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:58.205 11:33:23 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:04:58.205 11:33:23 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:58.205 11:33:24 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:58.205 11:33:24 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:58.205 11:33:24 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:58.205 11:33:24 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:58.205 11:33:24 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:58.205 11:33:24 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:58.205 11:33:24 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:58.205 11:33:24 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:58.205 11:33:24 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:58.205 11:33:24 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:58.205 11:33:24 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:58.205 11:33:24 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:58.205 11:33:24 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:58.205 11:33:24 json_config -- scripts/common.sh@345 -- # : 1 00:04:58.205 11:33:24 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:58.205 11:33:24 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:58.205 11:33:24 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:58.205 11:33:24 json_config -- scripts/common.sh@353 -- # local d=1 00:04:58.205 11:33:24 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:58.205 11:33:24 json_config -- scripts/common.sh@355 -- # echo 1 00:04:58.205 11:33:24 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:58.205 11:33:24 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:58.205 11:33:24 json_config -- scripts/common.sh@353 -- # local d=2 00:04:58.205 11:33:24 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:58.205 11:33:24 json_config -- scripts/common.sh@355 -- # echo 2 00:04:58.205 11:33:24 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:58.205 11:33:24 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:58.205 11:33:24 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:58.205 11:33:24 json_config -- scripts/common.sh@368 -- # return 0 00:04:58.205 11:33:24 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:58.205 11:33:24 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:58.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.205 --rc genhtml_branch_coverage=1 00:04:58.205 --rc genhtml_function_coverage=1 00:04:58.205 --rc genhtml_legend=1 00:04:58.205 --rc geninfo_all_blocks=1 00:04:58.205 --rc geninfo_unexecuted_blocks=1 00:04:58.205 00:04:58.205 ' 00:04:58.205 11:33:24 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:58.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.205 --rc genhtml_branch_coverage=1 00:04:58.205 --rc genhtml_function_coverage=1 00:04:58.205 --rc genhtml_legend=1 00:04:58.205 --rc geninfo_all_blocks=1 00:04:58.205 --rc geninfo_unexecuted_blocks=1 00:04:58.205 00:04:58.205 ' 00:04:58.205 11:33:24 json_config -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:58.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.205 --rc genhtml_branch_coverage=1 00:04:58.205 --rc genhtml_function_coverage=1 00:04:58.205 --rc genhtml_legend=1 00:04:58.205 --rc geninfo_all_blocks=1 00:04:58.205 --rc geninfo_unexecuted_blocks=1 00:04:58.205 00:04:58.205 ' 00:04:58.205 11:33:24 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:58.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.205 --rc genhtml_branch_coverage=1 00:04:58.205 --rc genhtml_function_coverage=1 00:04:58.205 --rc genhtml_legend=1 00:04:58.205 --rc geninfo_all_blocks=1 00:04:58.205 --rc geninfo_unexecuted_blocks=1 00:04:58.205 00:04:58.205 ' 00:04:58.205 11:33:24 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:58.205 11:33:24 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:58.205 11:33:24 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:58.205 11:33:24 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:58.205 11:33:24 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:58.205 11:33:24 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:58.205 11:33:24 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:58.205 11:33:24 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:58.205 11:33:24 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:58.205 11:33:24 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:58.205 11:33:24 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:58.205 11:33:24 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:58.205 11:33:24 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:04:58.205 11:33:24 json_config -- nvmf/common.sh@18 -- 
# NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:04:58.205 11:33:24 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:58.205 11:33:24 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:58.205 11:33:24 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:58.205 11:33:24 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:58.205 11:33:24 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:58.205 11:33:24 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:58.205 11:33:24 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:58.205 11:33:24 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:58.205 11:33:24 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:58.206 11:33:24 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:58.206 11:33:24 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:58.206 11:33:24 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:58.206 11:33:24 json_config -- paths/export.sh@5 -- # export PATH 00:04:58.206 11:33:24 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:58.206 11:33:24 json_config -- nvmf/common.sh@51 -- # : 0 00:04:58.206 11:33:24 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:58.206 11:33:24 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:58.206 11:33:24 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:58.206 11:33:24 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:58.206 11:33:24 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:58.206 11:33:24 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:58.206 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:58.206 11:33:24 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:58.206 11:33:24 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:58.206 11:33:24 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:58.206 11:33:24 json_config -- json_config/json_config.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:58.206 11:33:24 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:58.206 11:33:24 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:58.206 11:33:24 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:58.206 11:33:24 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:58.206 11:33:24 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:58.206 11:33:24 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:58.206 11:33:24 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:58.206 11:33:24 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:58.206 11:33:24 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:58.206 11:33:24 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:58.206 11:33:24 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:58.206 11:33:24 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:58.206 11:33:24 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:58.206 11:33:24 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:58.206 11:33:24 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:58.206 INFO: JSON configuration test init 00:04:58.206 11:33:24 
json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:58.206 11:33:24 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:58.206 11:33:24 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:58.206 11:33:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:58.206 11:33:24 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:58.206 11:33:24 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:58.206 11:33:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:58.206 11:33:24 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:58.206 11:33:24 json_config -- json_config/common.sh@9 -- # local app=target 00:04:58.206 11:33:24 json_config -- json_config/common.sh@10 -- # shift 00:04:58.206 11:33:24 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:58.206 11:33:24 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:58.206 11:33:24 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:58.206 11:33:24 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:58.206 11:33:24 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:58.206 11:33:24 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2819854 00:04:58.206 11:33:24 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:58.206 11:33:24 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:58.206 Waiting for target to run... 
00:04:58.206 11:33:24 json_config -- json_config/common.sh@25 -- # waitforlisten 2819854 /var/tmp/spdk_tgt.sock 00:04:58.206 11:33:24 json_config -- common/autotest_common.sh@835 -- # '[' -z 2819854 ']' 00:04:58.206 11:33:24 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:58.206 11:33:24 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:58.206 11:33:24 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:58.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:58.206 11:33:24 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:58.206 11:33:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:58.465 [2024-11-18 11:33:24.154273] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:04:58.465 [2024-11-18 11:33:24.154422] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2819854 ] 00:04:59.031 [2024-11-18 11:33:24.752354] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.031 [2024-11-18 11:33:24.881169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.290 11:33:25 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:59.290 11:33:25 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:59.290 11:33:25 json_config -- json_config/common.sh@26 -- # echo '' 00:04:59.290 00:04:59.290 11:33:25 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:59.290 11:33:25 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:59.290 11:33:25 json_config -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:04:59.290 11:33:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:59.290 11:33:25 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:59.290 11:33:25 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:59.290 11:33:25 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:59.290 11:33:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:59.290 11:33:25 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:59.290 11:33:25 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:59.290 11:33:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:03.484 11:33:29 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:05:03.484 11:33:29 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:03.484 11:33:29 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:03.484 11:33:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:03.484 11:33:29 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:03.484 11:33:29 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:03.484 11:33:29 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:03.484 11:33:29 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:05:03.484 11:33:29 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:05:03.484 11:33:29 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:03.484 11:33:29 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:03.484 11:33:29 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:03.484 11:33:29 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:05:03.484 11:33:29 json_config -- json_config/json_config.sh@51 -- # local get_types 00:05:03.484 11:33:29 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:05:03.484 11:33:29 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:05:03.484 11:33:29 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:05:03.484 11:33:29 json_config -- json_config/json_config.sh@54 -- # sort 00:05:03.484 11:33:29 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:05:03.484 11:33:29 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:05:03.484 11:33:29 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:05:03.484 11:33:29 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:05:03.484 11:33:29 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:03.484 11:33:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:03.744 11:33:29 json_config -- json_config/json_config.sh@62 -- # return 0 00:05:03.744 11:33:29 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:05:03.744 11:33:29 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:05:03.744 11:33:29 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:05:03.744 11:33:29 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:05:03.744 11:33:29 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:05:03.744 11:33:29 json_config -- 
json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:05:03.744 11:33:29 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:03.744 11:33:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:03.744 11:33:29 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:03.744 11:33:29 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:05:03.744 11:33:29 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:05:03.744 11:33:29 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:03.744 11:33:29 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:04.001 MallocForNvmf0 00:05:04.001 11:33:29 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:04.001 11:33:29 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:04.260 MallocForNvmf1 00:05:04.260 11:33:29 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:04.260 11:33:29 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:04.517 [2024-11-18 11:33:30.193967] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:04.517 11:33:30 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:04.517 11:33:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:04.775 11:33:30 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:04.775 11:33:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:05.033 11:33:30 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:05.033 11:33:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:05.291 11:33:31 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:05.291 11:33:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:05.549 [2024-11-18 11:33:31.293994] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:05.549 11:33:31 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:05:05.549 11:33:31 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:05.549 11:33:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:05.549 11:33:31 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:05:05.549 11:33:31 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:05.549 11:33:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:05.549 11:33:31 json_config -- json_config/json_config.sh@302 -- # 
[[ 0 -eq 1 ]] 00:05:05.549 11:33:31 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:05.549 11:33:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:05.807 MallocBdevForConfigChangeCheck 00:05:05.807 11:33:31 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:05:05.807 11:33:31 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:05.807 11:33:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:05.807 11:33:31 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:05:05.807 11:33:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:06.372 11:33:32 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:05:06.372 INFO: shutting down applications... 
00:05:06.372 11:33:32 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:05:06.372 11:33:32 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:05:06.372 11:33:32 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:05:06.372 11:33:32 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:08.268 Calling clear_iscsi_subsystem 00:05:08.268 Calling clear_nvmf_subsystem 00:05:08.268 Calling clear_nbd_subsystem 00:05:08.268 Calling clear_ublk_subsystem 00:05:08.268 Calling clear_vhost_blk_subsystem 00:05:08.268 Calling clear_vhost_scsi_subsystem 00:05:08.268 Calling clear_bdev_subsystem 00:05:08.268 11:33:33 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:08.268 11:33:33 json_config -- json_config/json_config.sh@350 -- # count=100 00:05:08.268 11:33:33 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:05:08.268 11:33:33 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:08.268 11:33:33 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:08.268 11:33:33 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:08.268 11:33:34 json_config -- json_config/json_config.sh@352 -- # break 00:05:08.268 11:33:34 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:05:08.268 11:33:34 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:05:08.268 11:33:34 json_config -- 
json_config/common.sh@31 -- # local app=target 00:05:08.268 11:33:34 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:08.268 11:33:34 json_config -- json_config/common.sh@35 -- # [[ -n 2819854 ]] 00:05:08.268 11:33:34 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2819854 00:05:08.268 11:33:34 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:08.268 11:33:34 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:08.268 11:33:34 json_config -- json_config/common.sh@41 -- # kill -0 2819854 00:05:08.268 11:33:34 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:08.833 11:33:34 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:08.833 11:33:34 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:08.833 11:33:34 json_config -- json_config/common.sh@41 -- # kill -0 2819854 00:05:08.833 11:33:34 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:09.399 11:33:35 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:09.399 11:33:35 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:09.399 11:33:35 json_config -- json_config/common.sh@41 -- # kill -0 2819854 00:05:09.399 11:33:35 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:09.965 11:33:35 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:09.965 11:33:35 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:09.965 11:33:35 json_config -- json_config/common.sh@41 -- # kill -0 2819854 00:05:09.965 11:33:35 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:09.965 11:33:35 json_config -- json_config/common.sh@43 -- # break 00:05:09.965 11:33:35 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:09.965 11:33:35 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:09.965 SPDK target shutdown done 00:05:09.965 11:33:35 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 
00:05:09.965 INFO: relaunching applications... 00:05:09.965 11:33:35 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:09.965 11:33:35 json_config -- json_config/common.sh@9 -- # local app=target 00:05:09.965 11:33:35 json_config -- json_config/common.sh@10 -- # shift 00:05:09.965 11:33:35 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:09.965 11:33:35 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:09.965 11:33:35 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:09.965 11:33:35 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:09.965 11:33:35 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:09.965 11:33:35 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2821326 00:05:09.965 11:33:35 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:09.965 11:33:35 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:09.965 Waiting for target to run... 00:05:09.965 11:33:35 json_config -- json_config/common.sh@25 -- # waitforlisten 2821326 /var/tmp/spdk_tgt.sock 00:05:09.965 11:33:35 json_config -- common/autotest_common.sh@835 -- # '[' -z 2821326 ']' 00:05:09.965 11:33:35 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:09.965 11:33:35 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:09.965 11:33:35 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:09.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:05:09.966 11:33:35 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:09.966 11:33:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:09.966 [2024-11-18 11:33:35.747837] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:05:09.966 [2024-11-18 11:33:35.747988] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2821326 ] 00:05:10.532 [2024-11-18 11:33:36.351629] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.791 [2024-11-18 11:33:36.481675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.977 [2024-11-18 11:33:40.266835] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:14.977 [2024-11-18 11:33:40.299407] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:14.977 11:33:40 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:14.977 11:33:40 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:14.977 11:33:40 json_config -- json_config/common.sh@26 -- # echo '' 00:05:14.977 00:05:14.977 11:33:40 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:05:14.977 11:33:40 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:14.977 INFO: Checking if target configuration is the same... 
00:05:14.977 11:33:40 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:14.977 11:33:40 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:05:14.977 11:33:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:14.977 + '[' 2 -ne 2 ']' 00:05:14.977 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:14.977 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:14.977 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:14.977 +++ basename /dev/fd/62 00:05:14.977 ++ mktemp /tmp/62.XXX 00:05:14.977 + tmp_file_1=/tmp/62.Ktm 00:05:14.977 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:14.977 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:14.977 + tmp_file_2=/tmp/spdk_tgt_config.json.4sC 00:05:14.977 + ret=0 00:05:14.977 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:14.977 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:14.977 + diff -u /tmp/62.Ktm /tmp/spdk_tgt_config.json.4sC 00:05:14.977 + echo 'INFO: JSON config files are the same' 00:05:14.977 INFO: JSON config files are the same 00:05:14.977 + rm /tmp/62.Ktm /tmp/spdk_tgt_config.json.4sC 00:05:14.977 + exit 0 00:05:14.977 11:33:40 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:05:14.977 11:33:40 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:14.977 INFO: changing configuration and checking if this can be detected... 
00:05:14.977 11:33:40 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:14.977 11:33:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:15.235 11:33:41 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:15.235 11:33:41 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:05:15.235 11:33:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:15.235 + '[' 2 -ne 2 ']' 00:05:15.235 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:15.235 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:05:15.235 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:15.235 +++ basename /dev/fd/62 00:05:15.235 ++ mktemp /tmp/62.XXX 00:05:15.235 + tmp_file_1=/tmp/62.dP5 00:05:15.235 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:15.235 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:15.235 + tmp_file_2=/tmp/spdk_tgt_config.json.6qt 00:05:15.235 + ret=0 00:05:15.235 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:15.801 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:15.801 + diff -u /tmp/62.dP5 /tmp/spdk_tgt_config.json.6qt 00:05:15.801 + ret=1 00:05:15.801 + echo '=== Start of file: /tmp/62.dP5 ===' 00:05:15.801 + cat /tmp/62.dP5 00:05:15.801 + echo '=== End of file: /tmp/62.dP5 ===' 00:05:15.801 + echo '' 00:05:15.801 + echo '=== Start of file: /tmp/spdk_tgt_config.json.6qt ===' 00:05:15.801 + cat /tmp/spdk_tgt_config.json.6qt 00:05:15.801 + echo '=== End of file: /tmp/spdk_tgt_config.json.6qt ===' 00:05:15.801 + echo '' 00:05:15.801 + rm /tmp/62.dP5 /tmp/spdk_tgt_config.json.6qt 00:05:15.801 + exit 1 00:05:15.801 11:33:41 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:05:15.801 INFO: configuration change detected. 
00:05:15.801 11:33:41 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:05:15.801 11:33:41 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:05:15.801 11:33:41 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:15.801 11:33:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:15.801 11:33:41 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:05:15.801 11:33:41 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:05:15.801 11:33:41 json_config -- json_config/json_config.sh@324 -- # [[ -n 2821326 ]] 00:05:15.801 11:33:41 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:05:15.801 11:33:41 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:05:15.801 11:33:41 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:15.801 11:33:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:15.801 11:33:41 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:05:15.801 11:33:41 json_config -- json_config/json_config.sh@200 -- # uname -s 00:05:15.801 11:33:41 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:05:15.801 11:33:41 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:05:15.801 11:33:41 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:05:15.801 11:33:41 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:05:15.801 11:33:41 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:15.801 11:33:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:15.801 11:33:41 json_config -- json_config/json_config.sh@330 -- # killprocess 2821326 00:05:15.801 11:33:41 json_config -- common/autotest_common.sh@954 -- # '[' -z 2821326 ']' 00:05:15.801 11:33:41 json_config -- common/autotest_common.sh@958 -- # kill -0 
2821326 00:05:15.801 11:33:41 json_config -- common/autotest_common.sh@959 -- # uname 00:05:15.801 11:33:41 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:15.801 11:33:41 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2821326 00:05:15.801 11:33:41 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:15.801 11:33:41 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:15.801 11:33:41 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2821326' 00:05:15.801 killing process with pid 2821326 00:05:15.801 11:33:41 json_config -- common/autotest_common.sh@973 -- # kill 2821326 00:05:15.801 11:33:41 json_config -- common/autotest_common.sh@978 -- # wait 2821326 00:05:18.328 11:33:44 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:18.328 11:33:44 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:05:18.328 11:33:44 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:18.328 11:33:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:18.328 11:33:44 json_config -- json_config/json_config.sh@335 -- # return 0 00:05:18.328 11:33:44 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:05:18.328 INFO: Success 00:05:18.328 00:05:18.328 real 0m20.135s 00:05:18.328 user 0m21.243s 00:05:18.328 sys 0m3.283s 00:05:18.328 11:33:44 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:18.328 11:33:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:18.328 ************************************ 00:05:18.328 END TEST json_config 00:05:18.328 ************************************ 00:05:18.328 11:33:44 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:18.328 11:33:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:18.328 11:33:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:18.328 11:33:44 -- common/autotest_common.sh@10 -- # set +x 00:05:18.328 ************************************ 00:05:18.328 START TEST json_config_extra_key 00:05:18.328 ************************************ 00:05:18.328 11:33:44 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:18.328 11:33:44 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:18.329 11:33:44 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:05:18.329 11:33:44 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:18.329 11:33:44 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:18.329 11:33:44 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:18.329 11:33:44 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:18.329 11:33:44 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:18.329 11:33:44 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:18.329 11:33:44 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:18.329 11:33:44 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:18.329 11:33:44 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:18.329 11:33:44 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:18.329 11:33:44 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:18.329 11:33:44 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:18.329 11:33:44 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:05:18.329 11:33:44 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:18.329 11:33:44 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:18.329 11:33:44 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:18.329 11:33:44 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:18.329 11:33:44 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:18.329 11:33:44 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:18.329 11:33:44 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:18.329 11:33:44 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:18.329 11:33:44 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:18.329 11:33:44 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:18.329 11:33:44 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:18.329 11:33:44 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:18.329 11:33:44 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:18.329 11:33:44 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:18.329 11:33:44 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:18.329 11:33:44 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:18.329 11:33:44 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:18.329 11:33:44 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:18.329 11:33:44 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:18.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.329 --rc genhtml_branch_coverage=1 00:05:18.329 --rc genhtml_function_coverage=1 00:05:18.329 --rc genhtml_legend=1 00:05:18.329 --rc geninfo_all_blocks=1 
00:05:18.329 --rc geninfo_unexecuted_blocks=1 00:05:18.329 00:05:18.329 ' 00:05:18.329 11:33:44 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:18.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.329 --rc genhtml_branch_coverage=1 00:05:18.329 --rc genhtml_function_coverage=1 00:05:18.329 --rc genhtml_legend=1 00:05:18.329 --rc geninfo_all_blocks=1 00:05:18.329 --rc geninfo_unexecuted_blocks=1 00:05:18.329 00:05:18.329 ' 00:05:18.329 11:33:44 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:18.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.329 --rc genhtml_branch_coverage=1 00:05:18.329 --rc genhtml_function_coverage=1 00:05:18.329 --rc genhtml_legend=1 00:05:18.329 --rc geninfo_all_blocks=1 00:05:18.329 --rc geninfo_unexecuted_blocks=1 00:05:18.329 00:05:18.329 ' 00:05:18.329 11:33:44 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:18.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.329 --rc genhtml_branch_coverage=1 00:05:18.329 --rc genhtml_function_coverage=1 00:05:18.329 --rc genhtml_legend=1 00:05:18.329 --rc geninfo_all_blocks=1 00:05:18.329 --rc geninfo_unexecuted_blocks=1 00:05:18.329 00:05:18.329 ' 00:05:18.329 11:33:44 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:18.329 11:33:44 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:18.329 11:33:44 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:18.329 11:33:44 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:18.329 11:33:44 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:18.588 11:33:44 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:18.588 11:33:44 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:05:18.588 11:33:44 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:18.588 11:33:44 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:18.588 11:33:44 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:18.588 11:33:44 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:18.588 11:33:44 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:18.588 11:33:44 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:18.588 11:33:44 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:18.588 11:33:44 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:18.588 11:33:44 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:18.588 11:33:44 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:18.588 11:33:44 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:18.588 11:33:44 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:18.588 11:33:44 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:18.588 11:33:44 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:18.588 11:33:44 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:18.588 11:33:44 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:18.588 11:33:44 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.588 11:33:44 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.588 11:33:44 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.588 11:33:44 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:18.588 11:33:44 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.588 11:33:44 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:18.588 11:33:44 json_config_extra_key -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:18.588 11:33:44 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:18.588 11:33:44 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:18.588 11:33:44 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:18.588 11:33:44 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:18.588 11:33:44 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:18.588 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:18.588 11:33:44 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:18.588 11:33:44 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:18.588 11:33:44 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:18.588 11:33:44 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:18.588 11:33:44 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:18.588 11:33:44 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:18.588 11:33:44 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:18.588 11:33:44 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:18.588 11:33:44 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:18.588 11:33:44 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:18.588 11:33:44 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:18.588 11:33:44 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:18.588 11:33:44 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:18.588 11:33:44 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:18.588 INFO: launching applications... 00:05:18.588 11:33:44 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:18.588 11:33:44 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:18.588 11:33:44 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:18.588 11:33:44 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:18.588 11:33:44 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:18.588 11:33:44 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:18.588 11:33:44 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:18.588 11:33:44 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:18.588 11:33:44 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2822487 00:05:18.588 11:33:44 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:18.588 11:33:44 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:18.588 Waiting for target to run... 
00:05:18.588 11:33:44 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2822487 /var/tmp/spdk_tgt.sock 00:05:18.588 11:33:44 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 2822487 ']' 00:05:18.588 11:33:44 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:18.588 11:33:44 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:18.588 11:33:44 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:18.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:18.588 11:33:44 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:18.588 11:33:44 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:18.588 [2024-11-18 11:33:44.328955] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:05:18.588 [2024-11-18 11:33:44.329095] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2822487 ] 00:05:19.156 [2024-11-18 11:33:44.917609] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.419 [2024-11-18 11:33:45.050143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.052 11:33:45 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:20.052 11:33:45 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:20.052 11:33:45 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:20.052 00:05:20.052 11:33:45 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:05:20.052 INFO: shutting down applications... 00:05:20.052 11:33:45 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:20.052 11:33:45 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:20.052 11:33:45 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:20.052 11:33:45 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2822487 ]] 00:05:20.052 11:33:45 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2822487 00:05:20.052 11:33:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:20.052 11:33:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:20.052 11:33:45 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2822487 00:05:20.052 11:33:45 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:20.617 11:33:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:20.617 11:33:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:20.617 11:33:46 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2822487 00:05:20.617 11:33:46 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:21.183 11:33:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:21.183 11:33:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:21.183 11:33:46 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2822487 00:05:21.183 11:33:46 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:21.441 11:33:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:21.441 11:33:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:21.441 11:33:47 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2822487 00:05:21.441 11:33:47 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:22.008 
11:33:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:22.008 11:33:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:22.008 11:33:47 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2822487 00:05:22.008 11:33:47 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:22.574 11:33:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:22.574 11:33:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:22.574 11:33:48 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2822487 00:05:22.574 11:33:48 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:23.140 11:33:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:23.140 11:33:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:23.140 11:33:48 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2822487 00:05:23.140 11:33:48 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:23.141 11:33:48 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:23.141 11:33:48 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:23.141 11:33:48 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:23.141 SPDK target shutdown done 00:05:23.141 11:33:48 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:23.141 Success 00:05:23.141 00:05:23.141 real 0m4.755s 00:05:23.141 user 0m4.268s 00:05:23.141 sys 0m0.854s 00:05:23.141 11:33:48 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:23.141 11:33:48 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:23.141 ************************************ 00:05:23.141 END TEST json_config_extra_key 00:05:23.141 ************************************ 00:05:23.141 11:33:48 -- spdk/autotest.sh@161 -- # run_test 
alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:23.141 11:33:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:23.141 11:33:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:23.141 11:33:48 -- common/autotest_common.sh@10 -- # set +x 00:05:23.141 ************************************ 00:05:23.141 START TEST alias_rpc 00:05:23.141 ************************************ 00:05:23.141 11:33:48 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:23.141 * Looking for test storage... 00:05:23.141 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:23.141 11:33:48 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:23.141 11:33:48 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:23.141 11:33:48 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:23.141 11:33:49 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:23.141 11:33:49 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:23.141 11:33:49 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:23.141 11:33:49 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:23.141 11:33:49 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:23.141 11:33:49 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:23.141 11:33:49 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:23.141 11:33:49 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:23.141 11:33:49 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:23.141 11:33:49 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:23.141 11:33:49 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:23.141 11:33:49 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:23.141 11:33:49 alias_rpc -- 
scripts/common.sh@344 -- # case "$op" in 00:05:23.141 11:33:49 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:23.141 11:33:49 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:23.141 11:33:49 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:23.141 11:33:49 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:23.141 11:33:49 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:23.141 11:33:49 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:23.141 11:33:49 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:23.141 11:33:49 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:23.399 11:33:49 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:23.399 11:33:49 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:23.399 11:33:49 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:23.399 11:33:49 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:23.399 11:33:49 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:23.399 11:33:49 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:23.399 11:33:49 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:23.399 11:33:49 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:23.399 11:33:49 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:23.399 11:33:49 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:23.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.400 --rc genhtml_branch_coverage=1 00:05:23.400 --rc genhtml_function_coverage=1 00:05:23.400 --rc genhtml_legend=1 00:05:23.400 --rc geninfo_all_blocks=1 00:05:23.400 --rc geninfo_unexecuted_blocks=1 00:05:23.400 00:05:23.400 ' 00:05:23.400 11:33:49 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:23.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.400 --rc 
genhtml_branch_coverage=1 00:05:23.400 --rc genhtml_function_coverage=1 00:05:23.400 --rc genhtml_legend=1 00:05:23.400 --rc geninfo_all_blocks=1 00:05:23.400 --rc geninfo_unexecuted_blocks=1 00:05:23.400 00:05:23.400 ' 00:05:23.400 11:33:49 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:23.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.400 --rc genhtml_branch_coverage=1 00:05:23.400 --rc genhtml_function_coverage=1 00:05:23.400 --rc genhtml_legend=1 00:05:23.400 --rc geninfo_all_blocks=1 00:05:23.400 --rc geninfo_unexecuted_blocks=1 00:05:23.400 00:05:23.400 ' 00:05:23.400 11:33:49 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:23.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.400 --rc genhtml_branch_coverage=1 00:05:23.400 --rc genhtml_function_coverage=1 00:05:23.400 --rc genhtml_legend=1 00:05:23.400 --rc geninfo_all_blocks=1 00:05:23.400 --rc geninfo_unexecuted_blocks=1 00:05:23.400 00:05:23.400 ' 00:05:23.400 11:33:49 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:23.400 11:33:49 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2823110 00:05:23.400 11:33:49 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:23.400 11:33:49 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2823110 00:05:23.400 11:33:49 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 2823110 ']' 00:05:23.400 11:33:49 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.400 11:33:49 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:23.400 11:33:49 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:23.400 11:33:49 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:23.400 11:33:49 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.400 [2024-11-18 11:33:49.129081] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:05:23.400 [2024-11-18 11:33:49.129235] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2823110 ] 00:05:23.400 [2024-11-18 11:33:49.265701] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.657 [2024-11-18 11:33:49.399964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.591 11:33:50 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:24.591 11:33:50 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:24.591 11:33:50 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:24.848 11:33:50 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2823110 00:05:24.848 11:33:50 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 2823110 ']' 00:05:24.848 11:33:50 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 2823110 00:05:24.848 11:33:50 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:24.848 11:33:50 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:24.848 11:33:50 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2823110 00:05:24.848 11:33:50 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:24.848 11:33:50 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:24.848 11:33:50 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2823110' 00:05:24.848 killing process with pid 2823110 00:05:24.848 11:33:50 
alias_rpc -- common/autotest_common.sh@973 -- # kill 2823110 00:05:24.848 11:33:50 alias_rpc -- common/autotest_common.sh@978 -- # wait 2823110 00:05:27.377 00:05:27.377 real 0m4.230s 00:05:27.377 user 0m4.388s 00:05:27.377 sys 0m0.674s 00:05:27.377 11:33:53 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:27.377 11:33:53 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.377 ************************************ 00:05:27.377 END TEST alias_rpc 00:05:27.377 ************************************ 00:05:27.377 11:33:53 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:27.377 11:33:53 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:27.377 11:33:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:27.377 11:33:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:27.377 11:33:53 -- common/autotest_common.sh@10 -- # set +x 00:05:27.377 ************************************ 00:05:27.377 START TEST spdkcli_tcp 00:05:27.377 ************************************ 00:05:27.377 11:33:53 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:27.377 * Looking for test storage... 
00:05:27.377 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:27.377 11:33:53 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:27.377 11:33:53 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:05:27.377 11:33:53 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:27.635 11:33:53 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:27.635 11:33:53 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:27.635 11:33:53 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:27.635 11:33:53 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:27.635 11:33:53 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:27.635 11:33:53 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:27.635 11:33:53 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:27.635 11:33:53 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:27.635 11:33:53 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:27.635 11:33:53 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:27.635 11:33:53 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:27.635 11:33:53 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:27.635 11:33:53 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:27.635 11:33:53 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:27.635 11:33:53 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:27.635 11:33:53 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:27.635 11:33:53 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:27.635 11:33:53 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:27.635 11:33:53 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:27.635 11:33:53 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:27.635 11:33:53 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:27.635 11:33:53 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:27.635 11:33:53 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:27.635 11:33:53 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:27.635 11:33:53 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:27.635 11:33:53 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:27.635 11:33:53 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:27.635 11:33:53 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:27.635 11:33:53 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:27.635 11:33:53 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:27.635 11:33:53 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:27.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.635 --rc genhtml_branch_coverage=1 00:05:27.635 --rc genhtml_function_coverage=1 00:05:27.635 --rc genhtml_legend=1 00:05:27.635 --rc geninfo_all_blocks=1 00:05:27.635 --rc geninfo_unexecuted_blocks=1 00:05:27.635 00:05:27.635 ' 00:05:27.635 11:33:53 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:27.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.635 --rc genhtml_branch_coverage=1 00:05:27.635 --rc genhtml_function_coverage=1 00:05:27.635 --rc genhtml_legend=1 00:05:27.635 --rc geninfo_all_blocks=1 00:05:27.635 --rc geninfo_unexecuted_blocks=1 00:05:27.635 00:05:27.635 ' 00:05:27.635 11:33:53 spdkcli_tcp -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:27.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.636 --rc genhtml_branch_coverage=1 00:05:27.636 --rc genhtml_function_coverage=1 00:05:27.636 --rc genhtml_legend=1 00:05:27.636 --rc geninfo_all_blocks=1 00:05:27.636 --rc geninfo_unexecuted_blocks=1 00:05:27.636 00:05:27.636 ' 00:05:27.636 11:33:53 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:27.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.636 --rc genhtml_branch_coverage=1 00:05:27.636 --rc genhtml_function_coverage=1 00:05:27.636 --rc genhtml_legend=1 00:05:27.636 --rc geninfo_all_blocks=1 00:05:27.636 --rc geninfo_unexecuted_blocks=1 00:05:27.636 00:05:27.636 ' 00:05:27.636 11:33:53 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:27.636 11:33:53 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:27.636 11:33:53 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:27.636 11:33:53 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:27.636 11:33:53 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:27.636 11:33:53 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:27.636 11:33:53 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:27.636 11:33:53 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:27.636 11:33:53 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:27.636 11:33:53 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2823695 00:05:27.636 11:33:53 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2823695 00:05:27.636 11:33:53 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0
00:05:27.636 11:33:53 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 2823695 ']'
00:05:27.636 11:33:53 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:27.636 11:33:53 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:27.636 11:33:53 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:27.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:27.636 11:33:53 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:27.636 11:33:53 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:05:27.636 [2024-11-18 11:33:53.403426] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization...
00:05:27.636 [2024-11-18 11:33:53.403594] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2823695 ]
00:05:27.894 [2024-11-18 11:33:53.548834] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:27.894 [2024-11-18 11:33:53.689401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:27.894 [2024-11-18 11:33:53.689401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:28.828 11:33:54 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:28.828 11:33:54 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0
00:05:28.828 11:33:54 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2823845
00:05:28.828 11:33:54 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock
00:05:28.828 11:33:54 spdkcli_tcp -- spdkcli/tcp.sh@33 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:29.086 [ 00:05:29.086 "bdev_malloc_delete", 00:05:29.086 "bdev_malloc_create", 00:05:29.086 "bdev_null_resize", 00:05:29.086 "bdev_null_delete", 00:05:29.086 "bdev_null_create", 00:05:29.086 "bdev_nvme_cuse_unregister", 00:05:29.086 "bdev_nvme_cuse_register", 00:05:29.086 "bdev_opal_new_user", 00:05:29.086 "bdev_opal_set_lock_state", 00:05:29.086 "bdev_opal_delete", 00:05:29.086 "bdev_opal_get_info", 00:05:29.087 "bdev_opal_create", 00:05:29.087 "bdev_nvme_opal_revert", 00:05:29.087 "bdev_nvme_opal_init", 00:05:29.087 "bdev_nvme_send_cmd", 00:05:29.087 "bdev_nvme_set_keys", 00:05:29.087 "bdev_nvme_get_path_iostat", 00:05:29.087 "bdev_nvme_get_mdns_discovery_info", 00:05:29.087 "bdev_nvme_stop_mdns_discovery", 00:05:29.087 "bdev_nvme_start_mdns_discovery", 00:05:29.087 "bdev_nvme_set_multipath_policy", 00:05:29.087 "bdev_nvme_set_preferred_path", 00:05:29.087 "bdev_nvme_get_io_paths", 00:05:29.087 "bdev_nvme_remove_error_injection", 00:05:29.087 "bdev_nvme_add_error_injection", 00:05:29.087 "bdev_nvme_get_discovery_info", 00:05:29.087 "bdev_nvme_stop_discovery", 00:05:29.087 "bdev_nvme_start_discovery", 00:05:29.087 "bdev_nvme_get_controller_health_info", 00:05:29.087 "bdev_nvme_disable_controller", 00:05:29.087 "bdev_nvme_enable_controller", 00:05:29.087 "bdev_nvme_reset_controller", 00:05:29.087 "bdev_nvme_get_transport_statistics", 00:05:29.087 "bdev_nvme_apply_firmware", 00:05:29.087 "bdev_nvme_detach_controller", 00:05:29.087 "bdev_nvme_get_controllers", 00:05:29.087 "bdev_nvme_attach_controller", 00:05:29.087 "bdev_nvme_set_hotplug", 00:05:29.087 "bdev_nvme_set_options", 00:05:29.087 "bdev_passthru_delete", 00:05:29.087 "bdev_passthru_create", 00:05:29.087 "bdev_lvol_set_parent_bdev", 00:05:29.087 "bdev_lvol_set_parent", 00:05:29.087 "bdev_lvol_check_shallow_copy", 00:05:29.087 "bdev_lvol_start_shallow_copy", 00:05:29.087 
"bdev_lvol_grow_lvstore", 00:05:29.087 "bdev_lvol_get_lvols", 00:05:29.087 "bdev_lvol_get_lvstores", 00:05:29.087 "bdev_lvol_delete", 00:05:29.087 "bdev_lvol_set_read_only", 00:05:29.087 "bdev_lvol_resize", 00:05:29.087 "bdev_lvol_decouple_parent", 00:05:29.087 "bdev_lvol_inflate", 00:05:29.087 "bdev_lvol_rename", 00:05:29.087 "bdev_lvol_clone_bdev", 00:05:29.087 "bdev_lvol_clone", 00:05:29.087 "bdev_lvol_snapshot", 00:05:29.087 "bdev_lvol_create", 00:05:29.087 "bdev_lvol_delete_lvstore", 00:05:29.087 "bdev_lvol_rename_lvstore", 00:05:29.087 "bdev_lvol_create_lvstore", 00:05:29.087 "bdev_raid_set_options", 00:05:29.087 "bdev_raid_remove_base_bdev", 00:05:29.087 "bdev_raid_add_base_bdev", 00:05:29.087 "bdev_raid_delete", 00:05:29.087 "bdev_raid_create", 00:05:29.087 "bdev_raid_get_bdevs", 00:05:29.087 "bdev_error_inject_error", 00:05:29.087 "bdev_error_delete", 00:05:29.087 "bdev_error_create", 00:05:29.087 "bdev_split_delete", 00:05:29.087 "bdev_split_create", 00:05:29.087 "bdev_delay_delete", 00:05:29.087 "bdev_delay_create", 00:05:29.087 "bdev_delay_update_latency", 00:05:29.087 "bdev_zone_block_delete", 00:05:29.087 "bdev_zone_block_create", 00:05:29.087 "blobfs_create", 00:05:29.087 "blobfs_detect", 00:05:29.087 "blobfs_set_cache_size", 00:05:29.087 "bdev_aio_delete", 00:05:29.087 "bdev_aio_rescan", 00:05:29.087 "bdev_aio_create", 00:05:29.087 "bdev_ftl_set_property", 00:05:29.087 "bdev_ftl_get_properties", 00:05:29.087 "bdev_ftl_get_stats", 00:05:29.087 "bdev_ftl_unmap", 00:05:29.087 "bdev_ftl_unload", 00:05:29.087 "bdev_ftl_delete", 00:05:29.087 "bdev_ftl_load", 00:05:29.087 "bdev_ftl_create", 00:05:29.087 "bdev_virtio_attach_controller", 00:05:29.087 "bdev_virtio_scsi_get_devices", 00:05:29.087 "bdev_virtio_detach_controller", 00:05:29.087 "bdev_virtio_blk_set_hotplug", 00:05:29.087 "bdev_iscsi_delete", 00:05:29.087 "bdev_iscsi_create", 00:05:29.087 "bdev_iscsi_set_options", 00:05:29.087 "accel_error_inject_error", 00:05:29.087 "ioat_scan_accel_module", 
00:05:29.087 "dsa_scan_accel_module", 00:05:29.087 "iaa_scan_accel_module", 00:05:29.087 "keyring_file_remove_key", 00:05:29.087 "keyring_file_add_key", 00:05:29.087 "keyring_linux_set_options", 00:05:29.087 "fsdev_aio_delete", 00:05:29.087 "fsdev_aio_create", 00:05:29.087 "iscsi_get_histogram", 00:05:29.087 "iscsi_enable_histogram", 00:05:29.087 "iscsi_set_options", 00:05:29.087 "iscsi_get_auth_groups", 00:05:29.087 "iscsi_auth_group_remove_secret", 00:05:29.087 "iscsi_auth_group_add_secret", 00:05:29.087 "iscsi_delete_auth_group", 00:05:29.087 "iscsi_create_auth_group", 00:05:29.087 "iscsi_set_discovery_auth", 00:05:29.087 "iscsi_get_options", 00:05:29.087 "iscsi_target_node_request_logout", 00:05:29.087 "iscsi_target_node_set_redirect", 00:05:29.087 "iscsi_target_node_set_auth", 00:05:29.087 "iscsi_target_node_add_lun", 00:05:29.087 "iscsi_get_stats", 00:05:29.087 "iscsi_get_connections", 00:05:29.087 "iscsi_portal_group_set_auth", 00:05:29.087 "iscsi_start_portal_group", 00:05:29.087 "iscsi_delete_portal_group", 00:05:29.087 "iscsi_create_portal_group", 00:05:29.087 "iscsi_get_portal_groups", 00:05:29.087 "iscsi_delete_target_node", 00:05:29.087 "iscsi_target_node_remove_pg_ig_maps", 00:05:29.087 "iscsi_target_node_add_pg_ig_maps", 00:05:29.087 "iscsi_create_target_node", 00:05:29.087 "iscsi_get_target_nodes", 00:05:29.087 "iscsi_delete_initiator_group", 00:05:29.087 "iscsi_initiator_group_remove_initiators", 00:05:29.087 "iscsi_initiator_group_add_initiators", 00:05:29.087 "iscsi_create_initiator_group", 00:05:29.087 "iscsi_get_initiator_groups", 00:05:29.087 "nvmf_set_crdt", 00:05:29.087 "nvmf_set_config", 00:05:29.087 "nvmf_set_max_subsystems", 00:05:29.087 "nvmf_stop_mdns_prr", 00:05:29.087 "nvmf_publish_mdns_prr", 00:05:29.087 "nvmf_subsystem_get_listeners", 00:05:29.087 "nvmf_subsystem_get_qpairs", 00:05:29.087 "nvmf_subsystem_get_controllers", 00:05:29.087 "nvmf_get_stats", 00:05:29.087 "nvmf_get_transports", 00:05:29.087 "nvmf_create_transport", 
00:05:29.087 "nvmf_get_targets", 00:05:29.087 "nvmf_delete_target", 00:05:29.087 "nvmf_create_target", 00:05:29.087 "nvmf_subsystem_allow_any_host", 00:05:29.087 "nvmf_subsystem_set_keys", 00:05:29.087 "nvmf_subsystem_remove_host", 00:05:29.087 "nvmf_subsystem_add_host", 00:05:29.087 "nvmf_ns_remove_host", 00:05:29.087 "nvmf_ns_add_host", 00:05:29.087 "nvmf_subsystem_remove_ns", 00:05:29.087 "nvmf_subsystem_set_ns_ana_group", 00:05:29.087 "nvmf_subsystem_add_ns", 00:05:29.087 "nvmf_subsystem_listener_set_ana_state", 00:05:29.087 "nvmf_discovery_get_referrals", 00:05:29.087 "nvmf_discovery_remove_referral", 00:05:29.087 "nvmf_discovery_add_referral", 00:05:29.087 "nvmf_subsystem_remove_listener", 00:05:29.087 "nvmf_subsystem_add_listener", 00:05:29.087 "nvmf_delete_subsystem", 00:05:29.087 "nvmf_create_subsystem", 00:05:29.087 "nvmf_get_subsystems", 00:05:29.087 "env_dpdk_get_mem_stats", 00:05:29.087 "nbd_get_disks", 00:05:29.087 "nbd_stop_disk", 00:05:29.087 "nbd_start_disk", 00:05:29.087 "ublk_recover_disk", 00:05:29.087 "ublk_get_disks", 00:05:29.087 "ublk_stop_disk", 00:05:29.087 "ublk_start_disk", 00:05:29.087 "ublk_destroy_target", 00:05:29.087 "ublk_create_target", 00:05:29.087 "virtio_blk_create_transport", 00:05:29.087 "virtio_blk_get_transports", 00:05:29.087 "vhost_controller_set_coalescing", 00:05:29.087 "vhost_get_controllers", 00:05:29.087 "vhost_delete_controller", 00:05:29.087 "vhost_create_blk_controller", 00:05:29.087 "vhost_scsi_controller_remove_target", 00:05:29.087 "vhost_scsi_controller_add_target", 00:05:29.087 "vhost_start_scsi_controller", 00:05:29.087 "vhost_create_scsi_controller", 00:05:29.087 "thread_set_cpumask", 00:05:29.087 "scheduler_set_options", 00:05:29.087 "framework_get_governor", 00:05:29.087 "framework_get_scheduler", 00:05:29.087 "framework_set_scheduler", 00:05:29.087 "framework_get_reactors", 00:05:29.087 "thread_get_io_channels", 00:05:29.087 "thread_get_pollers", 00:05:29.087 "thread_get_stats", 00:05:29.087 
"framework_monitor_context_switch", 00:05:29.087 "spdk_kill_instance", 00:05:29.087 "log_enable_timestamps", 00:05:29.087 "log_get_flags", 00:05:29.087 "log_clear_flag", 00:05:29.087 "log_set_flag", 00:05:29.087 "log_get_level", 00:05:29.087 "log_set_level", 00:05:29.087 "log_get_print_level", 00:05:29.087 "log_set_print_level", 00:05:29.087 "framework_enable_cpumask_locks", 00:05:29.087 "framework_disable_cpumask_locks", 00:05:29.087 "framework_wait_init", 00:05:29.087 "framework_start_init", 00:05:29.087 "scsi_get_devices", 00:05:29.087 "bdev_get_histogram", 00:05:29.087 "bdev_enable_histogram", 00:05:29.087 "bdev_set_qos_limit", 00:05:29.087 "bdev_set_qd_sampling_period", 00:05:29.087 "bdev_get_bdevs", 00:05:29.087 "bdev_reset_iostat", 00:05:29.087 "bdev_get_iostat", 00:05:29.087 "bdev_examine", 00:05:29.087 "bdev_wait_for_examine", 00:05:29.087 "bdev_set_options", 00:05:29.087 "accel_get_stats", 00:05:29.087 "accel_set_options", 00:05:29.087 "accel_set_driver", 00:05:29.087 "accel_crypto_key_destroy", 00:05:29.087 "accel_crypto_keys_get", 00:05:29.087 "accel_crypto_key_create", 00:05:29.087 "accel_assign_opc", 00:05:29.087 "accel_get_module_info", 00:05:29.087 "accel_get_opc_assignments", 00:05:29.087 "vmd_rescan", 00:05:29.087 "vmd_remove_device", 00:05:29.087 "vmd_enable", 00:05:29.087 "sock_get_default_impl", 00:05:29.087 "sock_set_default_impl", 00:05:29.088 "sock_impl_set_options", 00:05:29.088 "sock_impl_get_options", 00:05:29.088 "iobuf_get_stats", 00:05:29.088 "iobuf_set_options", 00:05:29.088 "keyring_get_keys", 00:05:29.088 "framework_get_pci_devices", 00:05:29.088 "framework_get_config", 00:05:29.088 "framework_get_subsystems", 00:05:29.088 "fsdev_set_opts", 00:05:29.088 "fsdev_get_opts", 00:05:29.088 "trace_get_info", 00:05:29.088 "trace_get_tpoint_group_mask", 00:05:29.088 "trace_disable_tpoint_group", 00:05:29.088 "trace_enable_tpoint_group", 00:05:29.088 "trace_clear_tpoint_mask", 00:05:29.088 "trace_set_tpoint_mask", 00:05:29.088 
"notify_get_notifications", 00:05:29.088 "notify_get_types", 00:05:29.088 "spdk_get_version", 00:05:29.088 "rpc_get_methods" 00:05:29.088 ]
00:05:29.088 11:33:54 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp
00:05:29.088 11:33:54 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable
00:05:29.088 11:33:54 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:05:29.088 11:33:54 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:05:29.088 11:33:54 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2823695
00:05:29.088 11:33:54 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 2823695 ']'
00:05:29.088 11:33:54 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 2823695
00:05:29.088 11:33:54 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname
00:05:29.088 11:33:54 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:29.088 11:33:54 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2823695
00:05:29.346 11:33:54 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:29.346 11:33:54 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:29.346 11:33:54 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2823695'
00:05:29.346 killing process with pid 2823695
00:05:29.346 11:33:54 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 2823695
00:05:29.346 11:33:54 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 2823695
00:05:31.875
00:05:31.875 real 0m4.233s
00:05:31.875 user 0m7.652s
00:05:31.875 sys 0m0.738s
00:05:31.875 11:33:57 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:31.875 11:33:57 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:05:31.875 ************************************
00:05:31.875 END TEST spdkcli_tcp
00:05:31.875 ************************************
00:05:31.875 11:33:57 -- spdk/autotest.sh@167 -- # run_test
dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:31.875 11:33:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:31.875 11:33:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:31.875 11:33:57 -- common/autotest_common.sh@10 -- # set +x 00:05:31.875 ************************************ 00:05:31.875 START TEST dpdk_mem_utility 00:05:31.875 ************************************ 00:05:31.875 11:33:57 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:31.875 * Looking for test storage... 00:05:31.875 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:31.875 11:33:57 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:31.875 11:33:57 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:05:31.875 11:33:57 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:31.875 11:33:57 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:31.875 11:33:57 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:31.875 11:33:57 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:31.875 11:33:57 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:31.875 11:33:57 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:31.875 11:33:57 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:31.875 11:33:57 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:31.875 11:33:57 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:31.875 11:33:57 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:31.875 11:33:57 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:31.875 11:33:57 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 
00:05:31.875 11:33:57 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:31.875 11:33:57 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:31.875 11:33:57 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:31.875 11:33:57 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:31.875 11:33:57 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:31.875 11:33:57 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:31.875 11:33:57 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:31.876 11:33:57 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:31.876 11:33:57 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:31.876 11:33:57 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:31.876 11:33:57 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:31.876 11:33:57 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:31.876 11:33:57 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:31.876 11:33:57 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:31.876 11:33:57 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:31.876 11:33:57 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:31.876 11:33:57 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:31.876 11:33:57 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:31.876 11:33:57 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:31.876 11:33:57 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:31.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.876 --rc genhtml_branch_coverage=1 00:05:31.876 --rc genhtml_function_coverage=1 00:05:31.876 --rc genhtml_legend=1 00:05:31.876 --rc geninfo_all_blocks=1 
00:05:31.876 --rc geninfo_unexecuted_blocks=1 00:05:31.876 00:05:31.876 ' 00:05:31.876 11:33:57 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:31.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.876 --rc genhtml_branch_coverage=1 00:05:31.876 --rc genhtml_function_coverage=1 00:05:31.876 --rc genhtml_legend=1 00:05:31.876 --rc geninfo_all_blocks=1 00:05:31.876 --rc geninfo_unexecuted_blocks=1 00:05:31.876 00:05:31.876 ' 00:05:31.876 11:33:57 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:31.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.876 --rc genhtml_branch_coverage=1 00:05:31.876 --rc genhtml_function_coverage=1 00:05:31.876 --rc genhtml_legend=1 00:05:31.876 --rc geninfo_all_blocks=1 00:05:31.876 --rc geninfo_unexecuted_blocks=1 00:05:31.876 00:05:31.876 ' 00:05:31.876 11:33:57 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:31.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.876 --rc genhtml_branch_coverage=1 00:05:31.876 --rc genhtml_function_coverage=1 00:05:31.876 --rc genhtml_legend=1 00:05:31.876 --rc geninfo_all_blocks=1 00:05:31.876 --rc geninfo_unexecuted_blocks=1 00:05:31.876 00:05:31.876 ' 00:05:31.876 11:33:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:31.876 11:33:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2824183 00:05:31.876 11:33:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:31.876 11:33:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2824183 00:05:31.876 11:33:57 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 2824183 ']' 00:05:31.876 11:33:57 dpdk_mem_utility -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:31.876 11:33:57 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:31.876 11:33:57 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:31.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:31.876 11:33:57 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:31.876 11:33:57 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:05:31.876 [2024-11-18 11:33:57.683072] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization...
00:05:31.876 [2024-11-18 11:33:57.683215] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2824183 ]
00:05:32.134 [2024-11-18 11:33:57.835518] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:32.134 [2024-11-18 11:33:57.974625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:33.070 11:33:58 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:33.070 11:33:58 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0
00:05:33.070 11:33:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT
00:05:33.070 11:33:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats
00:05:33.070 11:33:58 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:33.070 11:33:58 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:05:33.070 {
00:05:33.070 "filename": "/tmp/spdk_mem_dump.txt"
00:05:33.070 }
00:05:33.070 11:33:58 dpdk_mem_utility --
common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:33.070 11:33:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py
00:05:33.329 DPDK memory size 816.000000 MiB in 1 heap(s)
00:05:33.329 1 heaps totaling size 816.000000 MiB
00:05:33.329 size: 816.000000 MiB heap id: 0
00:05:33.329 end heaps----------
00:05:33.329 9 mempools totaling size 595.772034 MiB
00:05:33.329 size: 212.674988 MiB name: PDU_immediate_data_Pool
00:05:33.329 size: 158.602051 MiB name: PDU_data_out_Pool
00:05:33.329 size: 92.545471 MiB name: bdev_io_2824183
00:05:33.329 size: 50.003479 MiB name: msgpool_2824183
00:05:33.329 size: 36.509338 MiB name: fsdev_io_2824183
00:05:33.329 size: 21.763794 MiB name: PDU_Pool
00:05:33.329 size: 19.513306 MiB name: SCSI_TASK_Pool
00:05:33.329 size: 4.133484 MiB name: evtpool_2824183
00:05:33.329 size: 0.026123 MiB name: Session_Pool
00:05:33.329 end mempools-------
00:05:33.329 6 memzones totaling size 4.142822 MiB
00:05:33.329 size: 1.000366 MiB name: RG_ring_0_2824183
00:05:33.329 size: 1.000366 MiB name: RG_ring_1_2824183
00:05:33.329 size: 1.000366 MiB name: RG_ring_4_2824183
00:05:33.329 size: 1.000366 MiB name: RG_ring_5_2824183
00:05:33.329 size: 0.125366 MiB name: RG_ring_2_2824183
00:05:33.329 size: 0.015991 MiB name: RG_ring_3_2824183
00:05:33.329 end memzones-------
00:05:33.329 11:33:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0
00:05:33.329 heap id: 0 total size: 816.000000 MiB number of busy elements: 44 number of free elements: 19
00:05:33.329 list of free elements.
size: 16.857605 MiB 00:05:33.329 element at address: 0x200006400000 with size: 1.995972 MiB 00:05:33.329 element at address: 0x20000a600000 with size: 1.995972 MiB 00:05:33.329 element at address: 0x200003e00000 with size: 1.991028 MiB 00:05:33.329 element at address: 0x200018d00040 with size: 0.999939 MiB 00:05:33.329 element at address: 0x200019100040 with size: 0.999939 MiB 00:05:33.329 element at address: 0x200019200000 with size: 0.999329 MiB 00:05:33.329 element at address: 0x200000400000 with size: 0.998108 MiB 00:05:33.329 element at address: 0x200031e00000 with size: 0.994324 MiB 00:05:33.329 element at address: 0x200018a00000 with size: 0.959900 MiB 00:05:33.329 element at address: 0x200019500040 with size: 0.937256 MiB 00:05:33.329 element at address: 0x200000200000 with size: 0.716980 MiB 00:05:33.329 element at address: 0x20001ac00000 with size: 0.583191 MiB 00:05:33.329 element at address: 0x200000c00000 with size: 0.495300 MiB 00:05:33.329 element at address: 0x200018e00000 with size: 0.491150 MiB 00:05:33.329 element at address: 0x200019600000 with size: 0.485657 MiB 00:05:33.329 element at address: 0x200012c00000 with size: 0.446167 MiB 00:05:33.329 element at address: 0x200028000000 with size: 0.411072 MiB 00:05:33.329 element at address: 0x200000800000 with size: 0.355286 MiB 00:05:33.329 element at address: 0x20000a5ff040 with size: 0.001038 MiB 00:05:33.329 list of standard malloc elements. 
size: 199.221497 MiB 00:05:33.329 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:05:33.329 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:05:33.329 element at address: 0x200018bfff80 with size: 1.000183 MiB 00:05:33.329 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:05:33.329 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:33.329 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:33.329 element at address: 0x2000195eff40 with size: 0.062683 MiB 00:05:33.329 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:33.329 element at address: 0x200012bff040 with size: 0.000427 MiB 00:05:33.329 element at address: 0x200012bffa00 with size: 0.000366 MiB 00:05:33.329 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:33.329 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:33.329 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:05:33.329 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:05:33.329 element at address: 0x2000004ffa40 with size: 0.000244 MiB 00:05:33.329 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:05:33.329 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:05:33.329 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:05:33.329 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:05:33.329 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:05:33.329 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:05:33.329 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:05:33.329 element at address: 0x200000cff000 with size: 0.000244 MiB 00:05:33.329 element at address: 0x20000a5ff480 with size: 0.000244 MiB 00:05:33.329 element at address: 0x20000a5ff580 with size: 0.000244 MiB 00:05:33.329 element at address: 0x20000a5ff680 with size: 0.000244 MiB 00:05:33.329 element at address: 0x20000a5ff780 with size: 0.000244 MiB 00:05:33.329 element at 
address: 0x20000a5ff880 with size: 0.000244 MiB 00:05:33.329 element at address: 0x20000a5ff980 with size: 0.000244 MiB 00:05:33.329 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:05:33.329 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:05:33.329 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:05:33.329 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:05:33.329 element at address: 0x200012bff200 with size: 0.000244 MiB 00:05:33.329 element at address: 0x200012bff300 with size: 0.000244 MiB 00:05:33.329 element at address: 0x200012bff400 with size: 0.000244 MiB 00:05:33.329 element at address: 0x200012bff500 with size: 0.000244 MiB 00:05:33.329 element at address: 0x200012bff600 with size: 0.000244 MiB 00:05:33.329 element at address: 0x200012bff700 with size: 0.000244 MiB 00:05:33.329 element at address: 0x200012bff800 with size: 0.000244 MiB 00:05:33.329 element at address: 0x200012bff900 with size: 0.000244 MiB 00:05:33.329 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:05:33.329 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:05:33.329 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:05:33.330 list of memzone associated elements. 
size: 599.920898 MiB 00:05:33.330 element at address: 0x20001ac954c0 with size: 211.416809 MiB 00:05:33.330 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:33.330 element at address: 0x20002806ff80 with size: 157.562622 MiB 00:05:33.330 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:33.330 element at address: 0x200012df4740 with size: 92.045105 MiB 00:05:33.330 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_2824183_0 00:05:33.330 element at address: 0x200000dff340 with size: 48.003113 MiB 00:05:33.330 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2824183_0 00:05:33.330 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:05:33.330 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_2824183_0 00:05:33.330 element at address: 0x2000197be900 with size: 20.255615 MiB 00:05:33.330 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:33.330 element at address: 0x200031ffeb00 with size: 18.005127 MiB 00:05:33.330 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:33.330 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:05:33.330 associated memzone info: size: 3.000122 MiB name: MP_evtpool_2824183_0 00:05:33.330 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:05:33.330 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2824183 00:05:33.330 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:33.330 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2824183 00:05:33.330 element at address: 0x200018efde00 with size: 1.008179 MiB 00:05:33.330 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:33.330 element at address: 0x2000196bc780 with size: 1.008179 MiB 00:05:33.330 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:33.330 element at address: 0x200018afde00 with size: 1.008179 MiB 00:05:33.330 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:33.330 element at address: 0x200012cf25c0 with size: 1.008179 MiB 00:05:33.330 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:33.330 element at address: 0x200000cff100 with size: 1.000549 MiB 00:05:33.330 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2824183 00:05:33.330 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:05:33.330 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2824183 00:05:33.330 element at address: 0x2000192ffd40 with size: 1.000549 MiB 00:05:33.330 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2824183 00:05:33.330 element at address: 0x200031efe8c0 with size: 1.000549 MiB 00:05:33.330 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2824183 00:05:33.330 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:05:33.330 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_2824183 00:05:33.330 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:05:33.330 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2824183 00:05:33.330 element at address: 0x200018e7dbc0 with size: 0.500549 MiB 00:05:33.330 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:33.330 element at address: 0x200012c72380 with size: 0.500549 MiB 00:05:33.330 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:33.330 element at address: 0x20001967c540 with size: 0.250549 MiB 00:05:33.330 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:33.330 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:05:33.330 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_2824183 00:05:33.330 element at address: 0x20000085f180 with size: 0.125549 MiB 00:05:33.330 associated memzone info: size: 0.125366 MiB name: RG_ring_2_2824183 00:05:33.330 element at address: 0x200018af5bc0 with size: 0.031799 
MiB 00:05:33.330 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:33.330 element at address: 0x2000280693c0 with size: 0.023804 MiB 00:05:33.330 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:33.330 element at address: 0x20000085af40 with size: 0.016174 MiB 00:05:33.330 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2824183 00:05:33.330 element at address: 0x20002806f540 with size: 0.002502 MiB 00:05:33.330 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:33.330 element at address: 0x2000004ffb40 with size: 0.000366 MiB 00:05:33.330 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2824183 00:05:33.330 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:05:33.330 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_2824183 00:05:33.330 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:05:33.330 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2824183 00:05:33.330 element at address: 0x20000a5ffa80 with size: 0.000366 MiB 00:05:33.330 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:33.330 11:33:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:33.330 11:33:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2824183 00:05:33.330 11:33:59 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 2824183 ']' 00:05:33.330 11:33:59 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 2824183 00:05:33.330 11:33:59 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:33.330 11:33:59 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:33.330 11:33:59 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2824183 00:05:33.330 11:33:59 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:33.330 11:33:59 
dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:33.330 11:33:59 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2824183' 00:05:33.330 killing process with pid 2824183 00:05:33.330 11:33:59 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 2824183 00:05:33.330 11:33:59 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 2824183 00:05:35.858 00:05:35.858 real 0m4.045s 00:05:35.858 user 0m4.086s 00:05:35.858 sys 0m0.661s 00:05:35.858 11:34:01 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:35.858 11:34:01 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:35.858 ************************************ 00:05:35.858 END TEST dpdk_mem_utility 00:05:35.858 ************************************ 00:05:35.858 11:34:01 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:35.858 11:34:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:35.858 11:34:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:35.858 11:34:01 -- common/autotest_common.sh@10 -- # set +x 00:05:35.858 ************************************ 00:05:35.858 START TEST event 00:05:35.858 ************************************ 00:05:35.858 11:34:01 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:35.858 * Looking for test storage... 
00:05:35.858 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:35.858 11:34:01 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:35.858 11:34:01 event -- common/autotest_common.sh@1693 -- # lcov --version 00:05:35.858 11:34:01 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:35.858 11:34:01 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:35.858 11:34:01 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:35.858 11:34:01 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:35.858 11:34:01 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:35.858 11:34:01 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:35.858 11:34:01 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:35.858 11:34:01 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:35.858 11:34:01 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:35.858 11:34:01 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:35.858 11:34:01 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:35.858 11:34:01 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:35.858 11:34:01 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:35.858 11:34:01 event -- scripts/common.sh@344 -- # case "$op" in 00:05:35.858 11:34:01 event -- scripts/common.sh@345 -- # : 1 00:05:35.858 11:34:01 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:35.858 11:34:01 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:35.858 11:34:01 event -- scripts/common.sh@365 -- # decimal 1 00:05:35.858 11:34:01 event -- scripts/common.sh@353 -- # local d=1 00:05:35.858 11:34:01 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:35.858 11:34:01 event -- scripts/common.sh@355 -- # echo 1 00:05:35.858 11:34:01 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:35.858 11:34:01 event -- scripts/common.sh@366 -- # decimal 2 00:05:35.858 11:34:01 event -- scripts/common.sh@353 -- # local d=2 00:05:35.858 11:34:01 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:35.858 11:34:01 event -- scripts/common.sh@355 -- # echo 2 00:05:35.858 11:34:01 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:35.858 11:34:01 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:35.858 11:34:01 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:35.858 11:34:01 event -- scripts/common.sh@368 -- # return 0 00:05:35.858 11:34:01 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:35.858 11:34:01 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:35.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.859 --rc genhtml_branch_coverage=1 00:05:35.859 --rc genhtml_function_coverage=1 00:05:35.859 --rc genhtml_legend=1 00:05:35.859 --rc geninfo_all_blocks=1 00:05:35.859 --rc geninfo_unexecuted_blocks=1 00:05:35.859 00:05:35.859 ' 00:05:35.859 11:34:01 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:35.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.859 --rc genhtml_branch_coverage=1 00:05:35.859 --rc genhtml_function_coverage=1 00:05:35.859 --rc genhtml_legend=1 00:05:35.859 --rc geninfo_all_blocks=1 00:05:35.859 --rc geninfo_unexecuted_blocks=1 00:05:35.859 00:05:35.859 ' 00:05:35.859 11:34:01 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:35.859 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:35.859 --rc genhtml_branch_coverage=1 00:05:35.859 --rc genhtml_function_coverage=1 00:05:35.859 --rc genhtml_legend=1 00:05:35.859 --rc geninfo_all_blocks=1 00:05:35.859 --rc geninfo_unexecuted_blocks=1 00:05:35.859 00:05:35.859 ' 00:05:35.859 11:34:01 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:35.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.859 --rc genhtml_branch_coverage=1 00:05:35.859 --rc genhtml_function_coverage=1 00:05:35.859 --rc genhtml_legend=1 00:05:35.859 --rc geninfo_all_blocks=1 00:05:35.859 --rc geninfo_unexecuted_blocks=1 00:05:35.859 00:05:35.859 ' 00:05:35.859 11:34:01 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:35.859 11:34:01 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:35.859 11:34:01 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:35.859 11:34:01 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:35.859 11:34:01 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:35.859 11:34:01 event -- common/autotest_common.sh@10 -- # set +x 00:05:35.859 ************************************ 00:05:35.859 START TEST event_perf 00:05:35.859 ************************************ 00:05:35.859 11:34:01 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:35.859 Running I/O for 1 seconds...[2024-11-18 11:34:01.740257] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:05:35.859 [2024-11-18 11:34:01.740359] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2824785 ] 00:05:36.117 [2024-11-18 11:34:01.886023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:36.375 [2024-11-18 11:34:02.033027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:36.375 [2024-11-18 11:34:02.033090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:36.375 [2024-11-18 11:34:02.033184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.375 [2024-11-18 11:34:02.033209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:37.747 Running I/O for 1 seconds... 00:05:37.747 lcore 0: 223217 00:05:37.747 lcore 1: 223217 00:05:37.747 lcore 2: 223217 00:05:37.747 lcore 3: 223217 00:05:37.747 done. 
00:05:37.747 00:05:37.748 real 0m1.596s 00:05:37.748 user 0m4.421s 00:05:37.748 sys 0m0.157s 00:05:37.748 11:34:03 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:37.748 11:34:03 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:37.748 ************************************ 00:05:37.748 END TEST event_perf 00:05:37.748 ************************************ 00:05:37.748 11:34:03 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:37.748 11:34:03 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:37.748 11:34:03 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:37.748 11:34:03 event -- common/autotest_common.sh@10 -- # set +x 00:05:37.748 ************************************ 00:05:37.748 START TEST event_reactor 00:05:37.748 ************************************ 00:05:37.748 11:34:03 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:37.748 [2024-11-18 11:34:03.384745] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:05:37.748 [2024-11-18 11:34:03.384895] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2825021 ] 00:05:37.748 [2024-11-18 11:34:03.530118] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.006 [2024-11-18 11:34:03.669896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.380 test_start 00:05:39.380 oneshot 00:05:39.380 tick 100 00:05:39.380 tick 100 00:05:39.380 tick 250 00:05:39.380 tick 100 00:05:39.380 tick 100 00:05:39.380 tick 100 00:05:39.380 tick 250 00:05:39.380 tick 500 00:05:39.380 tick 100 00:05:39.380 tick 100 00:05:39.380 tick 250 00:05:39.380 tick 100 00:05:39.380 tick 100 00:05:39.380 test_end 00:05:39.380 00:05:39.380 real 0m1.580s 00:05:39.380 user 0m1.416s 00:05:39.380 sys 0m0.155s 00:05:39.380 11:34:04 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:39.380 11:34:04 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:39.380 ************************************ 00:05:39.380 END TEST event_reactor 00:05:39.380 ************************************ 00:05:39.380 11:34:04 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:39.380 11:34:04 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:39.380 11:34:04 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:39.380 11:34:04 event -- common/autotest_common.sh@10 -- # set +x 00:05:39.380 ************************************ 00:05:39.380 START TEST event_reactor_perf 00:05:39.380 ************************************ 00:05:39.380 11:34:04 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf 
-t 1 00:05:39.380 [2024-11-18 11:34:05.011635] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:05:39.380 [2024-11-18 11:34:05.011743] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2825227 ] 00:05:39.380 [2024-11-18 11:34:05.153574] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.638 [2024-11-18 11:34:05.291790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.011 test_start 00:05:41.011 test_end 00:05:41.011 Performance: 267075 events per second 00:05:41.011 00:05:41.011 real 0m1.574s 00:05:41.011 user 0m1.411s 00:05:41.011 sys 0m0.154s 00:05:41.011 11:34:06 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:41.011 11:34:06 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:41.011 ************************************ 00:05:41.011 END TEST event_reactor_perf 00:05:41.011 ************************************ 00:05:41.011 11:34:06 event -- event/event.sh@49 -- # uname -s 00:05:41.011 11:34:06 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:41.011 11:34:06 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:41.011 11:34:06 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:41.011 11:34:06 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.011 11:34:06 event -- common/autotest_common.sh@10 -- # set +x 00:05:41.011 ************************************ 00:05:41.011 START TEST event_scheduler 00:05:41.011 ************************************ 00:05:41.011 11:34:06 event.event_scheduler -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:41.011 * Looking for test storage... 00:05:41.011 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:41.011 11:34:06 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:41.011 11:34:06 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:05:41.011 11:34:06 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:41.011 11:34:06 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:41.011 11:34:06 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:41.011 11:34:06 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:41.011 11:34:06 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:41.011 11:34:06 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:41.011 11:34:06 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:41.011 11:34:06 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:41.011 11:34:06 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:41.011 11:34:06 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:41.011 11:34:06 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:41.011 11:34:06 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:41.011 11:34:06 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:41.011 11:34:06 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:41.011 11:34:06 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:41.011 11:34:06 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:41.011 11:34:06 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:41.011 11:34:06 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:41.011 11:34:06 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:41.011 11:34:06 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:41.011 11:34:06 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:41.011 11:34:06 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:41.011 11:34:06 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:41.011 11:34:06 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:41.011 11:34:06 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:41.011 11:34:06 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:41.011 11:34:06 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:41.011 11:34:06 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:41.011 11:34:06 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:41.011 11:34:06 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:41.011 11:34:06 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:41.011 11:34:06 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:41.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.011 --rc genhtml_branch_coverage=1 00:05:41.011 --rc genhtml_function_coverage=1 00:05:41.011 --rc genhtml_legend=1 00:05:41.011 --rc geninfo_all_blocks=1 00:05:41.011 --rc geninfo_unexecuted_blocks=1 00:05:41.011 00:05:41.011 ' 00:05:41.011 11:34:06 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:41.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.011 --rc genhtml_branch_coverage=1 00:05:41.011 --rc genhtml_function_coverage=1 00:05:41.011 --rc 
genhtml_legend=1 00:05:41.011 --rc geninfo_all_blocks=1 00:05:41.011 --rc geninfo_unexecuted_blocks=1 00:05:41.011 00:05:41.011 ' 00:05:41.011 11:34:06 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:41.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.011 --rc genhtml_branch_coverage=1 00:05:41.011 --rc genhtml_function_coverage=1 00:05:41.011 --rc genhtml_legend=1 00:05:41.011 --rc geninfo_all_blocks=1 00:05:41.011 --rc geninfo_unexecuted_blocks=1 00:05:41.011 00:05:41.011 ' 00:05:41.011 11:34:06 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:41.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.011 --rc genhtml_branch_coverage=1 00:05:41.011 --rc genhtml_function_coverage=1 00:05:41.011 --rc genhtml_legend=1 00:05:41.011 --rc geninfo_all_blocks=1 00:05:41.011 --rc geninfo_unexecuted_blocks=1 00:05:41.011 00:05:41.011 ' 00:05:41.011 11:34:06 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:41.011 11:34:06 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2825539 00:05:41.011 11:34:06 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:41.011 11:34:06 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:41.011 11:34:06 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2825539 00:05:41.011 11:34:06 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 2825539 ']' 00:05:41.011 11:34:06 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.011 11:34:06 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:41.011 11:34:06 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:41.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:41.011 11:34:06 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:41.011 11:34:06 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:41.012 [2024-11-18 11:34:06.842251] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:05:41.012 [2024-11-18 11:34:06.842417] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2825539 ] 00:05:41.270 [2024-11-18 11:34:06.980305] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:41.270 [2024-11-18 11:34:07.101098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.270 [2024-11-18 11:34:07.101162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:41.270 [2024-11-18 11:34:07.101208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:41.270 [2024-11-18 11:34:07.101233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:42.203 11:34:07 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:42.203 11:34:07 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:42.203 11:34:07 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:42.203 11:34:07 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.203 11:34:07 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:42.203 [2024-11-18 11:34:07.776180] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:42.203 [2024-11-18 11:34:07.776221] 
scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:42.203 [2024-11-18 11:34:07.776278] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:42.203 [2024-11-18 11:34:07.776299] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:42.203 [2024-11-18 11:34:07.776320] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:42.203 11:34:07 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.203 11:34:07 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:42.203 11:34:07 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.203 11:34:07 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:42.203 [2024-11-18 11:34:08.086194] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:42.203 11:34:08 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.203 11:34:08 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:42.203 11:34:08 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:42.203 11:34:08 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:42.203 11:34:08 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:42.461 ************************************ 00:05:42.461 START TEST scheduler_create_thread 00:05:42.461 ************************************ 00:05:42.461 11:34:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:42.461 11:34:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:42.461 11:34:08 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.461 11:34:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.461 2 00:05:42.461 11:34:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.461 11:34:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:42.461 11:34:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.461 11:34:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.461 3 00:05:42.461 11:34:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.461 11:34:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:42.461 11:34:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.461 11:34:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.461 4 00:05:42.461 11:34:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.461 11:34:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:42.461 11:34:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.461 11:34:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.461 5 00:05:42.461 11:34:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.461 11:34:08 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:42.461 11:34:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.461 11:34:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.461 6 00:05:42.461 11:34:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.461 11:34:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:42.461 11:34:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.461 11:34:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.461 7 00:05:42.461 11:34:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.461 11:34:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:42.461 11:34:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.461 11:34:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.461 8 00:05:42.461 11:34:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.461 11:34:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:42.461 11:34:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.461 11:34:08 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.461 9 00:05:42.461 11:34:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.461 11:34:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:42.461 11:34:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.461 11:34:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.461 10 00:05:42.461 11:34:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.462 11:34:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:42.462 11:34:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.462 11:34:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.462 11:34:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.462 11:34:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:42.462 11:34:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:42.462 11:34:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.462 11:34:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.462 11:34:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.462 11:34:08 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:42.462 11:34:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.462 11:34:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.462 11:34:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.462 11:34:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:42.462 11:34:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:42.462 11:34:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.462 11:34:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.462 11:34:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.462 00:05:42.462 real 0m0.113s 00:05:42.462 user 0m0.014s 00:05:42.462 sys 0m0.000s 00:05:42.462 11:34:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:42.462 11:34:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.462 ************************************ 00:05:42.462 END TEST scheduler_create_thread 00:05:42.462 ************************************ 00:05:42.462 11:34:08 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:42.462 11:34:08 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2825539 00:05:42.462 11:34:08 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 2825539 ']' 00:05:42.462 11:34:08 event.event_scheduler -- common/autotest_common.sh@958 -- # 
kill -0 2825539 00:05:42.462 11:34:08 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:42.462 11:34:08 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:42.462 11:34:08 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2825539 00:05:42.462 11:34:08 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:42.462 11:34:08 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:42.462 11:34:08 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2825539' 00:05:42.462 killing process with pid 2825539 00:05:42.462 11:34:08 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 2825539 00:05:42.462 11:34:08 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 2825539 00:05:43.028 [2024-11-18 11:34:08.713467] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:05:43.962 00:05:43.962 real 0m3.077s 00:05:43.962 user 0m5.261s 00:05:43.962 sys 0m0.503s 00:05:43.962 11:34:09 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:43.962 11:34:09 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:43.962 ************************************ 00:05:43.962 END TEST event_scheduler 00:05:43.962 ************************************ 00:05:43.962 11:34:09 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:43.962 11:34:09 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:43.962 11:34:09 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:43.962 11:34:09 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:43.962 11:34:09 event -- common/autotest_common.sh@10 -- # set +x 00:05:43.962 ************************************ 00:05:43.962 START TEST app_repeat 00:05:43.962 ************************************ 00:05:43.962 11:34:09 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:43.962 11:34:09 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.962 11:34:09 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.962 11:34:09 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:43.962 11:34:09 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:43.962 11:34:09 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:43.962 11:34:09 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:43.962 11:34:09 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:43.962 11:34:09 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2825864 00:05:43.962 11:34:09 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:43.962 11:34:09 event.app_repeat -- event/event.sh@20 
-- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:43.962 11:34:09 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2825864' 00:05:43.962 Process app_repeat pid: 2825864 00:05:43.962 11:34:09 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:43.962 11:34:09 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:43.962 spdk_app_start Round 0 00:05:43.962 11:34:09 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2825864 /var/tmp/spdk-nbd.sock 00:05:43.962 11:34:09 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2825864 ']' 00:05:43.962 11:34:09 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:43.962 11:34:09 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:43.962 11:34:09 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:43.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:43.962 11:34:09 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:43.962 11:34:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:43.962 [2024-11-18 11:34:09.790184] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:05:43.962 [2024-11-18 11:34:09.790310] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2825864 ] 00:05:44.221 [2024-11-18 11:34:09.935631] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:44.221 [2024-11-18 11:34:10.080989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.221 [2024-11-18 11:34:10.080993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:45.155 11:34:10 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:45.155 11:34:10 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:45.155 11:34:10 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:45.414 Malloc0 00:05:45.414 11:34:11 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:45.672 Malloc1 00:05:45.672 11:34:11 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:45.672 11:34:11 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.672 11:34:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:45.672 11:34:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:45.672 11:34:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.672 11:34:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:45.672 11:34:11 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:45.672 
11:34:11 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.672 11:34:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:45.672 11:34:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:45.672 11:34:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.672 11:34:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:45.672 11:34:11 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:45.672 11:34:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:45.672 11:34:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:45.672 11:34:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:45.930 /dev/nbd0 00:05:45.930 11:34:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:46.188 11:34:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:46.188 11:34:11 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:46.188 11:34:11 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:46.188 11:34:11 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:46.188 11:34:11 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:46.188 11:34:11 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:46.188 11:34:11 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:46.188 11:34:11 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:46.188 11:34:11 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:46.188 11:34:11 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:46.188 1+0 records in 00:05:46.188 1+0 records out 00:05:46.188 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000230086 s, 17.8 MB/s 00:05:46.188 11:34:11 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:46.188 11:34:11 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:46.188 11:34:11 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:46.188 11:34:11 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:46.188 11:34:11 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:46.188 11:34:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:46.188 11:34:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:46.188 11:34:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:46.446 /dev/nbd1 00:05:46.446 11:34:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:46.446 11:34:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:46.446 11:34:12 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:46.446 11:34:12 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:46.446 11:34:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:46.446 11:34:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:46.446 11:34:12 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:46.446 11:34:12 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:46.446 11:34:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:46.446 11:34:12 event.app_repeat -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:46.446 11:34:12 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:46.446 1+0 records in 00:05:46.446 1+0 records out 00:05:46.446 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000234407 s, 17.5 MB/s 00:05:46.446 11:34:12 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:46.446 11:34:12 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:46.446 11:34:12 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:46.446 11:34:12 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:46.446 11:34:12 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:46.446 11:34:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:46.446 11:34:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:46.446 11:34:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:46.446 11:34:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.446 11:34:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:46.704 11:34:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:46.704 { 00:05:46.704 "nbd_device": "/dev/nbd0", 00:05:46.704 "bdev_name": "Malloc0" 00:05:46.704 }, 00:05:46.704 { 00:05:46.704 "nbd_device": "/dev/nbd1", 00:05:46.704 "bdev_name": "Malloc1" 00:05:46.704 } 00:05:46.704 ]' 00:05:46.704 11:34:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:46.704 { 00:05:46.704 "nbd_device": "/dev/nbd0", 00:05:46.704 "bdev_name": "Malloc0" 00:05:46.704 
}, 00:05:46.704 { 00:05:46.704 "nbd_device": "/dev/nbd1", 00:05:46.704 "bdev_name": "Malloc1" 00:05:46.704 } 00:05:46.704 ]' 00:05:46.705 11:34:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:46.705 11:34:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:46.705 /dev/nbd1' 00:05:46.705 11:34:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:46.705 /dev/nbd1' 00:05:46.705 11:34:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:46.705 11:34:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:46.705 11:34:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:46.705 11:34:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:46.705 11:34:12 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:46.705 11:34:12 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:46.705 11:34:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.705 11:34:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:46.705 11:34:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:46.705 11:34:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:46.705 11:34:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:46.705 11:34:12 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:46.705 256+0 records in 00:05:46.705 256+0 records out 00:05:46.705 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0050459 s, 208 MB/s 00:05:46.705 11:34:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:46.705 11:34:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:46.705 256+0 records in 00:05:46.705 256+0 records out 00:05:46.705 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0245888 s, 42.6 MB/s 00:05:46.705 11:34:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:46.705 11:34:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:46.705 256+0 records in 00:05:46.705 256+0 records out 00:05:46.705 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0291719 s, 35.9 MB/s 00:05:46.705 11:34:12 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:46.705 11:34:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.705 11:34:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:46.705 11:34:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:46.705 11:34:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:46.705 11:34:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:46.705 11:34:12 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:46.705 11:34:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:46.705 11:34:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:46.705 11:34:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:46.705 11:34:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:46.705 11:34:12 event.app_repeat -- 
bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:46.705 11:34:12 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:46.705 11:34:12 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.705 11:34:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.705 11:34:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:46.705 11:34:12 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:46.705 11:34:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:46.705 11:34:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:46.963 11:34:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:47.221 11:34:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:47.221 11:34:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:47.221 11:34:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:47.221 11:34:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:47.221 11:34:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:47.221 11:34:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:47.221 11:34:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:47.221 11:34:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:47.221 11:34:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:47.479 11:34:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:47.479 11:34:13 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:47.479 11:34:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:47.479 11:34:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:47.479 11:34:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:47.479 11:34:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:47.479 11:34:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:47.479 11:34:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:47.479 11:34:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:47.479 11:34:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:47.479 11:34:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:47.737 11:34:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:47.737 11:34:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:47.737 11:34:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:47.737 11:34:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:47.737 11:34:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:47.737 11:34:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:47.737 11:34:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:47.737 11:34:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:47.737 11:34:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:47.737 11:34:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:47.737 11:34:13 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:47.737 11:34:13 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:47.737 11:34:13 event.app_repeat -- event/event.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:48.303 11:34:13 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:49.237 [2024-11-18 11:34:15.116543] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:49.495 [2024-11-18 11:34:15.251099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:49.495 [2024-11-18 11:34:15.251100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.753 [2024-11-18 11:34:15.467849] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:49.753 [2024-11-18 11:34:15.467937] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:51.125 11:34:16 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:51.125 11:34:16 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:51.125 spdk_app_start Round 1 00:05:51.125 11:34:16 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2825864 /var/tmp/spdk-nbd.sock 00:05:51.125 11:34:16 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2825864 ']' 00:05:51.125 11:34:16 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:51.125 11:34:16 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:51.125 11:34:16 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:51.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:51.125 11:34:16 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:51.125 11:34:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:51.383 11:34:17 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:51.383 11:34:17 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:51.383 11:34:17 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:51.641 Malloc0 00:05:51.641 11:34:17 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:52.208 Malloc1 00:05:52.208 11:34:17 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:52.208 11:34:17 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.208 11:34:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:52.208 11:34:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:52.208 11:34:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:52.208 11:34:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:52.208 11:34:17 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:52.208 11:34:17 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.208 11:34:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:52.208 11:34:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:52.208 11:34:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:52.208 11:34:17 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:05:52.208 11:34:17 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:52.208 11:34:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:52.208 11:34:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:52.208 11:34:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:52.466 /dev/nbd0 00:05:52.466 11:34:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:52.466 11:34:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:52.466 11:34:18 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:52.466 11:34:18 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:52.466 11:34:18 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:52.466 11:34:18 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:52.466 11:34:18 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:52.466 11:34:18 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:52.466 11:34:18 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:52.466 11:34:18 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:52.466 11:34:18 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:52.466 1+0 records in 00:05:52.466 1+0 records out 00:05:52.466 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000225216 s, 18.2 MB/s 00:05:52.466 11:34:18 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:52.466 11:34:18 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:52.466 11:34:18 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:52.466 11:34:18 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:52.466 11:34:18 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:52.466 11:34:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:52.466 11:34:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:52.466 11:34:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:52.724 /dev/nbd1 00:05:52.724 11:34:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:52.724 11:34:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:52.724 11:34:18 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:52.724 11:34:18 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:52.724 11:34:18 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:52.724 11:34:18 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:52.724 11:34:18 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:52.724 11:34:18 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:52.724 11:34:18 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:52.724 11:34:18 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:52.724 11:34:18 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:52.724 1+0 records in 00:05:52.724 1+0 records out 00:05:52.724 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00024078 s, 17.0 MB/s 00:05:52.724 11:34:18 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:52.724 11:34:18 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:52.724 11:34:18 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:52.724 11:34:18 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:52.724 11:34:18 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:52.724 11:34:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:52.724 11:34:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:52.724 11:34:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:52.724 11:34:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.724 11:34:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:53.017 11:34:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:53.017 { 00:05:53.017 "nbd_device": "/dev/nbd0", 00:05:53.017 "bdev_name": "Malloc0" 00:05:53.017 }, 00:05:53.017 { 00:05:53.017 "nbd_device": "/dev/nbd1", 00:05:53.017 "bdev_name": "Malloc1" 00:05:53.017 } 00:05:53.017 ]' 00:05:53.017 11:34:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:53.017 { 00:05:53.017 "nbd_device": "/dev/nbd0", 00:05:53.017 "bdev_name": "Malloc0" 00:05:53.017 }, 00:05:53.017 { 00:05:53.017 "nbd_device": "/dev/nbd1", 00:05:53.017 "bdev_name": "Malloc1" 00:05:53.017 } 00:05:53.017 ]' 00:05:53.017 11:34:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:53.017 11:34:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:53.017 /dev/nbd1' 00:05:53.017 11:34:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:53.017 /dev/nbd1' 00:05:53.017 
11:34:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:53.017 11:34:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:53.017 11:34:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:53.017 11:34:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:53.017 11:34:18 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:53.018 11:34:18 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:53.018 11:34:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.018 11:34:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:53.018 11:34:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:53.018 11:34:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:53.018 11:34:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:53.018 11:34:18 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:53.018 256+0 records in 00:05:53.018 256+0 records out 00:05:53.018 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00513627 s, 204 MB/s 00:05:53.018 11:34:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:53.018 11:34:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:53.018 256+0 records in 00:05:53.018 256+0 records out 00:05:53.018 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.02445 s, 42.9 MB/s 00:05:53.018 11:34:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:53.018 11:34:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:53.018 256+0 records in 00:05:53.018 256+0 records out 00:05:53.018 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0299008 s, 35.1 MB/s 00:05:53.018 11:34:18 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:53.018 11:34:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.018 11:34:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:53.018 11:34:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:53.018 11:34:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:53.018 11:34:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:53.018 11:34:18 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:53.018 11:34:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:53.018 11:34:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:53.300 11:34:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:53.300 11:34:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:53.300 11:34:18 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:53.300 11:34:18 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:53.300 11:34:18 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.300 11:34:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:05:53.300 11:34:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:53.300 11:34:18 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:53.300 11:34:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:53.300 11:34:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:53.557 11:34:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:53.557 11:34:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:53.557 11:34:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:53.557 11:34:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:53.557 11:34:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:53.557 11:34:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:53.557 11:34:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:53.557 11:34:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:53.557 11:34:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:53.557 11:34:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:53.814 11:34:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:53.814 11:34:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:53.814 11:34:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:53.814 11:34:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:53.814 11:34:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:53.814 11:34:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:53.814 11:34:19 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:05:53.814 11:34:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:53.814 11:34:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:53.814 11:34:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.814 11:34:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:54.072 11:34:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:54.072 11:34:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:54.072 11:34:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:54.072 11:34:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:54.072 11:34:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:54.072 11:34:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:54.072 11:34:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:54.072 11:34:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:54.072 11:34:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:54.072 11:34:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:54.072 11:34:19 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:54.072 11:34:19 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:54.072 11:34:19 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:54.639 11:34:20 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:56.014 [2024-11-18 11:34:21.490364] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:56.014 [2024-11-18 11:34:21.625231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.014 [2024-11-18 11:34:21.625233] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:56.014 [2024-11-18 11:34:21.840628] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:56.014 [2024-11-18 11:34:21.840721] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:57.912 11:34:23 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:57.912 11:34:23 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:57.912 spdk_app_start Round 2 00:05:57.912 11:34:23 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2825864 /var/tmp/spdk-nbd.sock 00:05:57.912 11:34:23 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2825864 ']' 00:05:57.912 11:34:23 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:57.912 11:34:23 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:57.912 11:34:23 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:57.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:57.912 11:34:23 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:57.912 11:34:23 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:57.912 11:34:23 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:57.912 11:34:23 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:57.912 11:34:23 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:58.170 Malloc0 00:05:58.170 11:34:23 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:58.429 Malloc1 00:05:58.429 11:34:24 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:58.429 11:34:24 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.429 11:34:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:58.429 11:34:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:58.429 11:34:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.429 11:34:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:58.429 11:34:24 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:58.429 11:34:24 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.429 11:34:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:58.429 11:34:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:58.429 11:34:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.429 11:34:24 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:05:58.429 11:34:24 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:58.429 11:34:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:58.429 11:34:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:58.429 11:34:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:58.687 /dev/nbd0 00:05:58.687 11:34:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:58.687 11:34:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:58.687 11:34:24 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:58.687 11:34:24 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:58.687 11:34:24 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:58.687 11:34:24 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:58.687 11:34:24 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:58.687 11:34:24 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:58.687 11:34:24 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:58.687 11:34:24 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:58.687 11:34:24 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:58.687 1+0 records in 00:05:58.687 1+0 records out 00:05:58.687 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000261854 s, 15.6 MB/s 00:05:58.687 11:34:24 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:58.687 11:34:24 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:58.687 11:34:24 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:58.687 11:34:24 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:58.687 11:34:24 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:58.687 11:34:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:58.687 11:34:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:58.687 11:34:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:59.253 /dev/nbd1 00:05:59.253 11:34:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:59.253 11:34:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:59.253 11:34:24 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:59.253 11:34:24 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:59.253 11:34:24 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:59.253 11:34:24 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:59.253 11:34:24 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:59.253 11:34:24 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:59.253 11:34:24 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:59.253 11:34:24 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:59.253 11:34:24 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:59.253 1+0 records in 00:05:59.253 1+0 records out 00:05:59.253 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00020711 s, 19.8 MB/s 00:05:59.253 11:34:24 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:59.253 11:34:24 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:59.253 11:34:24 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:59.253 11:34:24 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:59.253 11:34:24 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:59.253 11:34:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:59.253 11:34:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:59.253 11:34:24 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:59.253 11:34:24 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.253 11:34:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:59.512 11:34:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:59.512 { 00:05:59.512 "nbd_device": "/dev/nbd0", 00:05:59.512 "bdev_name": "Malloc0" 00:05:59.512 }, 00:05:59.512 { 00:05:59.512 "nbd_device": "/dev/nbd1", 00:05:59.512 "bdev_name": "Malloc1" 00:05:59.512 } 00:05:59.512 ]' 00:05:59.512 11:34:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:59.512 { 00:05:59.512 "nbd_device": "/dev/nbd0", 00:05:59.512 "bdev_name": "Malloc0" 00:05:59.512 }, 00:05:59.512 { 00:05:59.512 "nbd_device": "/dev/nbd1", 00:05:59.512 "bdev_name": "Malloc1" 00:05:59.512 } 00:05:59.512 ]' 00:05:59.512 11:34:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:59.512 11:34:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:59.512 /dev/nbd1' 00:05:59.512 11:34:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:59.512 /dev/nbd1' 00:05:59.512 
11:34:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:59.512 11:34:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:59.512 11:34:25 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:59.512 11:34:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:59.512 11:34:25 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:59.512 11:34:25 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:59.512 11:34:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.512 11:34:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:59.512 11:34:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:59.512 11:34:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:59.512 11:34:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:59.512 11:34:25 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:59.512 256+0 records in 00:05:59.512 256+0 records out 00:05:59.512 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00512765 s, 204 MB/s 00:05:59.512 11:34:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:59.512 11:34:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:59.512 256+0 records in 00:05:59.512 256+0 records out 00:05:59.512 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0243096 s, 43.1 MB/s 00:05:59.512 11:34:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:59.512 11:34:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:59.512 256+0 records in 00:05:59.512 256+0 records out 00:05:59.512 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0290067 s, 36.1 MB/s 00:05:59.512 11:34:25 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:59.512 11:34:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.512 11:34:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:59.512 11:34:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:59.512 11:34:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:59.512 11:34:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:59.512 11:34:25 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:59.512 11:34:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:59.512 11:34:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:59.512 11:34:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:59.512 11:34:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:59.512 11:34:25 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:59.512 11:34:25 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:59.512 11:34:25 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.512 11:34:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:05:59.512 11:34:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:59.512 11:34:25 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:59.512 11:34:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:59.512 11:34:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:59.771 11:34:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:59.771 11:34:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:59.771 11:34:25 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:59.771 11:34:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:59.771 11:34:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:59.771 11:34:25 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:59.771 11:34:25 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:59.771 11:34:25 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:59.771 11:34:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:59.771 11:34:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:00.030 11:34:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:00.288 11:34:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:00.288 11:34:25 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:00.288 11:34:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:00.288 11:34:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:00.288 11:34:25 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:00.288 11:34:25 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:06:00.288 11:34:25 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:00.288 11:34:25 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:00.288 11:34:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.288 11:34:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:00.546 11:34:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:00.546 11:34:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:00.546 11:34:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:00.546 11:34:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:00.546 11:34:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:00.546 11:34:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:00.546 11:34:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:00.546 11:34:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:00.546 11:34:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:00.546 11:34:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:00.546 11:34:26 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:00.546 11:34:26 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:00.546 11:34:26 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:00.804 11:34:26 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:02.178 [2024-11-18 11:34:27.867168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:02.178 [2024-11-18 11:34:28.003616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:02.178 [2024-11-18 11:34:28.003618] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.435 [2024-11-18 11:34:28.211104] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:02.435 [2024-11-18 11:34:28.211214] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:03.807 11:34:29 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2825864 /var/tmp/spdk-nbd.sock 00:06:03.807 11:34:29 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2825864 ']' 00:06:03.807 11:34:29 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:03.807 11:34:29 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:03.807 11:34:29 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:03.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:03.807 11:34:29 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:03.807 11:34:29 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:04.065 11:34:29 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:04.065 11:34:29 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:04.065 11:34:29 event.app_repeat -- event/event.sh@39 -- # killprocess 2825864 00:06:04.065 11:34:29 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 2825864 ']' 00:06:04.065 11:34:29 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 2825864 00:06:04.065 11:34:29 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:04.065 11:34:29 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:04.323 11:34:29 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2825864 00:06:04.323 11:34:29 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:04.323 11:34:29 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:04.323 11:34:29 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2825864' 00:06:04.323 killing process with pid 2825864 00:06:04.323 11:34:29 event.app_repeat -- common/autotest_common.sh@973 -- # kill 2825864 00:06:04.323 11:34:29 event.app_repeat -- common/autotest_common.sh@978 -- # wait 2825864 00:06:05.258 spdk_app_start is called in Round 0. 00:06:05.258 Shutdown signal received, stop current app iteration 00:06:05.258 Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 reinitialization... 00:06:05.258 spdk_app_start is called in Round 1. 00:06:05.258 Shutdown signal received, stop current app iteration 00:06:05.258 Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 reinitialization... 00:06:05.258 spdk_app_start is called in Round 2. 
00:06:05.258 Shutdown signal received, stop current app iteration 00:06:05.258 Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 reinitialization... 00:06:05.258 spdk_app_start is called in Round 3. 00:06:05.258 Shutdown signal received, stop current app iteration 00:06:05.258 11:34:31 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:05.258 11:34:31 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:05.258 00:06:05.258 real 0m21.297s 00:06:05.258 user 0m45.274s 00:06:05.258 sys 0m3.367s 00:06:05.258 11:34:31 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:05.258 11:34:31 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:05.258 ************************************ 00:06:05.258 END TEST app_repeat 00:06:05.258 ************************************ 00:06:05.258 11:34:31 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:05.258 11:34:31 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:05.258 11:34:31 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:05.258 11:34:31 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:05.258 11:34:31 event -- common/autotest_common.sh@10 -- # set +x 00:06:05.258 ************************************ 00:06:05.258 START TEST cpu_locks 00:06:05.258 ************************************ 00:06:05.258 11:34:31 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:05.258 * Looking for test storage... 
00:06:05.258 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:05.258 11:34:31 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:05.258 11:34:31 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:06:05.258 11:34:31 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:05.516 11:34:31 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:05.516 11:34:31 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:05.516 11:34:31 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:05.516 11:34:31 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:05.516 11:34:31 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:05.516 11:34:31 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:05.516 11:34:31 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:05.516 11:34:31 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:05.516 11:34:31 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:05.516 11:34:31 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:05.516 11:34:31 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:05.516 11:34:31 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:05.516 11:34:31 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:05.516 11:34:31 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:05.516 11:34:31 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:05.516 11:34:31 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:05.516 11:34:31 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:05.516 11:34:31 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:05.516 11:34:31 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:05.516 11:34:31 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:05.516 11:34:31 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:05.516 11:34:31 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:05.516 11:34:31 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:05.516 11:34:31 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:05.516 11:34:31 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:05.516 11:34:31 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:05.516 11:34:31 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:05.516 11:34:31 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:05.516 11:34:31 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:05.516 11:34:31 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:05.516 11:34:31 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:05.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.516 --rc genhtml_branch_coverage=1 00:06:05.516 --rc genhtml_function_coverage=1 00:06:05.516 --rc genhtml_legend=1 00:06:05.516 --rc geninfo_all_blocks=1 00:06:05.516 --rc geninfo_unexecuted_blocks=1 00:06:05.516 00:06:05.516 ' 00:06:05.516 11:34:31 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:05.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.516 --rc genhtml_branch_coverage=1 00:06:05.516 --rc genhtml_function_coverage=1 00:06:05.516 --rc genhtml_legend=1 00:06:05.516 --rc geninfo_all_blocks=1 00:06:05.516 --rc geninfo_unexecuted_blocks=1 
00:06:05.516 00:06:05.516 ' 00:06:05.516 11:34:31 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:05.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.516 --rc genhtml_branch_coverage=1 00:06:05.516 --rc genhtml_function_coverage=1 00:06:05.516 --rc genhtml_legend=1 00:06:05.516 --rc geninfo_all_blocks=1 00:06:05.516 --rc geninfo_unexecuted_blocks=1 00:06:05.516 00:06:05.516 ' 00:06:05.516 11:34:31 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:05.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.516 --rc genhtml_branch_coverage=1 00:06:05.516 --rc genhtml_function_coverage=1 00:06:05.516 --rc genhtml_legend=1 00:06:05.516 --rc geninfo_all_blocks=1 00:06:05.516 --rc geninfo_unexecuted_blocks=1 00:06:05.516 00:06:05.516 ' 00:06:05.516 11:34:31 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:05.516 11:34:31 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:05.516 11:34:31 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:05.516 11:34:31 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:05.516 11:34:31 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:05.516 11:34:31 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:05.516 11:34:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:05.516 ************************************ 00:06:05.516 START TEST default_locks 00:06:05.516 ************************************ 00:06:05.516 11:34:31 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:06:05.516 11:34:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2828632 00:06:05.516 11:34:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 
0x1 00:06:05.517 11:34:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2828632 00:06:05.517 11:34:31 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 2828632 ']' 00:06:05.517 11:34:31 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.517 11:34:31 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:05.517 11:34:31 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:05.517 11:34:31 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:05.517 11:34:31 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:05.517 [2024-11-18 11:34:31.353589] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:06:05.517 [2024-11-18 11:34:31.353734] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2828632 ] 00:06:05.775 [2024-11-18 11:34:31.501358] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.775 [2024-11-18 11:34:31.639005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.709 11:34:32 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:06.709 11:34:32 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:06.709 11:34:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2828632 00:06:06.709 11:34:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2828632 00:06:06.709 11:34:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:06.967 lslocks: write error 00:06:06.967 11:34:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2828632 00:06:06.967 11:34:32 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 2828632 ']' 00:06:06.967 11:34:32 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 2828632 00:06:06.967 11:34:32 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:06:06.967 11:34:32 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:06.967 11:34:32 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2828632 00:06:06.967 11:34:32 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:06.967 11:34:32 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:06.967 11:34:32 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 2828632' 00:06:06.967 killing process with pid 2828632 00:06:06.967 11:34:32 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 2828632 00:06:06.967 11:34:32 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 2828632 00:06:09.511 11:34:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2828632 00:06:09.511 11:34:35 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:06:09.511 11:34:35 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2828632 00:06:09.511 11:34:35 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:09.511 11:34:35 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:09.511 11:34:35 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:09.511 11:34:35 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:09.511 11:34:35 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 2828632 00:06:09.511 11:34:35 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 2828632 ']' 00:06:09.511 11:34:35 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.511 11:34:35 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:09.511 11:34:35 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
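The kill sequence traced above (`kill -0`, `uname`, `ps --no-headers -o comm=`, the `reactor_0 = sudo` comparison, then `kill` and `wait`) can be sketched as below. This is a hedged reconstruction of the `killprocess` helper, not the real code: it checks the process exists, refuses to signal one whose command name is `sudo`, then kills and reaps it.

```shell
# Hedged sketch of the killprocess pattern from the trace above (a
# simplified stand-in for autotest_common.sh, assumptions noted inline).
killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 1        # process must be alive
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")
    if [ "$process_name" = sudo ]; then
        return 1                                  # never kill a sudo wrapper
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null                       # reap if it is our child
    return 0
}
```

The subsequent `NOT waitforlisten` call in the log is the negative test: once the target is killed, waiting for it must fail, which is the `No such process` / `ERROR: process (pid: …) is no longer running` output below.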
00:06:09.511 11:34:35 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:09.511 11:34:35 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:09.511 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2828632) - No such process 00:06:09.511 ERROR: process (pid: 2828632) is no longer running 00:06:09.511 11:34:35 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:09.511 11:34:35 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:06:09.511 11:34:35 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:06:09.511 11:34:35 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:09.511 11:34:35 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:09.511 11:34:35 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:09.511 11:34:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:09.511 11:34:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:09.511 11:34:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:09.511 11:34:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:09.511 00:06:09.511 real 0m4.017s 00:06:09.511 user 0m4.035s 00:06:09.511 sys 0m0.700s 00:06:09.511 11:34:35 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:09.511 11:34:35 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:09.511 ************************************ 00:06:09.511 END TEST default_locks 00:06:09.511 ************************************ 00:06:09.511 11:34:35 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:09.511 11:34:35 event.cpu_locks -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:09.511 11:34:35 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:09.511 11:34:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:09.511 ************************************ 00:06:09.511 START TEST default_locks_via_rpc 00:06:09.511 ************************************ 00:06:09.511 11:34:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:06:09.511 11:34:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2829188 00:06:09.512 11:34:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:09.512 11:34:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2829188 00:06:09.512 11:34:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2829188 ']' 00:06:09.512 11:34:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.512 11:34:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:09.512 11:34:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:09.512 11:34:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:09.512 11:34:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.770 [2024-11-18 11:34:35.423426] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
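The repeated "Waiting for process to start up and listen on UNIX domain socket …" lines come from a `waitforlisten`-style poll (`max_retries=100` appears in the trace). A minimal sketch, assuming a plain socket-exists check in place of the real RPC probe — the polling interval and readiness test here are assumptions, not SPDK's actual logic:

```shell
# Simplified stand-in for waitforlisten: poll until the target pid is
# listening on its UNIX domain socket, or give up. Assumes a bare
# "socket file exists" check; the real helper probes the RPC socket.
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=100 i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for (( i = 0; i < max_retries; i++ )); do
        kill -0 "$pid" 2>/dev/null || return 1    # target died before listening
        [ -S "$rpc_addr" ] && return 0            # socket exists: target is up
        sleep 0.1
    done
    return 1                                      # gave up after max_retries polls
}
```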
00:06:09.770 [2024-11-18 11:34:35.423611] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2829188 ] 00:06:09.770 [2024-11-18 11:34:35.559663] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.029 [2024-11-18 11:34:35.694064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.963 11:34:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:10.963 11:34:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:10.963 11:34:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:10.963 11:34:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.963 11:34:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.963 11:34:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.963 11:34:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:10.963 11:34:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:10.963 11:34:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:10.963 11:34:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:10.963 11:34:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:10.963 11:34:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.963 11:34:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.963 11:34:36 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.963 11:34:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2829188 00:06:10.964 11:34:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2829188 00:06:10.964 11:34:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:11.222 11:34:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2829188 00:06:11.222 11:34:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 2829188 ']' 00:06:11.222 11:34:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 2829188 00:06:11.222 11:34:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:06:11.222 11:34:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:11.222 11:34:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2829188 00:06:11.222 11:34:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:11.222 11:34:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:11.223 11:34:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2829188' 00:06:11.223 killing process with pid 2829188 00:06:11.223 11:34:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 2829188 00:06:11.223 11:34:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 2829188 00:06:13.752 00:06:13.752 real 0m4.088s 00:06:13.752 user 0m4.091s 00:06:13.752 sys 0m0.738s 00:06:13.752 11:34:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:13.752 11:34:39 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.752 ************************************ 00:06:13.752 END TEST default_locks_via_rpc 00:06:13.752 ************************************ 00:06:13.752 11:34:39 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:13.752 11:34:39 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:13.752 11:34:39 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:13.752 11:34:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:13.752 ************************************ 00:06:13.752 START TEST non_locking_app_on_locked_coremask 00:06:13.752 ************************************ 00:06:13.752 11:34:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:13.752 11:34:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2829741 00:06:13.752 11:34:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:13.752 11:34:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2829741 /var/tmp/spdk.sock 00:06:13.752 11:34:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2829741 ']' 00:06:13.752 11:34:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.752 11:34:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:13.752 11:34:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:06:13.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.752 11:34:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:13.752 11:34:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:13.752 [2024-11-18 11:34:39.560712] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:06:13.752 [2024-11-18 11:34:39.560888] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2829741 ] 00:06:14.011 [2024-11-18 11:34:39.699575] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.011 [2024-11-18 11:34:39.832900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.947 11:34:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:14.947 11:34:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:14.947 11:34:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2829884 00:06:14.947 11:34:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:14.947 11:34:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2829884 /var/tmp/spdk2.sock 00:06:14.947 11:34:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2829884 ']' 00:06:14.947 11:34:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk2.sock 00:06:14.947 11:34:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:14.947 11:34:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:14.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:14.947 11:34:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:14.947 11:34:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:15.205 [2024-11-18 11:34:40.870666] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:06:15.205 [2024-11-18 11:34:40.870800] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2829884 ] 00:06:15.205 [2024-11-18 11:34:41.076060] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
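Each test in this file verifies lock state with a `locks_exist`-style check: list the file locks held by the target pid and grep for the `spdk_cpu_lock` prefix. The `lslocks: write error` lines in the log are expected noise — `grep -q` exits on its first match, so `lslocks` gets EPIPE on its output pipe. A minimal sketch (the lock-file prefix follows the trace; everything else is an assumption):

```shell
# Sketch of the locks_exist check traced throughout cpu_locks.sh.
# lslocks (util-linux) lists file locks held by the pid; the test only
# cares whether any lock matching the spdk_cpu_lock prefix is present.
locks_exist() {
    local pid=$1
    lslocks -p "$pid" | grep -q spdk_cpu_lock
}

# Example (hypothetical pid from the log; returns 1 once the target exits):
# locks_exist 2829741 && echo "target still holds its CPU core lock"
```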
00:06:15.205 [2024-11-18 11:34:41.076145] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.772 [2024-11-18 11:34:41.360068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.673 11:34:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:17.673 11:34:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:17.673 11:34:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2829741 00:06:17.673 11:34:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2829741 00:06:17.673 11:34:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:18.239 lslocks: write error 00:06:18.239 11:34:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2829741 00:06:18.239 11:34:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2829741 ']' 00:06:18.239 11:34:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2829741 00:06:18.239 11:34:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:18.239 11:34:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:18.239 11:34:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2829741 00:06:18.239 11:34:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:18.239 11:34:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:18.239 11:34:43 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 2829741' 00:06:18.239 killing process with pid 2829741 00:06:18.239 11:34:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2829741 00:06:18.239 11:34:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2829741 00:06:23.499 11:34:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2829884 00:06:23.499 11:34:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2829884 ']' 00:06:23.499 11:34:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2829884 00:06:23.499 11:34:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:23.499 11:34:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:23.499 11:34:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2829884 00:06:23.499 11:34:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:23.499 11:34:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:23.499 11:34:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2829884' 00:06:23.499 killing process with pid 2829884 00:06:23.499 11:34:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2829884 00:06:23.499 11:34:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2829884 00:06:25.402 00:06:25.402 real 0m11.769s 00:06:25.402 user 0m12.120s 00:06:25.402 sys 0m1.454s 00:06:25.402 11:34:51 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:25.402 11:34:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:25.402 ************************************ 00:06:25.402 END TEST non_locking_app_on_locked_coremask 00:06:25.402 ************************************ 00:06:25.402 11:34:51 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:25.402 11:34:51 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:25.402 11:34:51 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:25.402 11:34:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:25.402 ************************************ 00:06:25.402 START TEST locking_app_on_unlocked_coremask 00:06:25.402 ************************************ 00:06:25.402 11:34:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:25.402 11:34:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2831115 00:06:25.402 11:34:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:25.402 11:34:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2831115 /var/tmp/spdk.sock 00:06:25.402 11:34:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2831115 ']' 00:06:25.402 11:34:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.402 11:34:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:25.402 11:34:51 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:25.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:25.402 11:34:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:25.402 11:34:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:25.661 [2024-11-18 11:34:51.382930] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:06:25.661 [2024-11-18 11:34:51.383060] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2831115 ] 00:06:25.661 [2024-11-18 11:34:51.529380] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:25.661 [2024-11-18 11:34:51.529436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.919 [2024-11-18 11:34:51.670735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.868 11:34:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:26.868 11:34:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:26.868 11:34:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2831259 00:06:26.868 11:34:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:26.868 11:34:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2831259 /var/tmp/spdk2.sock 00:06:26.868 11:34:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2831259 ']' 00:06:26.868 11:34:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:26.868 11:34:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:26.868 11:34:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:26.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:26.868 11:34:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:26.868 11:34:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:26.868 [2024-11-18 11:34:52.718777] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:06:26.868 [2024-11-18 11:34:52.718934] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2831259 ] 00:06:27.126 [2024-11-18 11:34:52.938660] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.384 [2024-11-18 11:34:53.224113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.912 11:34:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:29.912 11:34:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:29.912 11:34:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2831259 00:06:29.912 11:34:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2831259 00:06:29.913 11:34:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:29.913 lslocks: write error 00:06:29.913 11:34:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2831115 00:06:29.913 11:34:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2831115 ']' 00:06:29.913 11:34:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 2831115 00:06:29.913 11:34:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:29.913 11:34:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:29.913 11:34:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2831115 00:06:30.173 11:34:55 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:30.173 11:34:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:30.173 11:34:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2831115' 00:06:30.173 killing process with pid 2831115 00:06:30.173 11:34:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 2831115 00:06:30.173 11:34:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 2831115 00:06:35.487 11:35:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2831259 00:06:35.487 11:35:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2831259 ']' 00:06:35.487 11:35:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 2831259 00:06:35.487 11:35:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:35.487 11:35:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:35.487 11:35:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2831259 00:06:35.487 11:35:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:35.487 11:35:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:35.487 11:35:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2831259' 00:06:35.487 killing process with pid 2831259 00:06:35.487 11:35:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 2831259 00:06:35.487 11:35:00 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 2831259 00:06:37.389 00:06:37.389 real 0m11.940s 00:06:37.389 user 0m12.279s 00:06:37.389 sys 0m1.504s 00:06:37.389 11:35:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:37.389 11:35:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:37.389 ************************************ 00:06:37.389 END TEST locking_app_on_unlocked_coremask 00:06:37.389 ************************************ 00:06:37.389 11:35:03 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:37.389 11:35:03 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:37.389 11:35:03 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:37.389 11:35:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:37.389 ************************************ 00:06:37.389 START TEST locking_app_on_locked_coremask 00:06:37.389 ************************************ 00:06:37.389 11:35:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:37.647 11:35:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2832609 00:06:37.647 11:35:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:37.647 11:35:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2832609 /var/tmp/spdk.sock 00:06:37.647 11:35:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2832609 ']' 00:06:37.647 11:35:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
00:06:37.647 11:35:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:37.647 11:35:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:37.647 11:35:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:37.647 11:35:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:37.647 [2024-11-18 11:35:03.376026] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:06:37.647 [2024-11-18 11:35:03.376186] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2832609 ] 00:06:37.647 [2024-11-18 11:35:03.513021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.906 [2024-11-18 11:35:03.644527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.876 11:35:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:38.876 11:35:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:38.876 11:35:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2832761 00:06:38.876 11:35:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:38.876 11:35:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2832761 /var/tmp/spdk2.sock 
00:06:38.876 11:35:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:38.876 11:35:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2832761 /var/tmp/spdk2.sock 00:06:38.876 11:35:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:38.876 11:35:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:38.876 11:35:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:38.876 11:35:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:38.876 11:35:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 2832761 /var/tmp/spdk2.sock 00:06:38.876 11:35:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2832761 ']' 00:06:38.876 11:35:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:38.876 11:35:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:38.876 11:35:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:38.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:38.876 11:35:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:38.876 11:35:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:38.876 [2024-11-18 11:35:04.706989] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:06:38.876 [2024-11-18 11:35:04.707140] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2832761 ] 00:06:39.134 [2024-11-18 11:35:04.917020] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2832609 has claimed it. 00:06:39.134 [2024-11-18 11:35:04.917121] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:39.699 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2832761) - No such process 00:06:39.699 ERROR: process (pid: 2832761) is no longer running 00:06:39.699 11:35:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:39.699 11:35:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:39.699 11:35:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:39.699 11:35:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:39.699 11:35:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:39.699 11:35:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:39.699 11:35:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2832609 00:06:39.699 11:35:05 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2832609 00:06:39.699 11:35:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:39.957 lslocks: write error 00:06:39.957 11:35:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2832609 00:06:39.957 11:35:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2832609 ']' 00:06:39.957 11:35:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2832609 00:06:39.957 11:35:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:39.957 11:35:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:39.957 11:35:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2832609 00:06:39.957 11:35:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:39.957 11:35:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:39.957 11:35:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2832609' 00:06:39.957 killing process with pid 2832609 00:06:39.957 11:35:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2832609 00:06:39.957 11:35:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2832609 00:06:42.484 00:06:42.484 real 0m4.937s 00:06:42.484 user 0m5.252s 00:06:42.484 sys 0m0.926s 00:06:42.484 11:35:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:42.484 11:35:08 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:06:42.484 ************************************ 00:06:42.484 END TEST locking_app_on_locked_coremask 00:06:42.484 ************************************ 00:06:42.484 11:35:08 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:42.484 11:35:08 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:42.484 11:35:08 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:42.484 11:35:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:42.484 ************************************ 00:06:42.484 START TEST locking_overlapped_coremask 00:06:42.484 ************************************ 00:06:42.484 11:35:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:42.484 11:35:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2833191 00:06:42.484 11:35:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:42.484 11:35:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2833191 /var/tmp/spdk.sock 00:06:42.484 11:35:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 2833191 ']' 00:06:42.484 11:35:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.484 11:35:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:42.484 11:35:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:42.484 11:35:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:42.484 11:35:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:42.484 [2024-11-18 11:35:08.363039] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:06:42.484 [2024-11-18 11:35:08.363204] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2833191 ] 00:06:42.741 [2024-11-18 11:35:08.500751] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:42.999 [2024-11-18 11:35:08.638835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:42.999 [2024-11-18 11:35:08.638903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.999 [2024-11-18 11:35:08.638908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:43.932 11:35:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:43.932 11:35:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:43.932 11:35:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2833330 00:06:43.932 11:35:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2833330 /var/tmp/spdk2.sock 00:06:43.932 11:35:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:43.932 11:35:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:43.932 11:35:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg 
waitforlisten 2833330 /var/tmp/spdk2.sock 00:06:43.932 11:35:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:43.932 11:35:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:43.932 11:35:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:43.932 11:35:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:43.932 11:35:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 2833330 /var/tmp/spdk2.sock 00:06:43.932 11:35:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 2833330 ']' 00:06:43.932 11:35:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:43.932 11:35:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:43.932 11:35:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:43.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:43.932 11:35:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:43.932 11:35:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:43.932 [2024-11-18 11:35:09.590042] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:06:43.932 [2024-11-18 11:35:09.590187] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2833330 ] 00:06:43.932 [2024-11-18 11:35:09.788970] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2833191 has claimed it. 00:06:43.933 [2024-11-18 11:35:09.789062] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:44.498 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2833330) - No such process 00:06:44.498 ERROR: process (pid: 2833330) is no longer running 00:06:44.498 11:35:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:44.498 11:35:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:44.498 11:35:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:44.498 11:35:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:44.498 11:35:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:44.498 11:35:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:44.498 11:35:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:44.498 11:35:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:44.498 11:35:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:44.498 11:35:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ 
/var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:44.498 11:35:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2833191 00:06:44.498 11:35:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 2833191 ']' 00:06:44.498 11:35:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 2833191 00:06:44.498 11:35:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:44.498 11:35:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:44.498 11:35:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2833191 00:06:44.498 11:35:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:44.498 11:35:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:44.498 11:35:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2833191' 00:06:44.498 killing process with pid 2833191 00:06:44.498 11:35:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 2833191 00:06:44.498 11:35:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 2833191 00:06:47.028 00:06:47.028 real 0m4.220s 00:06:47.028 user 0m11.495s 00:06:47.028 sys 0m0.753s 00:06:47.028 11:35:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:47.028 11:35:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:47.028 
************************************ 00:06:47.028 END TEST locking_overlapped_coremask 00:06:47.028 ************************************ 00:06:47.028 11:35:12 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:47.028 11:35:12 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:47.028 11:35:12 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:47.028 11:35:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:47.028 ************************************ 00:06:47.028 START TEST locking_overlapped_coremask_via_rpc 00:06:47.028 ************************************ 00:06:47.028 11:35:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:47.028 11:35:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2833750 00:06:47.028 11:35:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:47.028 11:35:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2833750 /var/tmp/spdk.sock 00:06:47.028 11:35:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2833750 ']' 00:06:47.028 11:35:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.028 11:35:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:47.028 11:35:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:47.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:47.028 11:35:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:47.028 11:35:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.028 [2024-11-18 11:35:12.631968] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:06:47.028 [2024-11-18 11:35:12.632114] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2833750 ] 00:06:47.028 [2024-11-18 11:35:12.778986] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:47.028 [2024-11-18 11:35:12.779048] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:47.286 [2024-11-18 11:35:12.925975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:47.286 [2024-11-18 11:35:12.926023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.286 [2024-11-18 11:35:12.926033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:48.222 11:35:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:48.222 11:35:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:48.222 11:35:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2833899 00:06:48.222 11:35:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:48.222 11:35:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # 
waitforlisten 2833899 /var/tmp/spdk2.sock 00:06:48.222 11:35:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2833899 ']' 00:06:48.222 11:35:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:48.222 11:35:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:48.222 11:35:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:48.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:48.222 11:35:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:48.222 11:35:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.222 [2024-11-18 11:35:14.006563] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:06:48.222 [2024-11-18 11:35:14.006716] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2833899 ] 00:06:48.480 [2024-11-18 11:35:14.201372] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:48.480 [2024-11-18 11:35:14.201456] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:48.738 [2024-11-18 11:35:14.460109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:48.738 [2024-11-18 11:35:14.463561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:48.738 [2024-11-18 11:35:14.463570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:51.267 11:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:51.267 11:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:51.267 11:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:51.267 11:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.267 11:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:51.267 11:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.267 11:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:51.267 11:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:51.268 11:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:51.268 11:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:51.268 11:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:51.268 11:35:16 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:51.268 11:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:51.268 11:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:51.268 11:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.268 11:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:51.268 [2024-11-18 11:35:16.809674] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2833750 has claimed it. 00:06:51.268 request: 00:06:51.268 { 00:06:51.268 "method": "framework_enable_cpumask_locks", 00:06:51.268 "req_id": 1 00:06:51.268 } 00:06:51.268 Got JSON-RPC error response 00:06:51.268 response: 00:06:51.268 { 00:06:51.268 "code": -32603, 00:06:51.268 "message": "Failed to claim CPU core: 2" 00:06:51.268 } 00:06:51.268 11:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:51.268 11:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:51.268 11:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:51.268 11:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:51.268 11:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:51.268 11:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2833750 /var/tmp/spdk.sock 00:06:51.268 11:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 
-- # '[' -z 2833750 ']' 00:06:51.268 11:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.268 11:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:51.268 11:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:51.268 11:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:51.268 11:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:51.268 11:35:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:51.268 11:35:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:51.268 11:35:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2833899 /var/tmp/spdk2.sock 00:06:51.268 11:35:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2833899 ']' 00:06:51.268 11:35:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:51.268 11:35:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:51.268 11:35:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:51.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:51.268 11:35:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:51.268 11:35:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:51.526 11:35:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:51.526 11:35:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:51.526 11:35:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:51.526 11:35:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:51.526 11:35:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:51.526 11:35:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:51.526 00:06:51.526 real 0m4.875s 00:06:51.526 user 0m1.750s 00:06:51.526 sys 0m0.264s 00:06:51.526 11:35:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:51.526 11:35:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:51.526 ************************************ 00:06:51.526 END TEST locking_overlapped_coremask_via_rpc 00:06:51.526 ************************************ 00:06:51.784 11:35:17 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:51.784 11:35:17 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2833750 ]] 00:06:51.784 11:35:17 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 2833750 00:06:51.784 11:35:17 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2833750 ']' 00:06:51.784 11:35:17 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2833750 00:06:51.784 11:35:17 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:51.784 11:35:17 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:51.784 11:35:17 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2833750 00:06:51.784 11:35:17 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:51.784 11:35:17 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:51.784 11:35:17 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2833750' 00:06:51.784 killing process with pid 2833750 00:06:51.784 11:35:17 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 2833750 00:06:51.784 11:35:17 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 2833750 00:06:54.313 11:35:19 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2833899 ]] 00:06:54.313 11:35:19 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2833899 00:06:54.313 11:35:19 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2833899 ']' 00:06:54.313 11:35:19 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2833899 00:06:54.313 11:35:19 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:54.313 11:35:19 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:54.313 11:35:19 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2833899 00:06:54.313 11:35:19 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:54.313 11:35:19 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:54.313 11:35:19 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
2833899' 00:06:54.313 killing process with pid 2833899 00:06:54.313 11:35:19 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 2833899 00:06:54.313 11:35:19 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 2833899 00:06:56.215 11:35:21 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:56.215 11:35:21 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:56.215 11:35:21 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2833750 ]] 00:06:56.215 11:35:21 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2833750 00:06:56.215 11:35:21 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2833750 ']' 00:06:56.215 11:35:21 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2833750 00:06:56.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2833750) - No such process 00:06:56.215 11:35:21 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 2833750 is not found' 00:06:56.215 Process with pid 2833750 is not found 00:06:56.215 11:35:21 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2833899 ]] 00:06:56.215 11:35:21 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2833899 00:06:56.215 11:35:21 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2833899 ']' 00:06:56.215 11:35:21 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2833899 00:06:56.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2833899) - No such process 00:06:56.215 11:35:21 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 2833899 is not found' 00:06:56.215 Process with pid 2833899 is not found 00:06:56.215 11:35:21 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:56.215 00:06:56.215 real 0m50.864s 00:06:56.215 user 1m27.385s 00:06:56.215 sys 0m7.646s 00:06:56.215 11:35:21 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:56.215 
11:35:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:56.215 ************************************ 00:06:56.215 END TEST cpu_locks 00:06:56.215 ************************************ 00:06:56.215 00:06:56.215 real 1m20.435s 00:06:56.215 user 2m25.391s 00:06:56.215 sys 0m12.235s 00:06:56.215 11:35:21 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:56.215 11:35:21 event -- common/autotest_common.sh@10 -- # set +x 00:06:56.215 ************************************ 00:06:56.215 END TEST event 00:06:56.215 ************************************ 00:06:56.215 11:35:21 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:56.215 11:35:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:56.215 11:35:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:56.215 11:35:21 -- common/autotest_common.sh@10 -- # set +x 00:06:56.215 ************************************ 00:06:56.215 START TEST thread 00:06:56.215 ************************************ 00:06:56.215 11:35:22 thread -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:56.215 * Looking for test storage... 
00:06:56.215 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:56.215 11:35:22 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:56.215 11:35:22 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:06:56.215 11:35:22 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:56.474 11:35:22 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:56.474 11:35:22 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:56.474 11:35:22 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:56.474 11:35:22 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:56.474 11:35:22 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:56.474 11:35:22 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:56.474 11:35:22 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:56.474 11:35:22 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:56.474 11:35:22 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:56.474 11:35:22 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:56.474 11:35:22 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:56.474 11:35:22 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:56.474 11:35:22 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:56.474 11:35:22 thread -- scripts/common.sh@345 -- # : 1 00:06:56.474 11:35:22 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:56.474 11:35:22 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:56.474 11:35:22 thread -- scripts/common.sh@365 -- # decimal 1 00:06:56.474 11:35:22 thread -- scripts/common.sh@353 -- # local d=1 00:06:56.474 11:35:22 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:56.474 11:35:22 thread -- scripts/common.sh@355 -- # echo 1 00:06:56.474 11:35:22 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:56.474 11:35:22 thread -- scripts/common.sh@366 -- # decimal 2 00:06:56.474 11:35:22 thread -- scripts/common.sh@353 -- # local d=2 00:06:56.474 11:35:22 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:56.474 11:35:22 thread -- scripts/common.sh@355 -- # echo 2 00:06:56.474 11:35:22 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:56.474 11:35:22 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:56.474 11:35:22 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:56.474 11:35:22 thread -- scripts/common.sh@368 -- # return 0 00:06:56.474 11:35:22 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:56.474 11:35:22 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:56.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.474 --rc genhtml_branch_coverage=1 00:06:56.474 --rc genhtml_function_coverage=1 00:06:56.474 --rc genhtml_legend=1 00:06:56.474 --rc geninfo_all_blocks=1 00:06:56.474 --rc geninfo_unexecuted_blocks=1 00:06:56.474 00:06:56.474 ' 00:06:56.474 11:35:22 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:56.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.474 --rc genhtml_branch_coverage=1 00:06:56.474 --rc genhtml_function_coverage=1 00:06:56.474 --rc genhtml_legend=1 00:06:56.474 --rc geninfo_all_blocks=1 00:06:56.474 --rc geninfo_unexecuted_blocks=1 00:06:56.474 00:06:56.474 ' 00:06:56.474 11:35:22 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:56.474 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.474 --rc genhtml_branch_coverage=1 00:06:56.474 --rc genhtml_function_coverage=1 00:06:56.474 --rc genhtml_legend=1 00:06:56.474 --rc geninfo_all_blocks=1 00:06:56.474 --rc geninfo_unexecuted_blocks=1 00:06:56.474 00:06:56.474 ' 00:06:56.474 11:35:22 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:56.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.474 --rc genhtml_branch_coverage=1 00:06:56.474 --rc genhtml_function_coverage=1 00:06:56.474 --rc genhtml_legend=1 00:06:56.474 --rc geninfo_all_blocks=1 00:06:56.474 --rc geninfo_unexecuted_blocks=1 00:06:56.474 00:06:56.475 ' 00:06:56.475 11:35:22 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:56.475 11:35:22 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:56.475 11:35:22 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:56.475 11:35:22 thread -- common/autotest_common.sh@10 -- # set +x 00:06:56.475 ************************************ 00:06:56.475 START TEST thread_poller_perf 00:06:56.475 ************************************ 00:06:56.475 11:35:22 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:56.475 [2024-11-18 11:35:22.221228] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:06:56.475 [2024-11-18 11:35:22.221358] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2834943 ] 00:06:56.733 [2024-11-18 11:35:22.364685] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.733 [2024-11-18 11:35:22.504624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.733 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:58.106 [2024-11-18T10:35:23.991Z] ====================================== 00:06:58.106 [2024-11-18T10:35:23.991Z] busy:2710765926 (cyc) 00:06:58.106 [2024-11-18T10:35:23.991Z] total_run_count: 281000 00:06:58.106 [2024-11-18T10:35:23.991Z] tsc_hz: 2700000000 (cyc) 00:06:58.106 [2024-11-18T10:35:23.991Z] ====================================== 00:06:58.106 [2024-11-18T10:35:23.991Z] poller_cost: 9646 (cyc), 3572 (nsec) 00:06:58.106 00:06:58.106 real 0m1.578s 00:06:58.106 user 0m1.431s 00:06:58.106 sys 0m0.139s 00:06:58.106 11:35:23 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:58.106 11:35:23 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:58.106 ************************************ 00:06:58.106 END TEST thread_poller_perf 00:06:58.106 ************************************ 00:06:58.106 11:35:23 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:58.106 11:35:23 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:58.106 11:35:23 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:58.106 11:35:23 thread -- common/autotest_common.sh@10 -- # set +x 00:06:58.106 ************************************ 00:06:58.106 START TEST thread_poller_perf 00:06:58.106 
************************************ 00:06:58.106 11:35:23 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:58.106 [2024-11-18 11:35:23.859248] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:06:58.106 [2024-11-18 11:35:23.859386] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2835212 ] 00:06:58.365 [2024-11-18 11:35:24.020584] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.365 [2024-11-18 11:35:24.159111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.365 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:59.738 [2024-11-18T10:35:25.623Z] ====================================== 00:06:59.738 [2024-11-18T10:35:25.623Z] busy:2704951937 (cyc) 00:06:59.738 [2024-11-18T10:35:25.623Z] total_run_count: 3757000 00:06:59.738 [2024-11-18T10:35:25.623Z] tsc_hz: 2700000000 (cyc) 00:06:59.738 [2024-11-18T10:35:25.623Z] ====================================== 00:06:59.738 [2024-11-18T10:35:25.623Z] poller_cost: 719 (cyc), 266 (nsec) 00:06:59.738 00:06:59.738 real 0m1.599s 00:06:59.738 user 0m1.425s 00:06:59.738 sys 0m0.166s 00:06:59.738 11:35:25 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:59.738 11:35:25 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:59.738 ************************************ 00:06:59.739 END TEST thread_poller_perf 00:06:59.739 ************************************ 00:06:59.739 11:35:25 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:59.739 00:06:59.739 real 0m3.420s 00:06:59.739 user 0m2.997s 00:06:59.739 sys 0m0.421s 00:06:59.739 11:35:25 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:06:59.739 11:35:25 thread -- common/autotest_common.sh@10 -- # set +x 00:06:59.739 ************************************ 00:06:59.739 END TEST thread 00:06:59.739 ************************************ 00:06:59.739 11:35:25 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:59.739 11:35:25 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:59.739 11:35:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:59.739 11:35:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:59.739 11:35:25 -- common/autotest_common.sh@10 -- # set +x 00:06:59.739 ************************************ 00:06:59.739 START TEST app_cmdline 00:06:59.739 ************************************ 00:06:59.739 11:35:25 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:59.739 * Looking for test storage... 00:06:59.739 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:59.739 11:35:25 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:59.739 11:35:25 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:06:59.739 11:35:25 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:59.997 11:35:25 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:59.997 11:35:25 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:59.997 11:35:25 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:59.997 11:35:25 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:59.997 11:35:25 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:59.997 11:35:25 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:59.997 11:35:25 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:59.997 11:35:25 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 
00:06:59.997 11:35:25 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:59.997 11:35:25 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:59.997 11:35:25 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:59.998 11:35:25 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:59.998 11:35:25 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:59.998 11:35:25 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:59.998 11:35:25 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:59.998 11:35:25 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:59.998 11:35:25 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:59.998 11:35:25 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:59.998 11:35:25 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:59.998 11:35:25 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:59.998 11:35:25 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:59.998 11:35:25 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:59.998 11:35:25 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:59.998 11:35:25 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:59.998 11:35:25 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:59.998 11:35:25 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:59.998 11:35:25 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:59.998 11:35:25 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:59.998 11:35:25 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:59.998 11:35:25 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:59.998 11:35:25 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:59.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.998 --rc genhtml_branch_coverage=1 
00:06:59.998 --rc genhtml_function_coverage=1 00:06:59.998 --rc genhtml_legend=1 00:06:59.998 --rc geninfo_all_blocks=1 00:06:59.998 --rc geninfo_unexecuted_blocks=1 00:06:59.998 00:06:59.998 ' 00:06:59.998 11:35:25 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:59.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.998 --rc genhtml_branch_coverage=1 00:06:59.998 --rc genhtml_function_coverage=1 00:06:59.998 --rc genhtml_legend=1 00:06:59.998 --rc geninfo_all_blocks=1 00:06:59.998 --rc geninfo_unexecuted_blocks=1 00:06:59.998 00:06:59.998 ' 00:06:59.998 11:35:25 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:59.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.998 --rc genhtml_branch_coverage=1 00:06:59.998 --rc genhtml_function_coverage=1 00:06:59.998 --rc genhtml_legend=1 00:06:59.998 --rc geninfo_all_blocks=1 00:06:59.998 --rc geninfo_unexecuted_blocks=1 00:06:59.998 00:06:59.998 ' 00:06:59.998 11:35:25 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:59.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.998 --rc genhtml_branch_coverage=1 00:06:59.998 --rc genhtml_function_coverage=1 00:06:59.998 --rc genhtml_legend=1 00:06:59.998 --rc geninfo_all_blocks=1 00:06:59.998 --rc geninfo_unexecuted_blocks=1 00:06:59.998 00:06:59.998 ' 00:06:59.998 11:35:25 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:59.998 11:35:25 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2835429 00:06:59.998 11:35:25 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:59.998 11:35:25 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2835429 00:06:59.998 11:35:25 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 2835429 ']' 00:06:59.998 11:35:25 app_cmdline -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:06:59.998 11:35:25 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:59.998 11:35:25 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:59.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:59.998 11:35:25 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:59.998 11:35:25 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:59.998 [2024-11-18 11:35:25.744987] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:06:59.998 [2024-11-18 11:35:25.745119] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2835429 ] 00:07:00.256 [2024-11-18 11:35:25.893175] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.256 [2024-11-18 11:35:26.031612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.189 11:35:26 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:01.189 11:35:26 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:07:01.189 11:35:26 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:01.446 { 00:07:01.446 "version": "SPDK v25.01-pre git sha1 83e8405e4", 00:07:01.446 "fields": { 00:07:01.446 "major": 25, 00:07:01.446 "minor": 1, 00:07:01.446 "patch": 0, 00:07:01.446 "suffix": "-pre", 00:07:01.446 "commit": "83e8405e4" 00:07:01.446 } 00:07:01.446 } 00:07:01.446 11:35:27 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:01.446 11:35:27 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:01.446 11:35:27 app_cmdline -- app/cmdline.sh@24 -- 
# expected_methods+=("spdk_get_version") 00:07:01.446 11:35:27 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:01.446 11:35:27 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:01.446 11:35:27 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:01.446 11:35:27 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.446 11:35:27 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:01.446 11:35:27 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:01.446 11:35:27 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.704 11:35:27 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:01.704 11:35:27 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:01.704 11:35:27 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:01.704 11:35:27 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:07:01.704 11:35:27 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:01.704 11:35:27 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:01.704 11:35:27 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:01.704 11:35:27 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:01.704 11:35:27 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:01.704 11:35:27 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:01.704 11:35:27 app_cmdline -- common/autotest_common.sh@644 -- # case 
"$(type -t "$arg")" in 00:07:01.704 11:35:27 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:01.704 11:35:27 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:01.704 11:35:27 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:01.963 request: 00:07:01.963 { 00:07:01.963 "method": "env_dpdk_get_mem_stats", 00:07:01.963 "req_id": 1 00:07:01.963 } 00:07:01.963 Got JSON-RPC error response 00:07:01.963 response: 00:07:01.963 { 00:07:01.963 "code": -32601, 00:07:01.963 "message": "Method not found" 00:07:01.963 } 00:07:01.963 11:35:27 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:07:01.963 11:35:27 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:01.963 11:35:27 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:01.963 11:35:27 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:01.963 11:35:27 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2835429 00:07:01.963 11:35:27 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 2835429 ']' 00:07:01.963 11:35:27 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 2835429 00:07:01.963 11:35:27 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:07:01.963 11:35:27 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:01.963 11:35:27 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2835429 00:07:01.963 11:35:27 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:01.963 11:35:27 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:01.963 11:35:27 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2835429' 00:07:01.963 killing process with pid 2835429 00:07:01.963 
11:35:27 app_cmdline -- common/autotest_common.sh@973 -- # kill 2835429 00:07:01.963 11:35:27 app_cmdline -- common/autotest_common.sh@978 -- # wait 2835429 00:07:04.492 00:07:04.492 real 0m4.609s 00:07:04.492 user 0m5.173s 00:07:04.492 sys 0m0.714s 00:07:04.492 11:35:30 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:04.492 11:35:30 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:04.492 ************************************ 00:07:04.492 END TEST app_cmdline 00:07:04.492 ************************************ 00:07:04.492 11:35:30 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:04.492 11:35:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:04.493 11:35:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:04.493 11:35:30 -- common/autotest_common.sh@10 -- # set +x 00:07:04.493 ************************************ 00:07:04.493 START TEST version 00:07:04.493 ************************************ 00:07:04.493 11:35:30 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:04.493 * Looking for test storage... 
00:07:04.493 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:04.493 11:35:30 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:04.493 11:35:30 version -- common/autotest_common.sh@1693 -- # lcov --version 00:07:04.493 11:35:30 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:04.493 11:35:30 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:04.493 11:35:30 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:04.493 11:35:30 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:04.493 11:35:30 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:04.493 11:35:30 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:04.493 11:35:30 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:04.493 11:35:30 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:04.493 11:35:30 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:04.493 11:35:30 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:04.493 11:35:30 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:04.493 11:35:30 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:04.493 11:35:30 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:04.493 11:35:30 version -- scripts/common.sh@344 -- # case "$op" in 00:07:04.493 11:35:30 version -- scripts/common.sh@345 -- # : 1 00:07:04.493 11:35:30 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:04.493 11:35:30 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:04.493 11:35:30 version -- scripts/common.sh@365 -- # decimal 1 00:07:04.493 11:35:30 version -- scripts/common.sh@353 -- # local d=1 00:07:04.493 11:35:30 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:04.493 11:35:30 version -- scripts/common.sh@355 -- # echo 1 00:07:04.493 11:35:30 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:04.493 11:35:30 version -- scripts/common.sh@366 -- # decimal 2 00:07:04.493 11:35:30 version -- scripts/common.sh@353 -- # local d=2 00:07:04.493 11:35:30 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:04.493 11:35:30 version -- scripts/common.sh@355 -- # echo 2 00:07:04.493 11:35:30 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:04.493 11:35:30 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:04.493 11:35:30 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:04.493 11:35:30 version -- scripts/common.sh@368 -- # return 0 00:07:04.493 11:35:30 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:04.493 11:35:30 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:04.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.493 --rc genhtml_branch_coverage=1 00:07:04.493 --rc genhtml_function_coverage=1 00:07:04.493 --rc genhtml_legend=1 00:07:04.493 --rc geninfo_all_blocks=1 00:07:04.493 --rc geninfo_unexecuted_blocks=1 00:07:04.493 00:07:04.493 ' 00:07:04.493 11:35:30 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:04.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.493 --rc genhtml_branch_coverage=1 00:07:04.493 --rc genhtml_function_coverage=1 00:07:04.493 --rc genhtml_legend=1 00:07:04.493 --rc geninfo_all_blocks=1 00:07:04.493 --rc geninfo_unexecuted_blocks=1 00:07:04.493 00:07:04.493 ' 00:07:04.493 11:35:30 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:04.493 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.493 --rc genhtml_branch_coverage=1 00:07:04.493 --rc genhtml_function_coverage=1 00:07:04.493 --rc genhtml_legend=1 00:07:04.493 --rc geninfo_all_blocks=1 00:07:04.493 --rc geninfo_unexecuted_blocks=1 00:07:04.493 00:07:04.493 ' 00:07:04.493 11:35:30 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:04.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.493 --rc genhtml_branch_coverage=1 00:07:04.493 --rc genhtml_function_coverage=1 00:07:04.493 --rc genhtml_legend=1 00:07:04.493 --rc geninfo_all_blocks=1 00:07:04.493 --rc geninfo_unexecuted_blocks=1 00:07:04.493 00:07:04.493 ' 00:07:04.493 11:35:30 version -- app/version.sh@17 -- # get_header_version major 00:07:04.493 11:35:30 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:04.493 11:35:30 version -- app/version.sh@14 -- # cut -f2 00:07:04.493 11:35:30 version -- app/version.sh@14 -- # tr -d '"' 00:07:04.493 11:35:30 version -- app/version.sh@17 -- # major=25 00:07:04.493 11:35:30 version -- app/version.sh@18 -- # get_header_version minor 00:07:04.493 11:35:30 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:04.493 11:35:30 version -- app/version.sh@14 -- # cut -f2 00:07:04.493 11:35:30 version -- app/version.sh@14 -- # tr -d '"' 00:07:04.493 11:35:30 version -- app/version.sh@18 -- # minor=1 00:07:04.493 11:35:30 version -- app/version.sh@19 -- # get_header_version patch 00:07:04.493 11:35:30 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:04.493 11:35:30 version -- app/version.sh@14 -- # cut -f2 00:07:04.493 11:35:30 version -- app/version.sh@14 -- # tr -d '"' 00:07:04.493 
11:35:30 version -- app/version.sh@19 -- # patch=0 00:07:04.493 11:35:30 version -- app/version.sh@20 -- # get_header_version suffix 00:07:04.493 11:35:30 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:04.493 11:35:30 version -- app/version.sh@14 -- # cut -f2 00:07:04.493 11:35:30 version -- app/version.sh@14 -- # tr -d '"' 00:07:04.493 11:35:30 version -- app/version.sh@20 -- # suffix=-pre 00:07:04.493 11:35:30 version -- app/version.sh@22 -- # version=25.1 00:07:04.493 11:35:30 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:04.493 11:35:30 version -- app/version.sh@28 -- # version=25.1rc0 00:07:04.493 11:35:30 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:04.493 11:35:30 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:04.493 11:35:30 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:04.493 11:35:30 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:04.493 00:07:04.493 real 0m0.215s 00:07:04.493 user 0m0.136s 00:07:04.493 sys 0m0.105s 00:07:04.493 11:35:30 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:04.493 11:35:30 version -- common/autotest_common.sh@10 -- # set +x 00:07:04.493 ************************************ 00:07:04.493 END TEST version 00:07:04.493 ************************************ 00:07:04.752 11:35:30 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:04.752 11:35:30 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:04.752 11:35:30 -- spdk/autotest.sh@194 -- # uname -s 00:07:04.752 11:35:30 -- spdk/autotest.sh@194 -- # [[ Linux 
== Linux ]] 00:07:04.752 11:35:30 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:04.752 11:35:30 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:04.752 11:35:30 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:07:04.752 11:35:30 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:04.752 11:35:30 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:04.752 11:35:30 -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:04.752 11:35:30 -- common/autotest_common.sh@10 -- # set +x 00:07:04.752 11:35:30 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:04.752 11:35:30 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:07:04.752 11:35:30 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:07:04.752 11:35:30 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:07:04.752 11:35:30 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:07:04.752 11:35:30 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:07:04.752 11:35:30 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:04.752 11:35:30 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:04.752 11:35:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:04.752 11:35:30 -- common/autotest_common.sh@10 -- # set +x 00:07:04.752 ************************************ 00:07:04.752 START TEST nvmf_tcp 00:07:04.752 ************************************ 00:07:04.752 11:35:30 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:04.752 * Looking for test storage... 
00:07:04.752 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:04.752 11:35:30 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:04.752 11:35:30 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:07:04.752 11:35:30 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:04.752 11:35:30 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:04.752 11:35:30 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:04.752 11:35:30 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:04.752 11:35:30 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:04.752 11:35:30 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:04.752 11:35:30 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:04.752 11:35:30 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:04.752 11:35:30 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:04.752 11:35:30 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:04.752 11:35:30 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:04.752 11:35:30 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:04.752 11:35:30 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:04.752 11:35:30 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:04.752 11:35:30 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:07:04.752 11:35:30 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:04.752 11:35:30 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:04.752 11:35:30 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:04.752 11:35:30 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:07:04.752 11:35:30 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:04.752 11:35:30 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:07:04.752 11:35:30 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:04.752 11:35:30 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:04.752 11:35:30 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:07:04.752 11:35:30 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:04.752 11:35:30 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:07:04.752 11:35:30 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:04.752 11:35:30 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:04.752 11:35:30 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:04.752 11:35:30 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:07:04.752 11:35:30 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:04.752 11:35:30 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:04.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.752 --rc genhtml_branch_coverage=1 00:07:04.752 --rc genhtml_function_coverage=1 00:07:04.752 --rc genhtml_legend=1 00:07:04.752 --rc geninfo_all_blocks=1 00:07:04.752 --rc geninfo_unexecuted_blocks=1 00:07:04.752 00:07:04.752 ' 00:07:04.752 11:35:30 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:04.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.752 --rc genhtml_branch_coverage=1 00:07:04.752 --rc genhtml_function_coverage=1 00:07:04.752 --rc genhtml_legend=1 00:07:04.752 --rc geninfo_all_blocks=1 00:07:04.752 --rc geninfo_unexecuted_blocks=1 00:07:04.752 00:07:04.752 ' 00:07:04.752 11:35:30 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:07:04.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.752 --rc genhtml_branch_coverage=1 00:07:04.752 --rc genhtml_function_coverage=1 00:07:04.752 --rc genhtml_legend=1 00:07:04.752 --rc geninfo_all_blocks=1 00:07:04.752 --rc geninfo_unexecuted_blocks=1 00:07:04.752 00:07:04.752 ' 00:07:04.752 11:35:30 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:04.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.752 --rc genhtml_branch_coverage=1 00:07:04.752 --rc genhtml_function_coverage=1 00:07:04.752 --rc genhtml_legend=1 00:07:04.752 --rc geninfo_all_blocks=1 00:07:04.752 --rc geninfo_unexecuted_blocks=1 00:07:04.752 00:07:04.752 ' 00:07:04.752 11:35:30 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:04.752 11:35:30 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:04.752 11:35:30 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:04.752 11:35:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:04.752 11:35:30 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:04.752 11:35:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:04.752 ************************************ 00:07:04.752 START TEST nvmf_target_core 00:07:04.752 ************************************ 00:07:04.752 11:35:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:04.752 * Looking for test storage... 
00:07:04.752 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:04.752 11:35:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:04.752 11:35:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:07:04.752 11:35:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:05.012 11:35:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:05.012 11:35:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:05.012 11:35:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:05.012 11:35:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:05.012 11:35:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:07:05.012 11:35:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:07:05.012 11:35:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:07:05.012 11:35:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:07:05.012 11:35:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:07:05.012 11:35:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:07:05.012 11:35:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:07:05.012 11:35:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:05.012 11:35:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:07:05.012 11:35:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:07:05.012 11:35:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:05.012 11:35:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:05.012 11:35:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:07:05.012 11:35:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:07:05.012 11:35:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:05.012 11:35:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:07:05.012 11:35:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:07:05.012 11:35:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:07:05.012 11:35:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:07:05.012 11:35:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:05.012 11:35:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:07:05.012 11:35:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:07:05.012 11:35:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:05.012 11:35:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:05.012 11:35:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:07:05.012 11:35:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:05.012 11:35:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:05.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.012 --rc genhtml_branch_coverage=1 00:07:05.012 --rc genhtml_function_coverage=1 00:07:05.012 --rc genhtml_legend=1 00:07:05.012 --rc geninfo_all_blocks=1 00:07:05.012 --rc geninfo_unexecuted_blocks=1 00:07:05.012 00:07:05.012 ' 00:07:05.012 11:35:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:05.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.012 --rc genhtml_branch_coverage=1 
00:07:05.012 --rc genhtml_function_coverage=1 00:07:05.012 --rc genhtml_legend=1 00:07:05.012 --rc geninfo_all_blocks=1 00:07:05.012 --rc geninfo_unexecuted_blocks=1 00:07:05.012 00:07:05.012 ' 00:07:05.012 11:35:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:05.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.012 --rc genhtml_branch_coverage=1 00:07:05.012 --rc genhtml_function_coverage=1 00:07:05.012 --rc genhtml_legend=1 00:07:05.012 --rc geninfo_all_blocks=1 00:07:05.012 --rc geninfo_unexecuted_blocks=1 00:07:05.012 00:07:05.012 ' 00:07:05.012 11:35:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:05.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.012 --rc genhtml_branch_coverage=1 00:07:05.012 --rc genhtml_function_coverage=1 00:07:05.012 --rc genhtml_legend=1 00:07:05.012 --rc geninfo_all_blocks=1 00:07:05.012 --rc geninfo_unexecuted_blocks=1 00:07:05.012 00:07:05.012 ' 00:07:05.012 11:35:30 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:07:05.012 11:35:30 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:05.012 11:35:30 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:05.012 11:35:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:07:05.012 11:35:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:05.012 11:35:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:05.012 11:35:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:05.012 11:35:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:05.012 11:35:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:05.012 11:35:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:05.012 11:35:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:05.012 11:35:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:05.012 11:35:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:05.012 11:35:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:05.012 11:35:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:05.012 11:35:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:05.013 11:35:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:05.013 11:35:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:05.013 11:35:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:05.013 11:35:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:05.013 11:35:30 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:05.013 11:35:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:07:05.013 11:35:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:05.013 11:35:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:05.013 11:35:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:05.013 11:35:30 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.013 11:35:30 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.013 11:35:30 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.013 11:35:30 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:07:05.013 11:35:30 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.013 11:35:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:07:05.013 11:35:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:05.013 11:35:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:05.013 11:35:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:05.013 11:35:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:05.013 11:35:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:05.013 11:35:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:05.013 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:05.013 11:35:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 
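The `common.sh: line 33: [: : integer expression expected` warning in the trace above comes from the `'[' '' -eq 1 ']'` test: `-eq` requires integer operands, so an unset or empty variable makes `[` emit that diagnostic and return false. A minimal reproduction and guard (the `flag` variable is a hypothetical stand-in for whatever option `common.sh` checks there):

```shell
#!/usr/bin/env bash
flag=""   # hypothetical stand-in for the unset option tested at common.sh:33

# This form emits "[: : integer expression expected" (suppressed here)
# and evaluates false, which is why the test run continues past it:
if [ "$flag" -eq 1 ] 2>/dev/null; then
    echo "flag set"
fi

# Guarded form: default the empty value to 0 before the numeric test.
if [ "${flag:-0}" -eq 1 ]; then
    echo "flag set"
else
    echo "flag unset"
fi
```

The `${flag:-0}` expansion is the usual fix for this class of warning; it keeps the numeric comparison while tolerating an unset variable.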
00:07:05.013 11:35:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:05.013 11:35:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:05.013 11:35:30 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:05.013 11:35:30 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:07:05.013 11:35:30 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:07:05.013 11:35:30 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:05.013 11:35:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:05.013 11:35:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:05.013 11:35:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:05.013 ************************************ 00:07:05.013 START TEST nvmf_abort 00:07:05.013 ************************************ 00:07:05.013 11:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:05.013 * Looking for test storage... 
00:07:05.013 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:05.013 11:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:05.013 11:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:07:05.013 11:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:05.013 11:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:05.013 11:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:05.013 11:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:05.013 11:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:05.013 11:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:07:05.013 11:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:07:05.013 11:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:07:05.013 11:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:07:05.013 11:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:07:05.013 11:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:07:05.013 11:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:07:05.013 11:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:05.013 11:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:07:05.013 11:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:07:05.013 11:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:05.013 
11:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:05.013 11:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:07:05.013 11:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:07:05.013 11:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:05.013 11:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:07:05.272 11:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:07:05.272 11:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:07:05.272 11:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:07:05.272 11:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:05.272 11:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:07:05.272 11:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:07:05.272 11:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:05.272 11:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:05.272 11:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:07:05.272 11:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:05.272 11:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:05.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.272 --rc genhtml_branch_coverage=1 00:07:05.272 --rc genhtml_function_coverage=1 00:07:05.272 --rc genhtml_legend=1 00:07:05.272 --rc geninfo_all_blocks=1 00:07:05.272 --rc 
geninfo_unexecuted_blocks=1 00:07:05.272 00:07:05.272 ' 00:07:05.272 11:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:05.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.272 --rc genhtml_branch_coverage=1 00:07:05.272 --rc genhtml_function_coverage=1 00:07:05.272 --rc genhtml_legend=1 00:07:05.272 --rc geninfo_all_blocks=1 00:07:05.272 --rc geninfo_unexecuted_blocks=1 00:07:05.272 00:07:05.272 ' 00:07:05.272 11:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:05.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.272 --rc genhtml_branch_coverage=1 00:07:05.272 --rc genhtml_function_coverage=1 00:07:05.272 --rc genhtml_legend=1 00:07:05.272 --rc geninfo_all_blocks=1 00:07:05.272 --rc geninfo_unexecuted_blocks=1 00:07:05.272 00:07:05.273 ' 00:07:05.273 11:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:05.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.273 --rc genhtml_branch_coverage=1 00:07:05.273 --rc genhtml_function_coverage=1 00:07:05.273 --rc genhtml_legend=1 00:07:05.273 --rc geninfo_all_blocks=1 00:07:05.273 --rc geninfo_unexecuted_blocks=1 00:07:05.273 00:07:05.273 ' 00:07:05.273 11:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:05.273 11:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:07:05.273 11:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:05.273 11:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:05.273 11:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:05.273 11:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
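The repeated `scripts/common.sh` trace above (`lt 1.15 2`, `IFS=.-:`, `read -ra ver1`, the per-component `ver1[v] < ver2[v]` loop) is a version comparison: split each version string on `.`, `-`, or `:` and compare component by component. A condensed sketch of the same idea — an illustration assuming purely numeric components, not the exact SPDK implementation:

```shell
#!/usr/bin/env bash
# lt A B -> exit 0 if version A sorts strictly before version B.
# Mirrors the traced logic: split on '.', '-' or ':', compare numerically,
# treating missing trailing components as 0 (so "2" compares as "2.0").
lt() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local v a b
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a > b )) && return 1
        (( a < b )) && return 0
    done
    return 1   # all components equal -> not less-than
}
```

With this, `lt 1.15 2` succeeds (1 < 2 decides at the first component), matching the `return 0` seen in the trace; suffixes like `rc0` would need extra handling before the arithmetic compare.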
00:07:05.273 11:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:05.273 11:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:05.273 11:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:05.273 11:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:05.273 11:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:05.273 11:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:05.273 11:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:05.273 11:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:05.273 11:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:05.273 11:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:05.273 11:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:05.273 11:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:05.273 11:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:05.273 11:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:07:05.273 11:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:05.273 11:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:05.273 11:35:30 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:05.273 11:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.273 11:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.273 11:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.273 11:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:07:05.273 11:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.273 11:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:07:05.273 11:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:05.273 11:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:05.273 11:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:05.273 11:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:05.273 11:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:05.273 11:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:05.273 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:05.273 11:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:05.273 11:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:05.273 11:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:05.273 11:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:05.273 11:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:05.273 11:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:07:05.273 11:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:05.273 11:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:05.273 11:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:05.273 11:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:05.273 11:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:05.273 11:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:05.273 11:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:05.273 11:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:05.273 11:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:05.273 11:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:07:05.273 11:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:07:05.273 11:35:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:07.174 11:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:07.174 11:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:07:07.174 11:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:07.174 11:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:07.174 11:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:07.174 11:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:07.174 11:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:07.174 11:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:07:07.174 11:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:07.174 11:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:07:07.174 11:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:07:07.174 11:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:07:07.174 11:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:07:07.174 11:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:07:07.174 11:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:07:07.174 11:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:07.174 11:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:07.174 11:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:07.174 11:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:07.174 11:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:07.174 11:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:07.174 11:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:07.174 11:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:07.174 11:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:07.174 11:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:07.174 11:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:07.174 11:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:07.174 11:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:07.174 11:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:07.174 11:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:07.174 11:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:07.174 11:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:07.174 11:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:07.174 11:35:32 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:07.174 11:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:07.174 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:07.174 11:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:07.174 11:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:07.174 11:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:07.174 11:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:07.174 11:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:07.174 11:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:07.174 11:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:07.174 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:07.174 11:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:07.174 11:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:07.174 11:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:07.174 11:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:07.174 11:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:07.174 11:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:07.174 11:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:07.174 11:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:07.174 11:35:32 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:07.174 11:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:07.174 11:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:07.174 11:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:07.175 11:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:07.175 11:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:07.175 11:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:07.175 11:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:07.175 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:07.175 11:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:07.175 11:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:07.175 11:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:07.175 11:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:07.175 11:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:07.175 11:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:07.175 11:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:07.175 11:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:07.175 11:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net 
devices under 0000:0a:00.1: cvl_0_1' 00:07:07.175 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:07.175 11:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:07.175 11:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:07.175 11:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:07:07.175 11:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:07.175 11:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:07.175 11:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:07.175 11:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:07.175 11:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:07.175 11:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:07.175 11:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:07.175 11:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:07.175 11:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:07.175 11:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:07.175 11:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:07.175 11:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:07.175 11:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:07.175 11:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:07:07.175 11:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:07.175 11:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:07.175 11:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:07.175 11:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:07.175 11:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:07.175 11:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:07.175 11:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:07.175 11:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:07.175 11:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:07.175 11:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:07.175 11:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:07.175 11:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:07.175 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:07.175 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms 00:07:07.175 00:07:07.175 --- 10.0.0.2 ping statistics --- 00:07:07.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:07.175 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:07:07.175 11:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:07.175 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:07.175 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.085 ms 00:07:07.175 00:07:07.175 --- 10.0.0.1 ping statistics --- 00:07:07.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:07.175 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:07:07.175 11:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:07.175 11:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:07:07.175 11:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:07.175 11:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:07.175 11:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:07.175 11:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:07.175 11:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:07.175 11:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:07.175 11:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:07.175 11:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:07:07.175 11:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:07.175 11:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- common/autotest_common.sh@726 -- # xtrace_disable 00:07:07.175 11:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:07.175 11:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=2837907 00:07:07.175 11:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:07.175 11:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 2837907 00:07:07.175 11:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 2837907 ']' 00:07:07.175 11:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:07.175 11:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:07.175 11:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:07.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:07.175 11:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:07.175 11:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:07.434 [2024-11-18 11:35:33.143674] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:07:07.434 [2024-11-18 11:35:33.143823] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:07.434 [2024-11-18 11:35:33.296743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:07.692 [2024-11-18 11:35:33.441326] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:07.692 [2024-11-18 11:35:33.441418] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:07.692 [2024-11-18 11:35:33.441444] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:07.692 [2024-11-18 11:35:33.441468] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:07.692 [2024-11-18 11:35:33.441488] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:07.692 [2024-11-18 11:35:33.444252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:07.692 [2024-11-18 11:35:33.444308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:07.692 [2024-11-18 11:35:33.444313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:08.258 11:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:08.258 11:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:07:08.258 11:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:08.258 11:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:08.258 11:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:08.258 11:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:08.258 11:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:07:08.258 11:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.258 11:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:08.258 [2024-11-18 11:35:34.135120] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:08.258 11:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.258 11:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:07:08.258 11:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.258 11:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:08.516 Malloc0 00:07:08.516 11:35:34 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.516 11:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:08.516 11:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.516 11:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:08.516 Delay0 00:07:08.516 11:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.516 11:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:08.516 11:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.516 11:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:08.516 11:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.516 11:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:07:08.516 11:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.516 11:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:08.516 11:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.516 11:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:08.516 11:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.516 11:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:08.516 [2024-11-18 11:35:34.255639] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:08.516 11:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.516 11:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:08.516 11:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.517 11:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:08.517 11:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.517 11:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:07:08.777 [2024-11-18 11:35:34.463647] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:10.698 Initializing NVMe Controllers 00:07:10.698 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:10.698 controller IO queue size 128 less than required 00:07:10.698 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:07:10.698 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:07:10.698 Initialization complete. Launching workers. 
00:07:10.698 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 22674 00:07:10.698 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 22731, failed to submit 66 00:07:10.698 success 22674, unsuccessful 57, failed 0 00:07:10.956 11:35:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:10.956 11:35:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.956 11:35:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:10.956 11:35:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.956 11:35:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:07:10.956 11:35:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:07:10.956 11:35:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:10.956 11:35:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:07:10.956 11:35:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:10.956 11:35:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:07:10.956 11:35:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:10.956 11:35:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:10.956 rmmod nvme_tcp 00:07:10.956 rmmod nvme_fabrics 00:07:10.956 rmmod nvme_keyring 00:07:10.956 11:35:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:10.956 11:35:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:07:10.956 11:35:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:07:10.956 11:35:36 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 2837907 ']' 00:07:10.956 11:35:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 2837907 00:07:10.956 11:35:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 2837907 ']' 00:07:10.956 11:35:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 2837907 00:07:10.956 11:35:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:07:10.956 11:35:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:10.956 11:35:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2837907 00:07:10.956 11:35:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:10.956 11:35:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:10.956 11:35:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2837907' 00:07:10.956 killing process with pid 2837907 00:07:10.956 11:35:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 2837907 00:07:10.956 11:35:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 2837907 00:07:12.334 11:35:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:12.334 11:35:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:12.334 11:35:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:12.334 11:35:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:07:12.334 11:35:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:07:12.334 11:35:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- 
# grep -v SPDK_NVMF 00:07:12.334 11:35:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:07:12.334 11:35:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:12.334 11:35:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:12.334 11:35:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:12.334 11:35:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:12.334 11:35:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:14.241 11:35:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:14.241 00:07:14.241 real 0m9.206s 00:07:14.241 user 0m15.459s 00:07:14.241 sys 0m2.700s 00:07:14.241 11:35:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:14.241 11:35:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:14.241 ************************************ 00:07:14.241 END TEST nvmf_abort 00:07:14.241 ************************************ 00:07:14.241 11:35:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:14.241 11:35:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:14.241 11:35:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:14.241 11:35:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:14.241 ************************************ 00:07:14.241 START TEST nvmf_ns_hotplug_stress 00:07:14.241 ************************************ 00:07:14.241 11:35:40 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:14.241 * Looking for test storage... 00:07:14.241 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:14.241 11:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:14.241 11:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:07:14.241 11:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:14.500 11:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:14.500 11:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:14.500 11:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:14.500 11:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:14.500 11:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:07:14.500 11:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:07:14.500 11:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:07:14.500 11:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:07:14.500 11:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:07:14.500 11:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:07:14.500 11:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:07:14.500 
11:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:14.500 11:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:07:14.500 11:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:07:14.501 11:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:14.501 11:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:14.501 11:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:07:14.501 11:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:07:14.501 11:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:14.501 11:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:07:14.501 11:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:07:14.501 11:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:07:14.501 11:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:07:14.501 11:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:14.501 11:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:07:14.501 11:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:07:14.501 11:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:14.501 11:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:14.501 11:35:40 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:07:14.501 11:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:14.501 11:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:14.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.501 --rc genhtml_branch_coverage=1 00:07:14.501 --rc genhtml_function_coverage=1 00:07:14.501 --rc genhtml_legend=1 00:07:14.501 --rc geninfo_all_blocks=1 00:07:14.501 --rc geninfo_unexecuted_blocks=1 00:07:14.501 00:07:14.501 ' 00:07:14.501 11:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:14.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.501 --rc genhtml_branch_coverage=1 00:07:14.501 --rc genhtml_function_coverage=1 00:07:14.501 --rc genhtml_legend=1 00:07:14.501 --rc geninfo_all_blocks=1 00:07:14.501 --rc geninfo_unexecuted_blocks=1 00:07:14.501 00:07:14.501 ' 00:07:14.501 11:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:14.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.501 --rc genhtml_branch_coverage=1 00:07:14.501 --rc genhtml_function_coverage=1 00:07:14.501 --rc genhtml_legend=1 00:07:14.501 --rc geninfo_all_blocks=1 00:07:14.501 --rc geninfo_unexecuted_blocks=1 00:07:14.501 00:07:14.501 ' 00:07:14.501 11:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:14.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.501 --rc genhtml_branch_coverage=1 00:07:14.501 --rc genhtml_function_coverage=1 00:07:14.501 --rc genhtml_legend=1 00:07:14.501 --rc geninfo_all_blocks=1 00:07:14.501 --rc geninfo_unexecuted_blocks=1 00:07:14.501 
00:07:14.501 ' 00:07:14.501 11:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:14.501 11:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:07:14.501 11:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:14.501 11:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:14.501 11:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:14.501 11:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:14.501 11:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:14.501 11:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:14.501 11:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:14.501 11:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:14.501 11:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:14.501 11:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:14.501 11:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:14.501 11:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:14.501 11:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:07:14.501 11:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:14.501 11:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:14.501 11:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:14.501 11:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:14.501 11:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:07:14.501 11:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:14.501 11:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:14.501 11:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:14.501 11:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.501 11:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.501 11:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.501 11:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:07:14.501 11:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.501 11:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:07:14.501 11:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:14.501 11:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:14.501 11:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:14.501 11:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:14.501 11:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:14.501 11:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:14.501 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:14.501 11:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:14.501 11:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:14.501 11:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:14.501 11:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:14.501 11:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:07:14.501 11:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:14.501 11:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:14.501 11:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:14.501 11:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:14.501 11:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:14.501 11:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:14.501 11:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:14.501 11:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:14.501 11:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:14.501 11:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:14.501 11:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:07:14.501 11:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:16.406 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:16.406 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:07:16.406 11:35:42 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:16.406 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:16.406 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:16.406 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:16.406 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:16.406 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:07:16.406 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:16.406 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:07:16.406 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:07:16.406 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:07:16.406 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:07:16.406 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:07:16.406 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:07:16.406 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:16.406 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:16.406 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:16.406 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:16.406 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:16.406 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:16.406 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:16.406 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:16.406 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:16.406 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:16.406 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:16.406 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:16.406 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:16.406 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:16.406 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:16.406 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:16.406 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:16.406 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:16.406 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:07:16.406 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:16.406 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:16.406 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:16.406 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:16.666 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:16.666 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:16.666 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:16.666 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:16.666 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:16.666 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:16.666 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:16.666 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:16.666 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:16.666 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:16.666 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:16.666 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:16.666 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:16.666 11:35:42 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:16.666 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:16.666 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:16.666 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:16.666 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:16.666 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:16.666 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:16.666 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:16.666 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:16.666 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:16.666 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:16.666 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:16.666 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:16.666 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:16.666 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:16.666 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:16.666 11:35:42 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:16.666 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:16.666 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:16.666 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:16.666 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:16.666 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:16.666 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:07:16.666 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:16.666 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:16.666 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:16.666 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:16.666 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:16.666 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:16.666 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:16.666 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:16.666 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:16.666 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:16.666 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:16.666 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:16.666 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:16.666 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:16.666 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:16.666 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:16.666 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:16.666 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:16.666 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:16.666 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:16.666 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:16.666 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:16.666 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:16.666 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:16.666 11:35:42 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:16.666 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:16.666 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:16.666 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.236 ms 00:07:16.666 00:07:16.666 --- 10.0.0.2 ping statistics --- 00:07:16.666 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:16.666 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:07:16.666 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:16.666 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:16.666 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:07:16.666 00:07:16.666 --- 10.0.0.1 ping statistics --- 00:07:16.666 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:16.666 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:07:16.666 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:16.667 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:07:16.667 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:16.667 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:16.667 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:16.667 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:16.667 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:07:16.667 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:16.667 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:16.667 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:07:16.667 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:16.667 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:16.667 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:16.667 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=2840421 00:07:16.667 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:16.667 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 2840421 00:07:16.667 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 2840421 ']' 00:07:16.667 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:16.667 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:16.667 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:16.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:16.667 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:16.667 11:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:16.925 [2024-11-18 11:35:42.552690] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:07:16.925 [2024-11-18 11:35:42.552831] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:16.925 [2024-11-18 11:35:42.705987] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:17.183 [2024-11-18 11:35:42.850199] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:17.183 [2024-11-18 11:35:42.850279] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:17.183 [2024-11-18 11:35:42.850305] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:17.183 [2024-11-18 11:35:42.850330] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:17.183 [2024-11-18 11:35:42.850349] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:17.183 [2024-11-18 11:35:42.853037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:17.183 [2024-11-18 11:35:42.853088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:17.183 [2024-11-18 11:35:42.853092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:17.749 11:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:17.749 11:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:07:17.749 11:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:17.749 11:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:17.749 11:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:17.749 11:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:17.749 11:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:07:17.749 11:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:18.007 [2024-11-18 11:35:43.785152] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:18.007 11:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:18.264 11:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:18.521 [2024-11-18 11:35:44.331218] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:18.521 11:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:18.779 11:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:07:19.037 Malloc0 00:07:19.295 11:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:19.553 Delay0 00:07:19.553 11:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:19.811 11:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:07:20.068 NULL1 00:07:20.068 11:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:07:20.326 11:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2840849 00:07:20.326 11:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:07:20.326 11:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2840849 00:07:20.326 11:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:20.584 11:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:20.842 11:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:07:20.842 11:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:07:21.099 true 00:07:21.100 11:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2840849 00:07:21.100 11:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:21.357 11:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:21.616 11:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:07:21.616 11:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:07:21.874 true 00:07:21.874 11:35:47 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2840849 00:07:21.874 11:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:22.132 11:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:22.389 11:35:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:07:22.389 11:35:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:07:22.650 true 00:07:22.650 11:35:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2840849 00:07:22.650 11:35:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:23.583 Read completed with error (sct=0, sc=11) 00:07:23.583 11:35:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:23.841 11:35:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:07:23.841 11:35:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:07:24.099 true 00:07:24.099 11:35:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 2840849 00:07:24.099 11:35:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:24.665 11:35:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:24.923 11:35:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:07:24.923 11:35:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:07:25.181 true 00:07:25.181 11:35:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2840849 00:07:25.181 11:35:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:25.439 11:35:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:25.698 11:35:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:07:25.699 11:35:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:07:25.957 true 00:07:25.957 11:35:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2840849 00:07:25.957 11:35:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:26.892 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:26.892 11:35:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:27.150 11:35:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:07:27.150 11:35:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:07:27.408 true 00:07:27.408 11:35:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2840849 00:07:27.408 11:35:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:27.666 11:35:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:27.924 11:35:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:07:27.924 11:35:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:07:28.182 true 00:07:28.182 11:35:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2840849 00:07:28.182 11:35:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:29.117 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:29.117 11:35:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:29.117 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:29.117 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:29.376 11:35:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:07:29.376 11:35:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:07:29.634 true 00:07:29.634 11:35:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2840849 00:07:29.634 11:35:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:29.892 11:35:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:30.150 11:35:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:07:30.150 11:35:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:07:30.408 true 00:07:30.408 11:35:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 
2840849 00:07:30.408 11:35:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:31.342 11:35:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:31.342 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:31.342 11:35:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:07:31.342 11:35:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:07:31.909 true 00:07:31.909 11:35:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2840849 00:07:31.909 11:35:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:31.909 11:35:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:32.167 11:35:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:07:32.167 11:35:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:07:32.424 true 00:07:32.424 11:35:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2840849 00:07:32.424 11:35:58 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:32.990 11:35:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:32.990 11:35:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:07:32.990 11:35:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:07:33.248 true 00:07:33.248 11:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2840849 00:07:33.248 11:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:34.625 11:36:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:34.625 11:36:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:07:34.625 11:36:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:07:34.883 true 00:07:34.883 11:36:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2840849 00:07:34.883 11:36:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:35.141 11:36:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:35.399 11:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:07:35.399 11:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:07:35.657 true 00:07:35.657 11:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2840849 00:07:35.657 11:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:36.222 11:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:36.222 11:36:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:07:36.222 11:36:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:07:36.479 true 00:07:36.737 11:36:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2840849 00:07:36.737 11:36:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:37.302 
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:37.560 11:36:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:37.560 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:37.560 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:37.818 11:36:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:07:37.818 11:36:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:07:38.076 true 00:07:38.076 11:36:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2840849 00:07:38.076 11:36:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:38.333 11:36:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:38.591 11:36:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:07:38.591 11:36:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:07:38.849 true 00:07:38.849 11:36:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2840849 00:07:38.849 11:36:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:39.786 11:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:39.786 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:39.786 11:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:07:39.786 11:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:07:40.072 true 00:07:40.072 11:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2840849 00:07:40.072 11:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:40.353 11:36:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:40.611 11:36:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:07:40.611 11:36:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:07:40.869 true 00:07:40.869 11:36:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2840849 00:07:40.869 11:36:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:41.128 11:36:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:41.386 11:36:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:07:41.386 11:36:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:07:41.644 true 00:07:41.644 11:36:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2840849 00:07:41.644 11:36:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:43.017 11:36:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:43.017 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:43.017 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:43.017 11:36:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:07:43.017 11:36:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:07:43.275 true 00:07:43.275 11:36:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2840849 00:07:43.275 11:36:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:43.532 11:36:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:43.789 11:36:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:07:43.789 11:36:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:07:44.047 true 00:07:44.047 11:36:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2840849 00:07:44.047 11:36:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:44.305 11:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:44.870 11:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:07:44.870 11:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:07:44.870 true 00:07:44.870 11:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2840849 00:07:44.870 11:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:07:45.804 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:45.804 11:36:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:46.062 11:36:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:07:46.062 11:36:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:07:46.321 true 00:07:46.321 11:36:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2840849 00:07:46.321 11:36:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:46.579 11:36:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:47.145 11:36:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:07:47.145 11:36:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:07:47.145 true 00:07:47.145 11:36:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2840849 00:07:47.145 11:36:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:48.078 11:36:13 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:48.336 11:36:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:07:48.336 11:36:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:07:48.595 true 00:07:48.595 11:36:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2840849 00:07:48.595 11:36:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:48.852 11:36:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:49.110 11:36:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:07:49.110 11:36:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:07:49.368 true 00:07:49.368 11:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2840849 00:07:49.368 11:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:49.625 11:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:49.894 11:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:07:49.894 11:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:07:50.151 true 00:07:50.151 11:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2840849 00:07:50.151 11:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:51.085 Initializing NVMe Controllers 00:07:51.085 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:51.085 Controller IO queue size 128, less than required. 00:07:51.085 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:51.086 Controller IO queue size 128, less than required. 00:07:51.086 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:51.086 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:51.086 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:07:51.086 Initialization complete. Launching workers. 
00:07:51.086 ========================================================
00:07:51.086 Latency(us)
00:07:51.086 Device Information : IOPS MiB/s Average min max
00:07:51.086 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 473.27 0.23 110941.09 2971.11 1013321.52
00:07:51.086 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 6411.03 3.13 19902.09 5044.25 482387.71
00:07:51.086 ========================================================
00:07:51.086 Total : 6884.30 3.36 26160.64 2971.11 1013321.52
00:07:51.086
00:07:51.086 11:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:51.343 11:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:07:51.343 11:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:07:51.601 true 00:07:51.601 11:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2840849 00:07:51.601 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2840849) - No such process 00:07:51.601 11:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2840849 00:07:51.601 11:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:51.859 11:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:52.117
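The "Total" row in the spdk_nvme_perf summary is consistent with an IOPS-weighted mean of the two per-namespace average latencies (that weighting is an assumption here, though the figures bear it out). A quick sanity check, with the numbers copied from the table:

```python
# Per-namespace results copied from the perf summary above (IOPS, avg latency in us).
nsid1_iops, nsid1_avg_us = 473.27, 110941.09
nsid2_iops, nsid2_avg_us = 6411.03, 19902.09

total_iops = nsid1_iops + nsid2_iops  # matches the Total row's 6884.30
# IOPS-weighted mean latency across both namespaces
weighted_avg = (nsid1_iops * nsid1_avg_us + nsid2_iops * nsid2_avg_us) / total_iops

print(round(total_iops, 2))   # 6884.3
print(round(weighted_avg, 1)) # ~26160.7, within rounding of the reported 26160.64
```

The large gap between the two averages (about 110.9 ms for NSID 1, the Delay0 namespace, versus about 19.9 ms for NSID 2) reflects the artificial 1,000,000 us latencies configured on the bdev_delay device earlier in the run.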
11:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:07:52.117 11:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:07:52.117 11:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:07:52.117 11:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:52.117 11:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:07:52.376 null0 00:07:52.376 11:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:52.376 11:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:52.376 11:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:07:52.634 null1 00:07:52.634 11:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:52.634 11:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:52.634 11:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:07:52.892 null2 00:07:52.892 11:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:52.892 11:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:52.892 11:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:07:53.150 null3 00:07:53.408 11:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:53.408 11:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:53.408 11:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:07:53.408 null4 00:07:53.665 11:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:53.665 11:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:53.665 11:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:07:53.922 null5 00:07:53.922 11:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:53.922 11:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:53.922 11:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:07:54.180 null6 00:07:54.180 11:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:54.180 11:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:54.180 11:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:07:54.438 null7 00:07:54.438 11:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:54.438 11:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:54.438 11:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:07:54.438 11:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:54.438 11:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:54.438 11:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:54.438 11:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:54.438 11:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:07:54.438 11:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:54.438 11:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:54.438 11:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:07:54.438 11:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:54.438 11:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:54.438 11:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:07:54.438 11:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.438 11:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:54.438 11:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:07:54.438 11:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:54.438 11:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:54.438 11:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.438 11:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:07:54.438 11:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:54.438 11:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:54.438 11:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:54.438 11:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:07:54.438 11:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:54.438 11:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.438 11:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:54.438 11:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:54.438 11:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:07:54.438 11:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:54.438 11:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:54.438 11:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:07:54.438 11:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:54.438 11:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.438 11:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:54.438 11:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:54.438 11:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:07:54.438 11:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:54.438 11:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:07:54.438 11:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:54.438 11:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:54.438 11:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.438 11:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:54.438 11:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:54.438 11:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:07:54.438 11:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:54.438 11:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:54.438 11:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:07:54.438 11:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:54.438 11:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.438 11:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:54.438 11:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:54.438 11:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:07:54.438 11:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:54.438 11:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:07:54.438 11:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:54.438 11:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:54.438 11:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.438 11:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:54.438 11:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:54.438 11:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:07:54.438 11:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:54.438 11:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:07:54.438 11:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:54.438 11:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:54.438 11:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2845661 2845662 2845663 2845666 2845668 2845670 2845672 2845674 00:07:54.438 11:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.438 11:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:54.696 11:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:54.696 11:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:54.696 11:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:54.696 11:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:54.696 11:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:54.696 11:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:54.696 11:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:54.696 11:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:54.954 11:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.954 11:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.954 11:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:54.954 11:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.954 11:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.954 11:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 
00:07:54.954 11:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.954 11:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.954 11:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:54.954 11:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.954 11:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.954 11:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:54.954 11:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.954 11:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.954 11:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:54.954 11:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.954 11:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.954 11:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:54.954 11:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.954 11:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.954 11:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:54.954 11:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.954 11:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.954 11:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:55.212 11:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:55.212 11:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:55.212 11:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:55.212 11:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:55.212 11:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 2 00:07:55.212 11:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:55.212 11:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:55.212 11:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:55.470 11:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.470 11:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.470 11:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:55.470 11:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.470 11:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.470 11:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:55.470 11:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.470 11:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.470 11:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:55.470 11:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.470 11:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.470 11:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.470 11:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.470 11:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:55.470 11:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:55.470 11:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.470 11:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.470 11:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:55.470 11:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.470 11:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.470 11:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:55.728 11:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.728 11:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.728 11:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:55.986 11:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:55.986 11:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:55.986 11:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:55.986 11:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:55.986 11:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:55.986 11:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:55.986 11:36:21 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:55.986 11:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:56.245 11:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.245 11:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.245 11:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:56.245 11:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.245 11:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.245 11:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:56.245 11:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.245 11:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.245 11:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:56.245 11:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:07:56.245 11:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:56.245 11:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:07:56.245 11:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:56.245 11:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:56.245 11:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:07:56.245 11:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:56.245 11:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:56.245 11:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:07:56.245 11:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:56.245 11:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:56.245 11:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:07:56.245 11:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:56.245 11:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:56.245 11:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:07:56.503 11:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:07:56.503 11:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:07:56.503 11:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:07:56.503 11:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:07:56.503 11:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:07:56.503 11:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:56.503 11:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:56.503 11:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:07:56.760 11:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:56.760 11:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:56.760 11:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:07:56.760 11:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:56.760 11:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:56.760 11:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:07:56.760 11:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:56.760 11:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:56.760 11:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:07:56.760 11:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:56.760 11:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:56.760 11:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:07:56.760 11:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:56.760 11:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:56.760 11:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:07:56.760 11:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:56.761 11:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:56.761 11:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:07:56.761 11:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:56.761 11:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:56.761 11:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:07:56.761 11:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:56.761 11:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:56.761 11:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:07:57.018 11:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:07:57.018 11:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:07:57.018 11:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:07:57.018 11:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:07:57.018 11:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:57.018 11:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:07:57.018 11:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:57.018 11:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:07:57.276 11:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:57.276 11:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:57.276 11:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:07:57.276 11:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:57.276 11:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:57.276 11:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:07:57.276 11:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:57.276 11:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:57.276 11:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:07:57.276 11:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:57.276 11:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:57.276 11:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:07:57.276 11:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:57.276 11:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:57.276 11:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:07:57.276 11:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:57.276 11:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:57.276 11:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:07:57.276 11:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:57.276 11:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:57.276 11:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:07:57.276 11:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:57.276 11:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:57.276 11:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:07:57.842 11:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:07:57.842 11:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:07:57.842 11:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:07:57.842 11:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:07:57.842 11:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:07:57.842 11:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:07:57.842 11:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:58.100 11:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:58.100 11:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:58.100 11:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:58.100 11:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:07:58.100 11:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:58.100 11:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:58.100 11:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:07:58.100 11:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:58.100 11:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:58.100 11:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:07:58.100 11:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:58.100 11:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:58.100 11:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:07:58.100 11:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:58.100 11:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:58.100 11:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:07:58.100 11:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:58.100 11:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:58.100 11:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:07:58.100 11:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:58.100 11:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:58.100 11:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:07:58.100 11:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:58.100 11:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:58.100 11:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:07:58.359 11:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:07:58.359 11:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:07:58.359 11:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:07:58.359 11:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:58.359 11:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:07:58.359 11:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:07:58.359 11:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:58.359 11:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:07:58.617 11:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:58.617 11:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:58.617 11:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:07:58.617 11:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:58.617 11:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:58.617 11:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:07:58.617 11:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:58.617 11:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:58.617 11:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:07:58.617 11:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:58.617 11:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:58.617 11:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:07:58.617 11:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:58.617 11:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:58.617 11:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:58.618 11:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:07:58.618 11:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:58.618 11:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:07:58.618 11:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:58.618 11:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:58.618 11:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:07:58.618 11:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:58.618 11:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:58.618 11:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:07:58.876 11:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:07:58.876 11:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:07:58.876 11:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:07:58.876 11:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:07:58.876 11:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:07:58.876 11:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:58.876 11:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:58.876 11:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:07:59.135 11:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:59.135 11:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:59.135 11:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:07:59.135 11:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:59.135 11:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:59.135 11:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:07:59.135 11:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:59.135 11:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:59.135 11:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:07:59.135 11:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:59.135 11:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:59.135 11:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:59.135 11:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:59.135 11:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:07:59.135 11:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:07:59.135 11:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:59.135 11:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:59.135 11:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:07:59.135 11:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:59.135 11:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:59.135 11:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:07:59.135 11:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:59.135 11:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:59.135 11:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:07:59.393 11:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:07:59.393 11:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:07:59.393 11:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:07:59.393 11:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:59.393 11:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:07:59.651 11:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:07:59.651 11:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:07:59.651 11:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:59.909 11:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:59.909 11:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:59.909 11:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:07:59.909 11:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:59.909 11:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:59.909 11:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:07:59.909 11:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:59.909 11:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:59.909 11:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:07:59.909 11:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:59.909 11:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:59.909 11:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:07:59.909 11:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:59.909 11:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:59.909 11:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:07:59.909 11:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:59.909 11:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:59.909 11:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:07:59.909 11:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:59.909 11:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:59.909 11:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:07:59.909 11:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:59.909 11:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:59.909 11:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:00.167 11:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:00.167 11:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:00.167 11:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:00.167 11:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:00.167 11:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:00.167 11:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:00.167 11:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:00.167 11:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:00.426 11:36:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:00.426 11:36:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:00.426 11:36:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:00.426 11:36:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:00.426 11:36:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:00.426 11:36:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:00.426 11:36:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:00.426 11:36:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:00.426 11:36:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:00.426 11:36:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:00.426 11:36:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:00.426 11:36:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:00.426 11:36:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:00.426 11:36:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress --
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.426 11:36:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:00.426 11:36:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.426 11:36:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:08:00.426 11:36:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:08:00.426 11:36:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:00.426 11:36:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:08:00.426 11:36:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:00.426 11:36:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:08:00.426 11:36:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:00.426 11:36:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:00.426 rmmod nvme_tcp 00:08:00.426 rmmod nvme_fabrics 00:08:00.426 rmmod nvme_keyring 00:08:00.426 11:36:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:00.426 11:36:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:08:00.426 11:36:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:08:00.426 11:36:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 2840421 ']' 00:08:00.426 11:36:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 2840421 00:08:00.426 11:36:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # 
'[' -z 2840421 ']' 00:08:00.426 11:36:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 2840421 00:08:00.426 11:36:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:08:00.426 11:36:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:00.426 11:36:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2840421 00:08:00.426 11:36:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:00.426 11:36:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:00.426 11:36:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2840421' 00:08:00.426 killing process with pid 2840421 00:08:00.426 11:36:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 2840421 00:08:00.426 11:36:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 2840421 00:08:01.832 11:36:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:01.832 11:36:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:01.832 11:36:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:01.832 11:36:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:08:01.832 11:36:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:08:01.832 11:36:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:01.832 11:36:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@791 -- # iptables-restore 00:08:01.832 11:36:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:01.832 11:36:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:01.832 11:36:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:01.832 11:36:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:01.832 11:36:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:03.763 11:36:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:03.763 00:08:03.763 real 0m49.477s 00:08:03.763 user 3m46.711s 00:08:03.763 sys 0m16.426s 00:08:03.763 11:36:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:03.763 11:36:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:03.763 ************************************ 00:08:03.763 END TEST nvmf_ns_hotplug_stress 00:08:03.763 ************************************ 00:08:03.763 11:36:29 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:03.763 11:36:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:03.763 11:36:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:03.763 11:36:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:03.763 ************************************ 00:08:03.763 START TEST nvmf_delete_subsystem 00:08:03.763 ************************************ 00:08:03.763 
11:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:03.763 * Looking for test storage... 00:08:03.763 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:03.763 11:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:03.763 11:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:08:03.763 11:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:04.023 11:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:04.023 11:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:04.023 11:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:04.023 11:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:04.023 11:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:08:04.023 11:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:08:04.023 11:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:08:04.023 11:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:08:04.023 11:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:08:04.023 11:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:08:04.023 11:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:08:04.023 11:36:29 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:04.023 11:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:08:04.023 11:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:08:04.023 11:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:04.023 11:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:04.023 11:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:08:04.023 11:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:08:04.023 11:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:04.023 11:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:08:04.023 11:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:08:04.023 11:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:08:04.023 11:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:08:04.023 11:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:04.023 11:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:08:04.023 11:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:08:04.023 11:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:04.023 11:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:04.023 11:36:29 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:08:04.023 11:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:04.023 11:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:04.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:04.023 --rc genhtml_branch_coverage=1 00:08:04.023 --rc genhtml_function_coverage=1 00:08:04.023 --rc genhtml_legend=1 00:08:04.023 --rc geninfo_all_blocks=1 00:08:04.023 --rc geninfo_unexecuted_blocks=1 00:08:04.023 00:08:04.023 ' 00:08:04.024 11:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:04.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:04.024 --rc genhtml_branch_coverage=1 00:08:04.024 --rc genhtml_function_coverage=1 00:08:04.024 --rc genhtml_legend=1 00:08:04.024 --rc geninfo_all_blocks=1 00:08:04.024 --rc geninfo_unexecuted_blocks=1 00:08:04.024 00:08:04.024 ' 00:08:04.024 11:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:04.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:04.024 --rc genhtml_branch_coverage=1 00:08:04.024 --rc genhtml_function_coverage=1 00:08:04.024 --rc genhtml_legend=1 00:08:04.024 --rc geninfo_all_blocks=1 00:08:04.024 --rc geninfo_unexecuted_blocks=1 00:08:04.024 00:08:04.024 ' 00:08:04.024 11:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:04.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:04.024 --rc genhtml_branch_coverage=1 00:08:04.024 --rc genhtml_function_coverage=1 00:08:04.024 --rc genhtml_legend=1 00:08:04.024 --rc geninfo_all_blocks=1 00:08:04.024 --rc geninfo_unexecuted_blocks=1 00:08:04.024 00:08:04.024 ' 
00:08:04.024 11:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:04.024 11:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:08:04.024 11:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:04.024 11:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:04.024 11:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:04.024 11:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:04.024 11:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:04.024 11:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:04.024 11:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:04.024 11:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:04.024 11:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:04.024 11:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:04.024 11:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:04.024 11:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:04.024 11:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:04.024 11:36:29 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:04.024 11:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:04.024 11:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:04.024 11:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:04.024 11:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:08:04.024 11:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:04.024 11:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:04.024 11:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:04.024 11:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.024 11:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.024 11:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.024 11:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:08:04.024 11:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.024 11:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:08:04.024 11:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:04.024 11:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:04.024 11:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:04.024 11:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:04.024 11:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:04.024 11:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:04.024 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:04.024 11:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:04.024 11:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:04.024 11:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:04.024 11:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # 
nvmftestinit 00:08:04.024 11:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:04.024 11:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:04.024 11:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:04.024 11:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:04.024 11:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:04.024 11:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:04.024 11:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:04.024 11:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:04.024 11:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:04.024 11:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:04.024 11:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:08:04.024 11:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:05.930 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:05.930 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:08:05.930 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:05.930 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:05.930 11:36:31 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:05.930 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:05.930 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:05.930 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:08:05.930 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:05.930 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:08:05.930 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:08:05.930 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:08:05.930 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:08:05.930 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:08:05.930 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:08:05.930 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:05.930 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:05.930 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:05.930 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:05.930 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:05.930 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:05.930 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:05.930 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:05.930 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:05.930 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:05.930 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:05.930 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:05.930 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:05.930 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:05.930 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:05.930 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:05.930 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:05.930 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:05.930 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:05.930 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:05.930 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:05.930 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 
-- # [[ ice == unknown ]] 00:08:05.930 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:05.931 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:05.931 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:05.931 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:05.931 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:05.931 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:05.931 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:05.931 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:05.931 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:05.931 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:05.931 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:05.931 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:05.931 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:05.931 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:05.931 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:05.931 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:05.931 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:05.931 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:05.931 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:05.931 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:05.931 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:05.931 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:05.931 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:05.931 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:05.931 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:05.931 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:05.931 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:05.931 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:05.931 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:05.931 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:05.931 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:05.931 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:05.931 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 
0000:0a:00.1: cvl_0_1' 00:08:05.931 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:05.931 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:05.931 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:05.931 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:08:05.931 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:05.931 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:05.931 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:05.931 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:05.931 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:05.931 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:05.931 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:05.931 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:05.931 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:05.931 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:05.931 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:05.931 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:05.931 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:05.931 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:05.931 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:05.931 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:05.931 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:05.931 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:06.190 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:06.190 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:06.190 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:06.190 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:06.190 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:06.190 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:06.190 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:06.190 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:06.190 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:06.190 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.224 ms 00:08:06.190 00:08:06.190 --- 10.0.0.2 ping statistics --- 00:08:06.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:06.190 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:08:06.190 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:06.190 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:06.190 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.166 ms 00:08:06.190 00:08:06.190 --- 10.0.0.1 ping statistics --- 00:08:06.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:06.190 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:08:06.190 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:06.190 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:08:06.190 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:06.190 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:06.190 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:06.190 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:06.190 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:06.190 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:06.190 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:06.190 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:08:06.190 11:36:31 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:06.190 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:06.190 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:06.190 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=2848695 00:08:06.190 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:08:06.190 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 2848695 00:08:06.190 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 2848695 ']' 00:08:06.190 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:06.190 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:06.190 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:06.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:06.190 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:06.190 11:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:06.190 [2024-11-18 11:36:32.027856] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:08:06.190 [2024-11-18 11:36:32.027994] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:06.449 [2024-11-18 11:36:32.190598] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:06.449 [2024-11-18 11:36:32.329573] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:06.449 [2024-11-18 11:36:32.329672] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:06.449 [2024-11-18 11:36:32.329699] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:06.449 [2024-11-18 11:36:32.329723] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:06.449 [2024-11-18 11:36:32.329743] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:06.449 [2024-11-18 11:36:32.332399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.449 [2024-11-18 11:36:32.332401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:07.386 11:36:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:07.386 11:36:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:08:07.386 11:36:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:07.386 11:36:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:07.386 11:36:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:07.386 11:36:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:07.386 11:36:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:07.386 11:36:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.386 11:36:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:07.386 [2024-11-18 11:36:32.989470] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:07.386 11:36:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.386 11:36:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:07.386 11:36:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.386 11:36:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- common/autotest_common.sh@10 -- # set +x 00:08:07.386 11:36:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.386 11:36:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:07.386 11:36:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.386 11:36:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:07.386 [2024-11-18 11:36:33.007239] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:07.386 11:36:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.386 11:36:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:07.386 11:36:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.386 11:36:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:07.386 NULL1 00:08:07.386 11:36:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.386 11:36:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:07.386 11:36:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.386 11:36:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:07.386 Delay0 00:08:07.386 11:36:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.386 11:36:33 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:07.386 11:36:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.386 11:36:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:07.386 11:36:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.386 11:36:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2848842 00:08:07.386 11:36:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:07.386 11:36:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:08:07.386 [2024-11-18 11:36:33.141578] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
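The target that `spdk_nvme_perf` connects to above was assembled by a short RPC sequence, collected here in one place. This is a sketch: `rpc.py` needs the running `nvmf_tgt` (here, inside the netns), so the dry run below only prints each call; the `RPC` path is an assumption about an SPDK checkout.

```shell
RPC="scripts/rpc.py"   # path inside an SPDK source tree (assumption)
run() { echo "+ $*"; }  # dry run; replace with direct execution against a live target

run "$RPC" nvmf_create_transport -t tcp -o -u 8192
run "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
run "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
run "$RPC" bdev_null_create NULL1 1000 512            # null backing bdev: size in MiB, 512 B blocks
run "$RPC" bdev_delay_create -b NULL1 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000       # injected latencies (microseconds)
run "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
```

The `Delay0` bdev is what makes the test meaningful: with ~1 s of injected latency per I/O, plenty of requests are still in flight when `nvmf_delete_subsystem` runs, which is exactly why the log below fills with aborted completions (`sct=0, sc=8`) rather than a quiet teardown.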
00:08:09.289 11:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:09.289 11:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.289 11:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 starting I/O failed: -6 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 starting I/O failed: -6 00:08:09.550 Write completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 starting I/O failed: -6 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Write completed with error (sct=0, sc=8) 00:08:09.550 Write completed with error (sct=0, sc=8) 00:08:09.550 Write completed with error (sct=0, sc=8) 00:08:09.550 starting I/O failed: -6 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 starting I/O failed: -6 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Write completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 starting I/O failed: -6 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error 
(sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 starting I/O failed: -6 00:08:09.550 Write completed with error (sct=0, sc=8) 00:08:09.550 Write completed with error (sct=0, sc=8) 00:08:09.550 Write completed with error (sct=0, sc=8) 00:08:09.550 Write completed with error (sct=0, sc=8) 00:08:09.550 starting I/O failed: -6 00:08:09.550 Write completed with error (sct=0, sc=8) 00:08:09.550 Write completed with error (sct=0, sc=8) 00:08:09.550 Write completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 starting I/O failed: -6 00:08:09.550 Write completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 starting I/O failed: -6 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 starting I/O failed: -6 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 [2024-11-18 11:36:35.402287] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016600 is same with the state(6) to be set 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Write completed with error (sct=0, sc=8) 00:08:09.550 Write completed with error (sct=0, sc=8) 00:08:09.550 Write completed with error (sct=0, sc=8) 00:08:09.550 Write completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Write 
completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Write completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Write completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Write completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Write completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Write completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Write completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Write completed with error (sct=0, sc=8) 00:08:09.550 Write completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Write completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, 
sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Write completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Write completed with error (sct=0, sc=8) 00:08:09.550 starting I/O failed: -6 00:08:09.550 Write completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Write completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 starting I/O failed: -6 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Write completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 starting I/O failed: -6 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Write completed with error (sct=0, sc=8) 00:08:09.550 starting I/O failed: -6 00:08:09.550 Write completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 starting I/O failed: -6 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Write completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 starting I/O failed: -6 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Write completed with error (sct=0, sc=8) 00:08:09.550 Write completed with error (sct=0, sc=8) 00:08:09.550 starting I/O failed: -6 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Read completed 
with error (sct=0, sc=8) 00:08:09.550 Write completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 starting I/O failed: -6 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 starting I/O failed: -6 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 starting I/O failed: -6 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Write completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 starting I/O failed: -6 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Write completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 starting I/O failed: -6 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 [2024-11-18 11:36:35.404211] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020380 is same with the state(6) to be set 00:08:09.550 Write completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Write completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Write completed with error (sct=0, sc=8) 00:08:09.550 Write completed with error (sct=0, sc=8) 00:08:09.550 Write completed with error (sct=0, sc=8) 00:08:09.550 
Write completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Write completed with error (sct=0, sc=8) 00:08:09.550 Write completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Write completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Write completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Write completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Write completed with error (sct=0, sc=8) 00:08:09.550 Write completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Write completed with error (sct=0, sc=8) 00:08:09.550 Write completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Write completed with error (sct=0, sc=8) 00:08:09.550 Write completed with error 
(sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Write completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 Read completed with error (sct=0, sc=8) 00:08:09.550 [2024-11-18 11:36:35.405260] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001fe80 is same with the state(6) to be set 00:08:10.484 [2024-11-18 11:36:36.364870] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000015c00 is same with the state(6) to be set 00:08:10.742 Read completed with error (sct=0, sc=8) 00:08:10.742 Read completed with error (sct=0, sc=8) 00:08:10.742 Read completed with error (sct=0, sc=8) 00:08:10.742 Read completed with error (sct=0, sc=8) 00:08:10.742 Write completed with error (sct=0, sc=8) 00:08:10.742 Read completed with error (sct=0, sc=8) 00:08:10.742 Write completed with error (sct=0, sc=8) 00:08:10.742 Read completed with error (sct=0, sc=8) 00:08:10.742 Read completed with error (sct=0, sc=8) 00:08:10.742 Read completed with error (sct=0, sc=8) 00:08:10.742 Read completed with error (sct=0, sc=8) 00:08:10.742 Read completed with error (sct=0, sc=8) 00:08:10.742 Write completed with error (sct=0, sc=8) 00:08:10.742 Read completed with error (sct=0, sc=8) 00:08:10.742 Write completed with error (sct=0, sc=8) 00:08:10.742 Read completed with error (sct=0, sc=8) 00:08:10.742 Read completed with error (sct=0, sc=8) 00:08:10.742 Read completed with error (sct=0, sc=8) 00:08:10.742 Read completed with error (sct=0, sc=8) 00:08:10.742 Read completed with error 
(sct=0, sc=8) 00:08:10.742 Read completed with error (sct=0, sc=8) 00:08:10.742 Read completed with error (sct=0, sc=8) 00:08:10.742 Read completed with error (sct=0, sc=8) 00:08:10.742 [2024-11-18 11:36:36.405107] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016380 is same with the state(6) to be set 00:08:10.742 Read completed with error (sct=0, sc=8) 00:08:10.742 Read completed with error (sct=0, sc=8) 00:08:10.742 Read completed with error (sct=0, sc=8) 00:08:10.742 Read completed with error (sct=0, sc=8) 00:08:10.742 Read completed with error (sct=0, sc=8) 00:08:10.742 Read completed with error (sct=0, sc=8) 00:08:10.742 Read completed with error (sct=0, sc=8) 00:08:10.742 Write completed with error (sct=0, sc=8) 00:08:10.742 Read completed with error (sct=0, sc=8) 00:08:10.742 Write completed with error (sct=0, sc=8) 00:08:10.742 Read completed with error (sct=0, sc=8) 00:08:10.742 Read completed with error (sct=0, sc=8) 00:08:10.742 Read completed with error (sct=0, sc=8) 00:08:10.742 Read completed with error (sct=0, sc=8) 00:08:10.742 Write completed with error (sct=0, sc=8) 00:08:10.742 Read completed with error (sct=0, sc=8) 00:08:10.742 Read completed with error (sct=0, sc=8) 00:08:10.742 Read completed with error (sct=0, sc=8) 00:08:10.742 Read completed with error (sct=0, sc=8) 00:08:10.742 Read completed with error (sct=0, sc=8) 00:08:10.742 Write completed with error (sct=0, sc=8) 00:08:10.742 Read completed with error (sct=0, sc=8) 00:08:10.742 Read completed with error (sct=0, sc=8) 00:08:10.742 [2024-11-18 11:36:36.406000] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016880 is same with the state(6) to be set 00:08:10.742 Read completed with error (sct=0, sc=8) 00:08:10.742 Read completed with error (sct=0, sc=8) 00:08:10.742 Read completed with error (sct=0, sc=8) 00:08:10.742 Read completed with error (sct=0, sc=8) 00:08:10.742 Read completed with error 
(sct=0, sc=8) 00:08:10.742 Read completed with error (sct=0, sc=8) 00:08:10.742 Write completed with error (sct=0, sc=8) 00:08:10.742 Read completed with error (sct=0, sc=8) 00:08:10.742 Write completed with error (sct=0, sc=8) 00:08:10.742 Read completed with error (sct=0, sc=8) 00:08:10.742 Read completed with error (sct=0, sc=8) 00:08:10.742 Write completed with error (sct=0, sc=8) 00:08:10.742 Read completed with error (sct=0, sc=8) 00:08:10.742 Read completed with error (sct=0, sc=8) 00:08:10.742 Read completed with error (sct=0, sc=8) 00:08:10.742 Write completed with error (sct=0, sc=8) 00:08:10.742 Write completed with error (sct=0, sc=8) 00:08:10.742 Read completed with error (sct=0, sc=8) 00:08:10.742 Write completed with error (sct=0, sc=8) 00:08:10.742 Read completed with error (sct=0, sc=8) 00:08:10.742 Read completed with error (sct=0, sc=8) 00:08:10.742 Write completed with error (sct=0, sc=8) 00:08:10.742 Read completed with error (sct=0, sc=8) 00:08:10.742 Write completed with error (sct=0, sc=8) 00:08:10.742 Write completed with error (sct=0, sc=8) 00:08:10.742 Read completed with error (sct=0, sc=8) 00:08:10.742 Read completed with error (sct=0, sc=8) 00:08:10.742 Read completed with error (sct=0, sc=8) 00:08:10.742 [2024-11-18 11:36:36.407582] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020600 is same with the state(6) to be set 00:08:10.742 Write completed with error (sct=0, sc=8) 00:08:10.742 Read completed with error (sct=0, sc=8) 00:08:10.742 Read completed with error (sct=0, sc=8) 00:08:10.742 Write completed with error (sct=0, sc=8) 00:08:10.742 Write completed with error (sct=0, sc=8) 00:08:10.742 Read completed with error (sct=0, sc=8) 00:08:10.742 Read completed with error (sct=0, sc=8) 00:08:10.742 Read completed with error (sct=0, sc=8) 00:08:10.742 Read completed with error (sct=0, sc=8) 00:08:10.742 Read completed with error (sct=0, sc=8) 00:08:10.742 Write completed with error (sct=0, 
sc=8) 00:08:10.742 Write completed with error (sct=0, sc=8) 00:08:10.742 Read completed with error (sct=0, sc=8) 00:08:10.742 Read completed with error (sct=0, sc=8) 00:08:10.742 Write completed with error (sct=0, sc=8) 00:08:10.742 Read completed with error (sct=0, sc=8) 00:08:10.742 Read completed with error (sct=0, sc=8) 00:08:10.742 Read completed with error (sct=0, sc=8) 00:08:10.742 Read completed with error (sct=0, sc=8) 00:08:10.742 Read completed with error (sct=0, sc=8) 00:08:10.742 Write completed with error (sct=0, sc=8) 00:08:10.742 Read completed with error (sct=0, sc=8) 00:08:10.742 Read completed with error (sct=0, sc=8) 00:08:10.742 Read completed with error (sct=0, sc=8) 00:08:10.742 Read completed with error (sct=0, sc=8) 00:08:10.742 Read completed with error (sct=0, sc=8) 00:08:10.742 Read completed with error (sct=0, sc=8) 00:08:10.742 Write completed with error (sct=0, sc=8) 00:08:10.742 [2024-11-18 11:36:36.407982] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020100 is same with the state(6) to be set 00:08:10.742 11:36:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.742 11:36:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:08:10.742 11:36:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2848842 00:08:10.742 11:36:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:08:10.742 Initializing NVMe Controllers 00:08:10.742 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:10.742 Controller IO queue size 128, less than required. 00:08:10.742 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:08:10.742 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:08:10.742 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:08:10.742 Initialization complete. Launching workers.
00:08:10.742 ========================================================
00:08:10.742 Latency(us)
00:08:10.743 Device Information : IOPS MiB/s Average min max
00:08:10.743 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 166.46 0.08 906059.61 947.67 1014601.80
00:08:10.743 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 176.37 0.09 883982.87 1103.23 1017612.64
00:08:10.743 ========================================================
00:08:10.743 Total : 342.83 0.17 894702.21 947.67 1017612.64
00:08:10.743
00:08:10.743 [2024-11-18 11:36:36.412851] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000015c00 (9): Bad file descriptor
00:08:10.743 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:08:11.309 11:36:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:08:11.309 11:36:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2848842
00:08:11.309 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2848842) - No such process
00:08:11.309 11:36:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2848842
00:08:11.309 11:36:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0
00:08:11.309 11:36:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2848842
00:08:11.309 11:36:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait
00:08:11.309
11:36:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:11.309 11:36:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:08:11.309 11:36:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:11.309 11:36:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 2848842 00:08:11.309 11:36:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:08:11.309 11:36:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:11.309 11:36:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:11.309 11:36:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:11.309 11:36:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:11.309 11:36:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.309 11:36:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:11.309 11:36:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.309 11:36:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:11.309 11:36:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.309 11:36:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:11.309 [2024-11-18 
11:36:36.931855] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:11.309 11:36:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.309 11:36:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:11.309 11:36:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.309 11:36:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:11.309 11:36:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.309 11:36:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2849259 00:08:11.309 11:36:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:11.309 11:36:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:08:11.309 11:36:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2849259 00:08:11.309 11:36:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:11.309 [2024-11-18 11:36:37.056102] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:08:11.568 11:36:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:11.568 11:36:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2849259 00:08:11.568 11:36:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:12.133 11:36:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:12.133 11:36:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2849259 00:08:12.133 11:36:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:12.698 11:36:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:12.699 11:36:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2849259 00:08:12.699 11:36:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:13.264 11:36:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:13.264 11:36:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2849259 00:08:13.264 11:36:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:13.829 11:36:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:13.829 11:36:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2849259 00:08:13.829 11:36:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:14.087 11:36:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:14.087 11:36:39 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2849259
00:08:14.087 11:36:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:08:14.651 Initializing NVMe Controllers
00:08:14.651 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:08:14.651 Controller IO queue size 128, less than required.
00:08:14.651 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:08:14.651 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:08:14.651 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:08:14.651 Initialization complete. Launching workers.
00:08:14.651 ========================================================
00:08:14.651 Latency(us)
00:08:14.651 Device Information : IOPS MiB/s Average min max
00:08:14.651 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1005610.24 1000219.14 1044016.11
00:08:14.651 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005953.08 1000241.30 1015966.11
00:08:14.651 ========================================================
00:08:14.651 Total : 256.00 0.12 1005781.66 1000219.14 1044016.11
00:08:14.651
00:08:14.651 11:36:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:08:14.651 11:36:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2849259
00:08:14.651 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2849259) - No such process
00:08:14.651 11:36:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2849259
00:08:14.651 11:36:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap -
SIGINT SIGTERM EXIT 00:08:14.651 11:36:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:08:14.651 11:36:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:14.651 11:36:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:08:14.651 11:36:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:14.651 11:36:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:08:14.651 11:36:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:14.651 11:36:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:14.651 rmmod nvme_tcp 00:08:14.651 rmmod nvme_fabrics 00:08:14.651 rmmod nvme_keyring 00:08:14.651 11:36:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:14.651 11:36:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:08:14.651 11:36:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:08:14.651 11:36:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 2848695 ']' 00:08:14.651 11:36:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 2848695 00:08:14.651 11:36:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 2848695 ']' 00:08:14.651 11:36:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 2848695 00:08:14.651 11:36:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:08:14.651 11:36:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:14.651 11:36:40 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2848695 00:08:14.909 11:36:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:14.909 11:36:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:14.909 11:36:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2848695' 00:08:14.909 killing process with pid 2848695 00:08:14.909 11:36:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 2848695 00:08:14.909 11:36:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 2848695 00:08:15.843 11:36:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:15.843 11:36:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:15.843 11:36:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:15.843 11:36:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:08:15.843 11:36:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:08:15.843 11:36:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:15.843 11:36:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:08:15.843 11:36:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:15.843 11:36:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:15.843 11:36:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:08:15.843 11:36:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:15.843 11:36:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:18.377 11:36:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:18.377 00:08:18.377 real 0m14.227s 00:08:18.377 user 0m31.147s 00:08:18.377 sys 0m3.279s 00:08:18.377 11:36:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:18.377 11:36:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:18.377 ************************************ 00:08:18.377 END TEST nvmf_delete_subsystem 00:08:18.377 ************************************ 00:08:18.377 11:36:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:18.377 11:36:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:18.377 11:36:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:18.377 11:36:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:18.377 ************************************ 00:08:18.377 START TEST nvmf_host_management 00:08:18.377 ************************************ 00:08:18.377 11:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:18.377 * Looking for test storage... 
00:08:18.377 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:18.377 11:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:18.377 11:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:08:18.377 11:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:18.377 11:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:18.377 11:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:18.377 11:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:18.377 11:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:18.377 11:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:08:18.377 11:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:08:18.377 11:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:08:18.377 11:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:08:18.377 11:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:08:18.377 11:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:08:18.377 11:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:08:18.377 11:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:18.377 11:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:08:18.377 11:36:43 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:08:18.377 11:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:18.377 11:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:18.377 11:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:08:18.377 11:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:08:18.377 11:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:18.377 11:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:08:18.377 11:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:08:18.377 11:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:08:18.377 11:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:08:18.377 11:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:18.377 11:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:08:18.377 11:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:08:18.377 11:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:18.377 11:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:18.377 11:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:08:18.378 11:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:18.378 11:36:43 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:18.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.378 --rc genhtml_branch_coverage=1 00:08:18.378 --rc genhtml_function_coverage=1 00:08:18.378 --rc genhtml_legend=1 00:08:18.378 --rc geninfo_all_blocks=1 00:08:18.378 --rc geninfo_unexecuted_blocks=1 00:08:18.378 00:08:18.378 ' 00:08:18.378 11:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:18.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.378 --rc genhtml_branch_coverage=1 00:08:18.378 --rc genhtml_function_coverage=1 00:08:18.378 --rc genhtml_legend=1 00:08:18.378 --rc geninfo_all_blocks=1 00:08:18.378 --rc geninfo_unexecuted_blocks=1 00:08:18.378 00:08:18.378 ' 00:08:18.378 11:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:18.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.378 --rc genhtml_branch_coverage=1 00:08:18.378 --rc genhtml_function_coverage=1 00:08:18.378 --rc genhtml_legend=1 00:08:18.378 --rc geninfo_all_blocks=1 00:08:18.378 --rc geninfo_unexecuted_blocks=1 00:08:18.378 00:08:18.378 ' 00:08:18.378 11:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:18.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.378 --rc genhtml_branch_coverage=1 00:08:18.378 --rc genhtml_function_coverage=1 00:08:18.378 --rc genhtml_legend=1 00:08:18.378 --rc geninfo_all_blocks=1 00:08:18.378 --rc geninfo_unexecuted_blocks=1 00:08:18.378 00:08:18.378 ' 00:08:18.378 11:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:18.378 11:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 
00:08:18.378 11:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:18.378 11:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:18.378 11:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:18.378 11:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:18.378 11:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:18.378 11:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:18.378 11:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:18.378 11:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:18.378 11:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:18.378 11:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:18.378 11:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:18.378 11:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:18.378 11:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:18.378 11:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:18.378 11:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:18.378 11:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:18.378 11:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:18.378 11:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:08:18.378 11:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:18.378 11:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:18.378 11:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:18.378 11:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.378 11:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.378 11:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.378 11:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:18.378 11:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.378 11:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:08:18.378 11:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:18.378 11:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:18.378 11:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:18.378 11:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:18.378 11:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:18.378 11:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:18.378 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:18.378 11:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:18.378 11:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:18.378 11:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:18.378 11:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:08:18.378 11:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:18.378 11:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:18.378 11:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:18.378 11:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:18.378 11:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:18.378 11:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:18.378 11:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:18.378 11:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:18.378 11:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:18.378 11:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:18.378 11:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:18.378 11:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:18.378 11:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:08:18.378 11:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:20.280 11:36:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:20.280 11:36:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:08:20.280 11:36:45 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:20.280 11:36:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:20.280 11:36:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:20.280 11:36:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:20.280 11:36:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:20.280 11:36:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:08:20.280 11:36:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:20.280 11:36:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:08:20.280 11:36:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:08:20.280 11:36:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:08:20.280 11:36:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:08:20.280 11:36:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:08:20.280 11:36:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:08:20.280 11:36:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:20.280 11:36:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:20.280 11:36:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:20.281 11:36:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:20.281 11:36:45 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:20.281 11:36:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:20.281 11:36:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:20.281 11:36:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:20.281 11:36:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:20.281 11:36:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:20.281 11:36:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:20.281 11:36:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:20.281 11:36:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:20.281 11:36:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:20.281 11:36:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:20.281 11:36:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:20.281 11:36:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:20.281 11:36:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:20.281 11:36:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:20.281 11:36:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:20.281 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:20.281 11:36:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:20.281 11:36:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:20.281 11:36:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:20.281 11:36:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:20.281 11:36:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:20.281 11:36:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:20.281 11:36:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:20.281 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:20.281 11:36:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:20.281 11:36:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:20.281 11:36:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:20.281 11:36:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:20.281 11:36:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:20.281 11:36:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:20.281 11:36:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:20.281 11:36:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:20.281 11:36:45 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:20.281 11:36:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:20.281 11:36:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:20.281 11:36:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:20.281 11:36:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:20.281 11:36:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:20.281 11:36:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:20.281 11:36:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:20.281 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:20.281 11:36:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:20.281 11:36:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:20.281 11:36:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:20.281 11:36:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:20.281 11:36:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:20.281 11:36:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:20.281 11:36:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:20.281 11:36:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:20.281 11:36:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:20.281 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:20.281 11:36:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:20.281 11:36:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:20.281 11:36:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:08:20.281 11:36:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:20.281 11:36:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:20.281 11:36:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:20.281 11:36:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:20.281 11:36:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:20.281 11:36:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:20.281 11:36:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:20.281 11:36:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:20.281 11:36:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:20.281 11:36:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:20.281 11:36:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:20.281 11:36:45 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:20.281 11:36:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:20.281 11:36:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:20.281 11:36:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:20.281 11:36:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:20.281 11:36:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:20.281 11:36:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:20.281 11:36:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:20.281 11:36:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:20.281 11:36:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:20.281 11:36:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:20.281 11:36:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:20.281 11:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:20.281 11:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 
00:08:20.281 11:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:20.281 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:20.281 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.236 ms 00:08:20.281 00:08:20.281 --- 10.0.0.2 ping statistics --- 00:08:20.281 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:20.281 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:08:20.281 11:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:20.281 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:20.281 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms 00:08:20.281 00:08:20.281 --- 10.0.0.1 ping statistics --- 00:08:20.281 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:20.281 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:08:20.281 11:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:20.281 11:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:08:20.281 11:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:20.281 11:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:20.281 11:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:20.281 11:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:20.281 11:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:20.281 11:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:20.281 11:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 
00:08:20.281 11:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:20.281 11:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:20.281 11:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:20.281 11:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:20.281 11:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:20.281 11:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:20.281 11:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=2851742 00:08:20.282 11:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:20.282 11:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 2851742 00:08:20.282 11:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2851742 ']' 00:08:20.282 11:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:20.282 11:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:20.282 11:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:20.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:20.282 11:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:20.282 11:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:20.282 [2024-11-18 11:36:46.128945] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:08:20.282 [2024-11-18 11:36:46.129089] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:20.540 [2024-11-18 11:36:46.274931] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:20.540 [2024-11-18 11:36:46.407445] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:20.540 [2024-11-18 11:36:46.407536] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:20.540 [2024-11-18 11:36:46.407564] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:20.540 [2024-11-18 11:36:46.407589] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:20.540 [2024-11-18 11:36:46.407609] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:20.540 [2024-11-18 11:36:46.410497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:20.540 [2024-11-18 11:36:46.410609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:20.540 [2024-11-18 11:36:46.410655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:20.540 [2024-11-18 11:36:46.410661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:21.473 11:36:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:21.473 11:36:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:08:21.473 11:36:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:21.473 11:36:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:21.473 11:36:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:21.473 11:36:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:21.473 11:36:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:21.473 11:36:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.473 11:36:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:21.474 [2024-11-18 11:36:47.145537] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:21.474 11:36:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.474 11:36:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:21.474 11:36:47 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:21.474 11:36:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:21.474 11:36:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:21.474 11:36:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:21.474 11:36:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:21.474 11:36:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.474 11:36:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:21.474 Malloc0 00:08:21.474 [2024-11-18 11:36:47.270393] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:21.474 11:36:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.474 11:36:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:21.474 11:36:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:21.474 11:36:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:21.474 11:36:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2851918 00:08:21.474 11:36:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2851918 /var/tmp/bdevperf.sock 00:08:21.474 11:36:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2851918 ']' 00:08:21.474 11:36:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:21.474 11:36:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:21.474 11:36:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:21.474 11:36:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:21.474 11:36:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:21.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:21.474 11:36:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:21.474 11:36:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:21.474 11:36:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:21.474 11:36:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:21.474 11:36:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:21.474 11:36:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:21.474 { 00:08:21.474 "params": { 00:08:21.474 "name": "Nvme$subsystem", 00:08:21.474 "trtype": "$TEST_TRANSPORT", 00:08:21.474 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:21.474 "adrfam": "ipv4", 00:08:21.474 "trsvcid": "$NVMF_PORT", 00:08:21.474 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:21.474 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:21.474 "hdgst": ${hdgst:-false}, 
00:08:21.474 "ddgst": ${ddgst:-false} 00:08:21.474 }, 00:08:21.474 "method": "bdev_nvme_attach_controller" 00:08:21.474 } 00:08:21.474 EOF 00:08:21.474 )") 00:08:21.474 11:36:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:21.474 11:36:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:08:21.474 11:36:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:21.474 11:36:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:21.474 "params": { 00:08:21.474 "name": "Nvme0", 00:08:21.474 "trtype": "tcp", 00:08:21.474 "traddr": "10.0.0.2", 00:08:21.474 "adrfam": "ipv4", 00:08:21.474 "trsvcid": "4420", 00:08:21.474 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:21.474 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:21.474 "hdgst": false, 00:08:21.474 "ddgst": false 00:08:21.474 }, 00:08:21.474 "method": "bdev_nvme_attach_controller" 00:08:21.474 }' 00:08:21.732 [2024-11-18 11:36:47.391135] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:08:21.732 [2024-11-18 11:36:47.391275] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2851918 ] 00:08:21.732 [2024-11-18 11:36:47.531848] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.990 [2024-11-18 11:36:47.660544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.249 Running I/O for 10 seconds... 
00:08:22.507 11:36:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:22.507 11:36:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:08:22.507 11:36:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:22.507 11:36:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.507 11:36:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:22.507 11:36:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.507 11:36:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:22.507 11:36:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:22.507 11:36:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:22.507 11:36:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:22.507 11:36:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:22.507 11:36:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:22.507 11:36:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:22.507 11:36:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:22.507 11:36:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:08:22.507 11:36:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:22.507 11:36:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.507 11:36:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:22.766 11:36:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.766 11:36:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=387 00:08:22.766 11:36:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 387 -ge 100 ']' 00:08:22.766 11:36:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:22.766 11:36:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:22.766 11:36:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:22.766 11:36:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:22.766 11:36:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.767 11:36:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:22.767 [2024-11-18 11:36:48.427850] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:22.767 [2024-11-18 11:36:48.427953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:22.767 [2024-11-18 11:36:48.427983] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:08:22.767 [2024-11-18 11:36:48.428005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:22.767 [2024-11-18 11:36:48.428027] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:08:22.767 [2024-11-18 11:36:48.428048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:22.767 [2024-11-18 11:36:48.428080] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:08:22.767 [2024-11-18 11:36:48.428103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:22.767 [2024-11-18 11:36:48.428123] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:08:22.767 11:36:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.767 11:36:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:22.767 11:36:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.767 11:36:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:22.767 [2024-11-18 11:36:48.437630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.767 [2024-11-18 11:36:48.437670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:08:22.767 [2024-11-18 11:36:48.437711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:57472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.767 [2024-11-18 11:36:48.437735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:22.767 [2024-11-18 11:36:48.437761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:57600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.767 [2024-11-18 11:36:48.437825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:22.767 [2024-11-18 11:36:48.437852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:57728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.767 [2024-11-18 11:36:48.437885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:22.767 [2024-11-18 11:36:48.437909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:57856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.767 [2024-11-18 11:36:48.437931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:22.767 [2024-11-18 11:36:48.437972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:57984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.767 [2024-11-18 11:36:48.437995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:22.767 [2024-11-18 11:36:48.438020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:58112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.767 [2024-11-18 11:36:48.438042] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:22.767 [2024-11-18 11:36:48.438066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:58240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.767 [2024-11-18 11:36:48.438088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:22.767 [2024-11-18 11:36:48.438113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:58368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.767 [2024-11-18 11:36:48.438135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:22.767 [2024-11-18 11:36:48.438160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:58496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.767 [2024-11-18 11:36:48.438189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:22.767 [2024-11-18 11:36:48.438215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:58624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.767 [2024-11-18 11:36:48.438238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:22.767 [2024-11-18 11:36:48.438262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:58752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.767 [2024-11-18 11:36:48.438285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:22.767 [2024-11-18 11:36:48.438311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:12 nsid:1 lba:58880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.767 11:36:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.767 [2024-11-18 11:36:48.438333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:22.767 [2024-11-18 11:36:48.438357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:59008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.767 [2024-11-18 11:36:48.438381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:22.767 [2024-11-18 11:36:48.438407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:59136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.767 [2024-11-18 11:36:48.438430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:22.767 11:36:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:08:22.767 [2024-11-18 11:36:48.438455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:59264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.767 [2024-11-18 11:36:48.438478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:22.767 [2024-11-18 11:36:48.438513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:59392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.767 [2024-11-18 11:36:48.438547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:22.767 [2024-11-18 11:36:48.438571] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:59520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.767 [2024-11-18 11:36:48.438593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:22.767 [2024-11-18 11:36:48.438617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:59648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.767 [2024-11-18 11:36:48.438641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:22.767 [2024-11-18 11:36:48.438665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:59776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.767 [2024-11-18 11:36:48.438688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:22.767 [2024-11-18 11:36:48.438716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:59904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.767 [2024-11-18 11:36:48.438739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:22.767 [2024-11-18 11:36:48.438779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:60032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.767 [2024-11-18 11:36:48.438802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:22.767 [2024-11-18 11:36:48.438826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:60160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.767 [2024-11-18 11:36:48.438856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:22.767 [2024-11-18 11:36:48.438880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:60288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.767 [2024-11-18 11:36:48.438913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:22.767 [2024-11-18 11:36:48.438938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:60416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.767 [2024-11-18 11:36:48.438960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:22.767 [2024-11-18 11:36:48.438984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:60544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.767 [2024-11-18 11:36:48.439006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:22.767 [2024-11-18 11:36:48.439031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:60672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.767 [2024-11-18 11:36:48.439054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:22.767 [2024-11-18 11:36:48.439078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:60800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.767 [2024-11-18 11:36:48.439101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:22.767 [2024-11-18 11:36:48.439127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:60928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:08:22.767 [2024-11-18 11:36:48.439149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:22.767 [2024-11-18 11:36:48.439191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:61056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.767 [2024-11-18 11:36:48.439214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:22.767 [2024-11-18 11:36:48.439237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:61184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.767 [2024-11-18 11:36:48.439260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:22.767 [2024-11-18 11:36:48.439284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:61312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.768 [2024-11-18 11:36:48.439305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:22.768 [2024-11-18 11:36:48.439329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:61440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.768 [2024-11-18 11:36:48.439350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:22.768 [2024-11-18 11:36:48.439374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:61568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.768 [2024-11-18 11:36:48.439401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:22.768 [2024-11-18 11:36:48.439426] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:61696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.768 [2024-11-18 11:36:48.439448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:22.768 [2024-11-18 11:36:48.439488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:61824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.768 [2024-11-18 11:36:48.439521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:22.768 [2024-11-18 11:36:48.439555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:61952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.768 [2024-11-18 11:36:48.439577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:22.768 [2024-11-18 11:36:48.439601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:62080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.768 [2024-11-18 11:36:48.439624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:22.768 [2024-11-18 11:36:48.439647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:62208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.768 [2024-11-18 11:36:48.439669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:22.768 [2024-11-18 11:36:48.439693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:62336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.768 [2024-11-18 11:36:48.439715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:22.768 [2024-11-18 11:36:48.439739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:62464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.768 [2024-11-18 11:36:48.439760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:22.768 [2024-11-18 11:36:48.439794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:62592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.768 [2024-11-18 11:36:48.439830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:22.768 [2024-11-18 11:36:48.439862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:62720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.768 [2024-11-18 11:36:48.439884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:22.768 [2024-11-18 11:36:48.439907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:62848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.768 [2024-11-18 11:36:48.439928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:22.768 [2024-11-18 11:36:48.439951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:62976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.768 [2024-11-18 11:36:48.439973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:22.768 [2024-11-18 11:36:48.439996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:63104 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.768 [2024-11-18 11:36:48.440018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:22.768 [2024-11-18 11:36:48.440046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:63232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.768 [2024-11-18 11:36:48.440067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:22.768 [2024-11-18 11:36:48.440098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:63360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.768 [2024-11-18 11:36:48.440121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:22.768 [2024-11-18 11:36:48.440144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:63488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.768 [2024-11-18 11:36:48.440166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:22.768 [2024-11-18 11:36:48.440190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:63616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.768 [2024-11-18 11:36:48.440211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:22.768 [2024-11-18 11:36:48.440235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:63744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.768 [2024-11-18 11:36:48.440256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:22.768 
[2024-11-18 11:36:48.440280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:63872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.768 [2024-11-18 11:36:48.440301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:22.768 [2024-11-18 11:36:48.440325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:64000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.768 [2024-11-18 11:36:48.440347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:22.768 [2024-11-18 11:36:48.440370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:64128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.768 [2024-11-18 11:36:48.440392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:22.768 [2024-11-18 11:36:48.440415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:64256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.768 [2024-11-18 11:36:48.440436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:22.768 [2024-11-18 11:36:48.440460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:64384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.768 [2024-11-18 11:36:48.440505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:22.768 [2024-11-18 11:36:48.440539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:64512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.768 [2024-11-18 11:36:48.440564] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:22.768 [2024-11-18 11:36:48.440589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:64640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.768 [2024-11-18 11:36:48.440612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:22.768 [2024-11-18 11:36:48.440636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:64768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.768 [2024-11-18 11:36:48.440667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:22.768 [2024-11-18 11:36:48.440692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:64896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.768 [2024-11-18 11:36:48.440715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:22.768 [2024-11-18 11:36:48.440739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:65024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.768 [2024-11-18 11:36:48.440761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:22.768 [2024-11-18 11:36:48.440792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:65152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.768 [2024-11-18 11:36:48.440830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:22.768 [2024-11-18 11:36:48.440864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:62 nsid:1 lba:65280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.768 [2024-11-18 11:36:48.440885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:22.768 [2024-11-18 11:36:48.440916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:65408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.768 [2024-11-18 11:36:48.440939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:22.768 [2024-11-18 11:36:48.441294] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:08:22.768 [2024-11-18 11:36:48.442520] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:08:22.768 task offset: 57344 on job bdev=Nvme0n1 fails 00:08:22.768 00:08:22.768 Latency(us) 00:08:22.768 [2024-11-18T10:36:48.653Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:22.768 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:22.768 Job: Nvme0n1 ended in about 0.35 seconds with error 00:08:22.768 Verification LBA range: start 0x0 length 0x400 00:08:22.768 Nvme0n1 : 0.35 1290.08 80.63 184.30 0.00 41909.74 4102.07 40972.14 00:08:22.768 [2024-11-18T10:36:48.653Z] =================================================================================================================== 00:08:22.768 [2024-11-18T10:36:48.653Z] Total : 1290.08 80.63 184.30 0.00 41909.74 4102.07 40972.14 00:08:22.768 [2024-11-18 11:36:48.447380] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:22.768 [2024-11-18 11:36:48.539712] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:08:23.703 11:36:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2851918 00:08:23.703 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2851918) - No such process 00:08:23.703 11:36:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:08:23.703 11:36:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:23.703 11:36:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:23.703 11:36:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:23.703 11:36:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:23.703 11:36:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:23.703 11:36:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:23.703 11:36:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:23.703 { 00:08:23.703 "params": { 00:08:23.703 "name": "Nvme$subsystem", 00:08:23.703 "trtype": "$TEST_TRANSPORT", 00:08:23.703 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:23.703 "adrfam": "ipv4", 00:08:23.703 "trsvcid": "$NVMF_PORT", 00:08:23.703 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:23.703 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:23.703 "hdgst": ${hdgst:-false}, 00:08:23.703 "ddgst": ${ddgst:-false} 00:08:23.703 }, 00:08:23.703 "method": "bdev_nvme_attach_controller" 00:08:23.703 } 00:08:23.703 EOF 00:08:23.703 )") 00:08:23.703 
11:36:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:23.703 11:36:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:08:23.703 11:36:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:23.703 11:36:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:23.703 "params": { 00:08:23.703 "name": "Nvme0", 00:08:23.703 "trtype": "tcp", 00:08:23.703 "traddr": "10.0.0.2", 00:08:23.703 "adrfam": "ipv4", 00:08:23.703 "trsvcid": "4420", 00:08:23.703 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:23.703 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:23.703 "hdgst": false, 00:08:23.703 "ddgst": false 00:08:23.703 }, 00:08:23.703 "method": "bdev_nvme_attach_controller" 00:08:23.703 }' 00:08:23.703 [2024-11-18 11:36:49.525220] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:08:23.703 [2024-11-18 11:36:49.525369] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2852199 ] 00:08:23.961 [2024-11-18 11:36:49.660842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.961 [2024-11-18 11:36:49.790716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.528 Running I/O for 1 seconds... 
00:08:25.463 1344.00 IOPS, 84.00 MiB/s 00:08:25.463 Latency(us) 00:08:25.463 [2024-11-18T10:36:51.348Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:25.463 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:25.463 Verification LBA range: start 0x0 length 0x400 00:08:25.463 Nvme0n1 : 1.02 1383.87 86.49 0.00 0.00 45439.40 6844.87 40389.59 00:08:25.463 [2024-11-18T10:36:51.348Z] =================================================================================================================== 00:08:25.463 [2024-11-18T10:36:51.348Z] Total : 1383.87 86.49 0.00 0.00 45439.40 6844.87 40389.59 00:08:26.397 11:36:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:08:26.397 11:36:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:26.397 11:36:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:08:26.397 11:36:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:26.397 11:36:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:08:26.397 11:36:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:26.397 11:36:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:08:26.397 11:36:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:26.397 11:36:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:08:26.397 11:36:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:26.397 11:36:52 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:26.397 rmmod nvme_tcp 00:08:26.397 rmmod nvme_fabrics 00:08:26.397 rmmod nvme_keyring 00:08:26.397 11:36:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:26.397 11:36:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:08:26.397 11:36:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:08:26.397 11:36:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 2851742 ']' 00:08:26.397 11:36:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 2851742 00:08:26.397 11:36:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 2851742 ']' 00:08:26.397 11:36:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 2851742 00:08:26.397 11:36:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:08:26.397 11:36:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:26.397 11:36:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2851742 00:08:26.397 11:36:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:26.397 11:36:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:26.397 11:36:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2851742' 00:08:26.397 killing process with pid 2851742 00:08:26.397 11:36:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 2851742 00:08:26.397 11:36:52 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 2851742 00:08:27.770 [2024-11-18 11:36:53.368574] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:27.770 11:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:27.770 11:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:27.770 11:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:27.770 11:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:08:27.770 11:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:08:27.770 11:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:27.770 11:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:08:27.770 11:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:27.770 11:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:27.770 11:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:27.770 11:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:27.770 11:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:29.674 11:36:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:29.674 11:36:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:29.674 00:08:29.674 real 0m11.699s 00:08:29.674 user 0m31.983s 
00:08:29.674 sys 0m3.075s 00:08:29.674 11:36:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:29.674 11:36:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:29.674 ************************************ 00:08:29.674 END TEST nvmf_host_management 00:08:29.674 ************************************ 00:08:29.674 11:36:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:29.674 11:36:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:29.674 11:36:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:29.674 11:36:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:29.934 ************************************ 00:08:29.934 START TEST nvmf_lvol 00:08:29.934 ************************************ 00:08:29.934 11:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:29.934 * Looking for test storage... 
00:08:29.934 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:29.934 11:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:29.934 11:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:08:29.934 11:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:29.934 11:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:29.934 11:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:29.934 11:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:29.934 11:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:29.934 11:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:08:29.934 11:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:08:29.934 11:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:08:29.934 11:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:08:29.934 11:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:08:29.934 11:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:08:29.934 11:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:08:29.934 11:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:29.934 11:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:08:29.934 11:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:08:29.934 11:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:29.934 11:36:55 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:29.934 11:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:08:29.934 11:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:08:29.934 11:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:29.934 11:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:08:29.934 11:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:08:29.934 11:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:08:29.934 11:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:08:29.934 11:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:29.934 11:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:08:29.934 11:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:08:29.934 11:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:29.934 11:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:29.934 11:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:08:29.934 11:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:29.934 11:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:29.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.934 --rc genhtml_branch_coverage=1 00:08:29.934 --rc genhtml_function_coverage=1 00:08:29.934 --rc genhtml_legend=1 00:08:29.934 --rc geninfo_all_blocks=1 00:08:29.934 --rc geninfo_unexecuted_blocks=1 
00:08:29.934 00:08:29.934 ' 00:08:29.934 11:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:29.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.934 --rc genhtml_branch_coverage=1 00:08:29.934 --rc genhtml_function_coverage=1 00:08:29.934 --rc genhtml_legend=1 00:08:29.934 --rc geninfo_all_blocks=1 00:08:29.934 --rc geninfo_unexecuted_blocks=1 00:08:29.934 00:08:29.934 ' 00:08:29.934 11:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:29.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.934 --rc genhtml_branch_coverage=1 00:08:29.934 --rc genhtml_function_coverage=1 00:08:29.934 --rc genhtml_legend=1 00:08:29.934 --rc geninfo_all_blocks=1 00:08:29.934 --rc geninfo_unexecuted_blocks=1 00:08:29.934 00:08:29.934 ' 00:08:29.934 11:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:29.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.934 --rc genhtml_branch_coverage=1 00:08:29.934 --rc genhtml_function_coverage=1 00:08:29.934 --rc genhtml_legend=1 00:08:29.934 --rc geninfo_all_blocks=1 00:08:29.934 --rc geninfo_unexecuted_blocks=1 00:08:29.934 00:08:29.934 ' 00:08:29.934 11:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:29.934 11:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:29.935 11:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:29.935 11:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:29.935 11:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:29.935 11:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:29.935 11:36:55 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:29.935 11:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:29.935 11:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:29.935 11:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:29.935 11:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:29.935 11:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:29.935 11:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:29.935 11:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:29.935 11:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:29.935 11:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:29.935 11:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:29.935 11:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:29.935 11:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:29.935 11:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:08:29.935 11:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:29.935 11:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:29.935 11:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:29.935 11:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.935 11:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.935 11:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.935 11:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:29.935 11:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.935 11:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:08:29.935 11:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:29.935 11:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:29.935 11:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:29.935 11:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:29.935 11:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:29.935 11:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:29.935 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:29.935 11:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:29.935 11:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:29.935 11:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:29.935 11:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:29.935 11:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:29.935 11:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:08:29.935 11:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:29.935 11:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:29.935 11:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:29.935 11:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:29.935 11:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:29.935 11:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:29.935 11:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:29.935 11:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:29.935 11:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:29.935 11:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:29.935 11:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:29.935 11:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:29.935 11:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:29.935 11:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:08:29.935 11:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:31.840 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:31.840 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:08:31.840 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:31.840 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:31.840 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:31.840 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:31.840 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:31.840 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:08:31.840 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:31.840 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:08:31.840 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:08:31.840 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:08:31.840 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:08:31.840 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@322 -- # mlx=() 00:08:31.840 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:08:31.840 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:31.840 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:31.840 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:31.840 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:31.840 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:31.840 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:31.840 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:31.840 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:31.840 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:31.840 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:31.840 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:31.840 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:31.840 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:31.840 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:31.840 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 
00:08:31.840 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:31.840 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:31.840 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:31.840 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:31.840 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:31.840 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:31.840 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:31.840 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:31.840 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:31.840 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:31.840 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:31.840 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:31.840 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:31.840 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:31.840 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:31.840 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:31.840 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:31.840 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:31.840 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:31.840 
11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:31.840 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:31.840 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:31.840 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:31.840 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:31.840 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:31.840 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:31.840 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:31.840 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:31.840 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:31.840 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:31.840 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:31.840 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:31.840 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:31.840 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:31.840 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:31.840 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:31.840 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:31.840 11:36:57 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:31.840 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:31.840 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:31.840 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:31.840 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:31.840 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:31.840 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:08:31.840 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:31.840 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:31.840 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:31.841 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:31.841 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:31.841 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:31.841 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:31.841 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:31.841 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:31.841 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:31.841 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:31.841 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 
-- # NVMF_SECOND_INITIATOR_IP= 00:08:31.841 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:31.841 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:31.841 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:31.841 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:31.841 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:31.841 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:32.141 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:32.141 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:32.141 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:32.141 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:32.141 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:32.141 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:32.141 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:32.141 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:32.141 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:32.141 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms 00:08:32.141 00:08:32.141 --- 10.0.0.2 ping statistics --- 00:08:32.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:32.141 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:08:32.141 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:32.141 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:32.141 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.174 ms 00:08:32.141 00:08:32.141 --- 10.0.0.1 ping statistics --- 00:08:32.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:32.141 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:08:32.141 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:32.141 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:08:32.141 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:32.141 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:32.141 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:32.141 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:32.141 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:32.141 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:32.141 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:32.141 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:32.141 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:32.141 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:08:32.141 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:32.141 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=2854547 00:08:32.141 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:32.141 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 2854547 00:08:32.141 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 2854547 ']' 00:08:32.141 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:32.141 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:32.141 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:32.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:32.141 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:32.141 11:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:32.141 [2024-11-18 11:36:57.956000] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:08:32.141 [2024-11-18 11:36:57.956145] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:32.417 [2024-11-18 11:36:58.113634] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:32.417 [2024-11-18 11:36:58.256349] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:32.417 [2024-11-18 11:36:58.256417] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:32.417 [2024-11-18 11:36:58.256444] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:32.417 [2024-11-18 11:36:58.256469] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:32.417 [2024-11-18 11:36:58.256498] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:32.417 [2024-11-18 11:36:58.259228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:32.417 [2024-11-18 11:36:58.259292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.417 [2024-11-18 11:36:58.259297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:33.352 11:36:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:33.352 11:36:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:08:33.352 11:36:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:33.352 11:36:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:33.352 11:36:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:33.352 11:36:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:33.352 11:36:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:33.611 [2024-11-18 11:36:59.263736] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:33.611 11:36:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:33.869 11:36:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:33.869 11:36:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:34.127 11:36:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:34.127 11:36:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:34.694 11:37:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:34.952 11:37:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=2773b1a9-268b-43e1-a58e-cd4727b9380c 00:08:34.952 11:37:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 2773b1a9-268b-43e1-a58e-cd4727b9380c lvol 20 00:08:35.210 11:37:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=bbd8a5fc-cce4-4189-8055-f70a33b1f5a2 00:08:35.210 11:37:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:35.469 11:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bbd8a5fc-cce4-4189-8055-f70a33b1f5a2 00:08:35.727 11:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:35.984 [2024-11-18 11:37:01.703258] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:35.984 11:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:36.242 11:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2855115 00:08:36.242 11:37:01 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:36.242 11:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:37.184 11:37:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot bbd8a5fc-cce4-4189-8055-f70a33b1f5a2 MY_SNAPSHOT 00:08:37.755 11:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=566434c4-714d-4d51-9f2a-17ef7c94b35d 00:08:37.755 11:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize bbd8a5fc-cce4-4189-8055-f70a33b1f5a2 30 00:08:38.015 11:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 566434c4-714d-4d51-9f2a-17ef7c94b35d MY_CLONE 00:08:38.273 11:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=c6022cc5-0df5-46c2-9b85-7e9087e3c737 00:08:38.273 11:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate c6022cc5-0df5-46c2-9b85-7e9087e3c737 00:08:39.211 11:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2855115 00:08:47.342 Initializing NVMe Controllers 00:08:47.342 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:47.342 Controller IO queue size 128, less than required. 00:08:47.342 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:08:47.342 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:47.342 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:47.342 Initialization complete. Launching workers. 00:08:47.342 ======================================================== 00:08:47.342 Latency(us) 00:08:47.342 Device Information : IOPS MiB/s Average min max 00:08:47.342 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 8176.80 31.94 15667.02 385.32 146316.80 00:08:47.342 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 7980.40 31.17 16052.30 3333.37 160776.48 00:08:47.342 ======================================================== 00:08:47.342 Total : 16157.20 63.11 15857.32 385.32 160776.48 00:08:47.342 00:08:47.342 11:37:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:47.342 11:37:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete bbd8a5fc-cce4-4189-8055-f70a33b1f5a2 00:08:47.342 11:37:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2773b1a9-268b-43e1-a58e-cd4727b9380c 00:08:47.601 11:37:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:47.601 11:37:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:47.601 11:37:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:47.601 11:37:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:47.601 11:37:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:08:47.601 11:37:13 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:47.601 11:37:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:08:47.601 11:37:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:47.601 11:37:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:47.601 rmmod nvme_tcp 00:08:47.601 rmmod nvme_fabrics 00:08:47.601 rmmod nvme_keyring 00:08:47.601 11:37:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:47.601 11:37:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:08:47.601 11:37:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:08:47.601 11:37:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 2854547 ']' 00:08:47.601 11:37:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 2854547 00:08:47.601 11:37:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 2854547 ']' 00:08:47.601 11:37:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 2854547 00:08:47.601 11:37:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:08:47.601 11:37:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:47.601 11:37:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2854547 00:08:47.601 11:37:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:47.601 11:37:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:47.601 11:37:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2854547' 00:08:47.601 killing process with pid 2854547 00:08:47.601 11:37:13 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 2854547 00:08:47.601 11:37:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 2854547 00:08:48.981 11:37:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:48.981 11:37:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:48.981 11:37:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:48.981 11:37:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:08:48.981 11:37:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:08:48.981 11:37:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:48.981 11:37:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:08:49.241 11:37:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:49.241 11:37:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:49.241 11:37:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:49.241 11:37:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:49.241 11:37:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:51.148 11:37:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:51.148 00:08:51.148 real 0m21.347s 00:08:51.148 user 1m11.593s 00:08:51.148 sys 0m5.460s 00:08:51.148 11:37:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:51.148 11:37:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:51.148 ************************************ 00:08:51.148 END TEST 
nvmf_lvol 00:08:51.148 ************************************ 00:08:51.148 11:37:16 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:51.148 11:37:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:51.148 11:37:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:51.148 11:37:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:51.148 ************************************ 00:08:51.148 START TEST nvmf_lvs_grow 00:08:51.148 ************************************ 00:08:51.148 11:37:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:51.148 * Looking for test storage... 00:08:51.148 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:51.148 11:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:51.148 11:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:08:51.148 11:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:51.407 11:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:51.407 11:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:51.407 11:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:51.407 11:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:51.407 11:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:08:51.407 11:37:17 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:08:51.407 11:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:08:51.407 11:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:08:51.407 11:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:08:51.407 11:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:08:51.407 11:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:08:51.407 11:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:51.407 11:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:08:51.407 11:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:08:51.407 11:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:51.407 11:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:51.407 11:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:08:51.407 11:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:08:51.407 11:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:51.407 11:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:08:51.407 11:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:08:51.407 11:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:08:51.407 11:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:08:51.407 11:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:51.407 11:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:08:51.407 11:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:08:51.407 11:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:51.407 11:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:51.407 11:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:08:51.407 11:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:51.407 11:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:51.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.407 --rc genhtml_branch_coverage=1 00:08:51.407 --rc genhtml_function_coverage=1 00:08:51.407 --rc genhtml_legend=1 00:08:51.407 --rc geninfo_all_blocks=1 00:08:51.407 --rc geninfo_unexecuted_blocks=1 00:08:51.407 00:08:51.407 ' 
00:08:51.407 11:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:51.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.407 --rc genhtml_branch_coverage=1 00:08:51.407 --rc genhtml_function_coverage=1 00:08:51.407 --rc genhtml_legend=1 00:08:51.407 --rc geninfo_all_blocks=1 00:08:51.407 --rc geninfo_unexecuted_blocks=1 00:08:51.407 00:08:51.407 ' 00:08:51.407 11:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:51.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.407 --rc genhtml_branch_coverage=1 00:08:51.407 --rc genhtml_function_coverage=1 00:08:51.407 --rc genhtml_legend=1 00:08:51.407 --rc geninfo_all_blocks=1 00:08:51.407 --rc geninfo_unexecuted_blocks=1 00:08:51.407 00:08:51.407 ' 00:08:51.407 11:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:51.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.407 --rc genhtml_branch_coverage=1 00:08:51.407 --rc genhtml_function_coverage=1 00:08:51.407 --rc genhtml_legend=1 00:08:51.407 --rc geninfo_all_blocks=1 00:08:51.407 --rc geninfo_unexecuted_blocks=1 00:08:51.407 00:08:51.407 ' 00:08:51.407 11:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:51.407 11:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:51.407 11:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:51.407 11:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:51.407 11:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:51.407 11:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:51.407 11:37:17 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:51.407 11:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:51.407 11:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:51.407 11:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:51.407 11:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:51.407 11:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:51.407 11:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:51.407 11:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:51.407 11:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:51.407 11:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:51.407 11:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:51.407 11:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:51.407 11:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:51.407 11:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:08:51.407 11:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:51.407 11:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:51.407 
11:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:51.408 11:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.408 11:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.408 11:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.408 11:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:51.408 11:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.408 11:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:08:51.408 11:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:51.408 11:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:51.408 11:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:51.408 11:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:51.408 11:37:17 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:51.408 11:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:51.408 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:51.408 11:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:51.408 11:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:51.408 11:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:51.408 11:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:51.408 11:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:51.408 11:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:51.408 11:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:51.408 11:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:51.408 11:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:51.408 11:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:51.408 11:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:51.408 11:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:51.408 11:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:51.408 11:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:51.408 
11:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:51.408 11:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:51.408 11:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:08:51.408 11:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:53.316 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:53.316 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:08:53.316 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:53.316 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:53.316 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:53.316 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:53.316 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:53.316 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:08:53.316 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:53.316 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:08:53.316 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:08:53.316 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:08:53.316 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:08:53.316 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:08:53.316 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local 
-ga mlx 00:08:53.316 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:53.316 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:53.316 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:53.316 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:53.317 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:53.317 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:53.317 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:53.317 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:53.317 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:53.317 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:53.317 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:53.317 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:53.317 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:53.317 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:53.317 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:53.317 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:53.317 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:53.317 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:53.317 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:53.317 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:53.317 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:53.317 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:53.317 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:53.317 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:53.317 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:53.317 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:53.317 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:53.317 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:53.317 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:53.317 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:53.317 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:53.317 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:53.317 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:53.317 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:53.317 
11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:53.317 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:53.317 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:53.317 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:53.317 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:53.317 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:53.317 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:53.317 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:53.317 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:53.317 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:53.317 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:53.317 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:53.317 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:53.317 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:53.317 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:53.317 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:53.317 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:53.317 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:08:53.317 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:53.317 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:53.317 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:53.317 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:53.317 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:53.317 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:53.317 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:08:53.317 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:53.317 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:53.317 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:53.317 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:53.317 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:53.317 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:53.317 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:53.317 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:53.317 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:53.317 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:53.317 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:53.317 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:53.317 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:53.317 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:53.317 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:53.317 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:53.317 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:53.317 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:53.576 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:53.576 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:53.576 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:53.576 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:53.576 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:53.576 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:53.576 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:53.576 11:37:19 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:53.576 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:53.576 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.168 ms 00:08:53.576 00:08:53.576 --- 10.0.0.2 ping statistics --- 00:08:53.576 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:53.576 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:08:53.576 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:53.576 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:53.576 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:08:53.576 00:08:53.576 --- 10.0.0.1 ping statistics --- 00:08:53.576 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:53.576 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:08:53.576 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:53.576 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:08:53.576 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:53.576 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:53.576 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:53.576 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:53.576 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:53.576 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:53.576 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:53.576 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # 
nvmfappstart -m 0x1 00:08:53.576 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:53.576 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:53.576 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:53.576 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=2858546 00:08:53.576 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:53.576 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 2858546 00:08:53.576 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 2858546 ']' 00:08:53.576 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:53.576 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:53.576 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:53.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:53.576 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:53.576 11:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:53.576 [2024-11-18 11:37:19.388185] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:08:53.576 [2024-11-18 11:37:19.388344] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:53.836 [2024-11-18 11:37:19.544975] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.836 [2024-11-18 11:37:19.681984] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:53.836 [2024-11-18 11:37:19.682088] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:53.836 [2024-11-18 11:37:19.682114] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:53.836 [2024-11-18 11:37:19.682139] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:53.836 [2024-11-18 11:37:19.682164] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
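The network setup traced above routes its firewall change through an `ipts` helper (`nvmf/common.sh@287`) that expands, at `nvmf/common.sh@790`, into an `iptables` call tagged with an `SPDK_NVMF:` comment so teardown can later find and remove exactly the rules this test added. Based only on the expanded command visible in the log, the wrapper can be sketched roughly as below; this is an illustration, not the actual `common.sh` source, and it echoes the command instead of running it, since the real call needs root and a live netfilter stack:

```shell
#!/bin/sh
# Sketch of the ipts wrapper implied by the trace (nvmf/common.sh@790).
# It appends a comment built from the original arguments, so every
# SPDK-created rule is self-describing and easy to delete on cleanup.
# echo stands in for the real iptables invocation (root required).
ipts() {
    echo iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}

# Reproduces the rule installed in the log: allow NVMe/TCP (port 4420)
# in from the initiator-side interface.
ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
```

The emitted command matches the expanded form in the trace, comment and all.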
00:08:53.836 [2024-11-18 11:37:19.683856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.772 11:37:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:54.772 11:37:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:08:54.772 11:37:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:54.772 11:37:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:54.772 11:37:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:54.772 11:37:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:54.772 11:37:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:55.030 [2024-11-18 11:37:20.694429] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:55.030 11:37:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:55.030 11:37:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:55.030 11:37:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:55.030 11:37:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:55.030 ************************************ 00:08:55.030 START TEST lvs_grow_clean 00:08:55.030 ************************************ 00:08:55.030 11:37:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:08:55.030 11:37:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:08:55.030 11:37:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:55.030 11:37:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:55.030 11:37:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:55.030 11:37:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:55.030 11:37:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:55.030 11:37:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:55.030 11:37:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:55.030 11:37:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:55.290 11:37:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:55.290 11:37:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:55.550 11:37:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=990dae60-e818-42fe-a7fd-5b5b6ce73f30 00:08:55.550 11:37:21 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 990dae60-e818-42fe-a7fd-5b5b6ce73f30 00:08:55.550 11:37:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:55.810 11:37:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:55.810 11:37:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:55.810 11:37:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 990dae60-e818-42fe-a7fd-5b5b6ce73f30 lvol 150 00:08:56.069 11:37:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=9c1ed42a-fcc4-4ea3-a140-0b4f83f16d7a 00:08:56.069 11:37:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:56.069 11:37:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:56.340 [2024-11-18 11:37:22.184586] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:56.340 [2024-11-18 11:37:22.184721] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:56.340 true 00:08:56.340 11:37:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 990dae60-e818-42fe-a7fd-5b5b6ce73f30 00:08:56.340 11:37:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:56.599 11:37:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:56.599 11:37:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:57.166 11:37:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 9c1ed42a-fcc4-4ea3-a140-0b4f83f16d7a 00:08:57.166 11:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:57.735 [2024-11-18 11:37:23.324261] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:57.735 11:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:57.994 11:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2859122 00:08:57.994 11:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:57.994 11:37:23 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:57.994 11:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2859122 /var/tmp/bdevperf.sock 00:08:57.994 11:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 2859122 ']' 00:08:57.994 11:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:57.994 11:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:57.994 11:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:57.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:57.994 11:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:57.994 11:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:57.994 [2024-11-18 11:37:23.702262] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:08:57.994 [2024-11-18 11:37:23.702393] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2859122 ] 00:08:57.994 [2024-11-18 11:37:23.846917] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:58.254 [2024-11-18 11:37:23.983264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:59.194 11:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:59.194 11:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:08:59.194 11:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:59.194 Nvme0n1 00:08:59.194 11:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:59.455 [ 00:08:59.455 { 00:08:59.455 "name": "Nvme0n1", 00:08:59.455 "aliases": [ 00:08:59.455 "9c1ed42a-fcc4-4ea3-a140-0b4f83f16d7a" 00:08:59.455 ], 00:08:59.455 "product_name": "NVMe disk", 00:08:59.455 "block_size": 4096, 00:08:59.455 "num_blocks": 38912, 00:08:59.455 "uuid": "9c1ed42a-fcc4-4ea3-a140-0b4f83f16d7a", 00:08:59.455 "numa_id": 0, 00:08:59.455 "assigned_rate_limits": { 00:08:59.455 "rw_ios_per_sec": 0, 00:08:59.455 "rw_mbytes_per_sec": 0, 00:08:59.455 "r_mbytes_per_sec": 0, 00:08:59.455 "w_mbytes_per_sec": 0 00:08:59.455 }, 00:08:59.455 "claimed": false, 00:08:59.455 "zoned": false, 00:08:59.455 "supported_io_types": { 00:08:59.455 "read": true, 
00:08:59.455 "write": true, 00:08:59.455 "unmap": true, 00:08:59.455 "flush": true, 00:08:59.455 "reset": true, 00:08:59.455 "nvme_admin": true, 00:08:59.455 "nvme_io": true, 00:08:59.455 "nvme_io_md": false, 00:08:59.455 "write_zeroes": true, 00:08:59.455 "zcopy": false, 00:08:59.455 "get_zone_info": false, 00:08:59.455 "zone_management": false, 00:08:59.455 "zone_append": false, 00:08:59.455 "compare": true, 00:08:59.455 "compare_and_write": true, 00:08:59.455 "abort": true, 00:08:59.455 "seek_hole": false, 00:08:59.455 "seek_data": false, 00:08:59.455 "copy": true, 00:08:59.455 "nvme_iov_md": false 00:08:59.455 }, 00:08:59.455 "memory_domains": [ 00:08:59.455 { 00:08:59.455 "dma_device_id": "system", 00:08:59.455 "dma_device_type": 1 00:08:59.455 } 00:08:59.455 ], 00:08:59.455 "driver_specific": { 00:08:59.455 "nvme": [ 00:08:59.455 { 00:08:59.455 "trid": { 00:08:59.455 "trtype": "TCP", 00:08:59.455 "adrfam": "IPv4", 00:08:59.455 "traddr": "10.0.0.2", 00:08:59.455 "trsvcid": "4420", 00:08:59.455 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:59.455 }, 00:08:59.455 "ctrlr_data": { 00:08:59.455 "cntlid": 1, 00:08:59.455 "vendor_id": "0x8086", 00:08:59.455 "model_number": "SPDK bdev Controller", 00:08:59.455 "serial_number": "SPDK0", 00:08:59.455 "firmware_revision": "25.01", 00:08:59.455 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:59.455 "oacs": { 00:08:59.455 "security": 0, 00:08:59.455 "format": 0, 00:08:59.455 "firmware": 0, 00:08:59.455 "ns_manage": 0 00:08:59.455 }, 00:08:59.455 "multi_ctrlr": true, 00:08:59.455 "ana_reporting": false 00:08:59.455 }, 00:08:59.455 "vs": { 00:08:59.455 "nvme_version": "1.3" 00:08:59.455 }, 00:08:59.455 "ns_data": { 00:08:59.455 "id": 1, 00:08:59.455 "can_share": true 00:08:59.455 } 00:08:59.455 } 00:08:59.455 ], 00:08:59.455 "mp_policy": "active_passive" 00:08:59.455 } 00:08:59.455 } 00:08:59.455 ] 00:08:59.716 11:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=2859377 00:08:59.716 11:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:59.716 11:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:59.716 Running I/O for 10 seconds... 00:09:00.656 Latency(us) 00:09:00.656 [2024-11-18T10:37:26.541Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:00.656 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:00.656 Nvme0n1 : 1.00 10703.00 41.81 0.00 0.00 0.00 0.00 0.00 00:09:00.656 [2024-11-18T10:37:26.541Z] =================================================================================================================== 00:09:00.656 [2024-11-18T10:37:26.541Z] Total : 10703.00 41.81 0.00 0.00 0.00 0.00 0.00 00:09:00.656 00:09:01.597 11:37:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 990dae60-e818-42fe-a7fd-5b5b6ce73f30 00:09:01.597 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:01.597 Nvme0n1 : 2.00 10749.00 41.99 0.00 0.00 0.00 0.00 0.00 00:09:01.597 [2024-11-18T10:37:27.482Z] =================================================================================================================== 00:09:01.597 [2024-11-18T10:37:27.482Z] Total : 10749.00 41.99 0.00 0.00 0.00 0.00 0.00 00:09:01.597 00:09:01.857 true 00:09:01.857 11:37:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 990dae60-e818-42fe-a7fd-5b5b6ce73f30 00:09:01.857 11:37:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:09:02.138 11:37:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:02.138 11:37:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:02.138 11:37:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2859377 00:09:02.752 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:02.752 Nvme0n1 : 3.00 10806.67 42.21 0.00 0.00 0.00 0.00 0.00 00:09:02.752 [2024-11-18T10:37:28.637Z] =================================================================================================================== 00:09:02.752 [2024-11-18T10:37:28.637Z] Total : 10806.67 42.21 0.00 0.00 0.00 0.00 0.00 00:09:02.752 00:09:03.693 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:03.693 Nvme0n1 : 4.00 10835.50 42.33 0.00 0.00 0.00 0.00 0.00 00:09:03.693 [2024-11-18T10:37:29.578Z] =================================================================================================================== 00:09:03.693 [2024-11-18T10:37:29.578Z] Total : 10835.50 42.33 0.00 0.00 0.00 0.00 0.00 00:09:03.693 00:09:04.636 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:04.636 Nvme0n1 : 5.00 10852.80 42.39 0.00 0.00 0.00 0.00 0.00 00:09:04.636 [2024-11-18T10:37:30.521Z] =================================================================================================================== 00:09:04.636 [2024-11-18T10:37:30.521Z] Total : 10852.80 42.39 0.00 0.00 0.00 0.00 0.00 00:09:04.636 00:09:06.015 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:06.015 Nvme0n1 : 6.00 10875.00 42.48 0.00 0.00 0.00 0.00 0.00 00:09:06.015 [2024-11-18T10:37:31.900Z] =================================================================================================================== 00:09:06.015 
[2024-11-18T10:37:31.900Z] Total : 10875.00 42.48 0.00 0.00 0.00 0.00 0.00 00:09:06.015 00:09:06.954 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:06.954 Nvme0n1 : 7.00 10895.57 42.56 0.00 0.00 0.00 0.00 0.00 00:09:06.954 [2024-11-18T10:37:32.839Z] =================================================================================================================== 00:09:06.954 [2024-11-18T10:37:32.839Z] Total : 10895.57 42.56 0.00 0.00 0.00 0.00 0.00 00:09:06.954 00:09:07.894 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:07.894 Nvme0n1 : 8.00 10903.12 42.59 0.00 0.00 0.00 0.00 0.00 00:09:07.894 [2024-11-18T10:37:33.779Z] =================================================================================================================== 00:09:07.894 [2024-11-18T10:37:33.779Z] Total : 10903.12 42.59 0.00 0.00 0.00 0.00 0.00 00:09:07.894 00:09:08.830 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:08.830 Nvme0n1 : 9.00 10919.33 42.65 0.00 0.00 0.00 0.00 0.00 00:09:08.830 [2024-11-18T10:37:34.715Z] =================================================================================================================== 00:09:08.830 [2024-11-18T10:37:34.715Z] Total : 10919.33 42.65 0.00 0.00 0.00 0.00 0.00 00:09:08.830 00:09:09.767 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:09.767 Nvme0n1 : 10.00 10926.00 42.68 0.00 0.00 0.00 0.00 0.00 00:09:09.767 [2024-11-18T10:37:35.652Z] =================================================================================================================== 00:09:09.767 [2024-11-18T10:37:35.652Z] Total : 10926.00 42.68 0.00 0.00 0.00 0.00 0.00 00:09:09.767 00:09:09.767 00:09:09.767 Latency(us) 00:09:09.767 [2024-11-18T10:37:35.652Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:09.767 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:09:09.767 Nvme0n1 : 10.01 10930.89 42.70 0.00 0.00 11701.48 2815.62 22816.24 00:09:09.767 [2024-11-18T10:37:35.652Z] =================================================================================================================== 00:09:09.767 [2024-11-18T10:37:35.652Z] Total : 10930.89 42.70 0.00 0.00 11701.48 2815.62 22816.24 00:09:09.767 { 00:09:09.767 "results": [ 00:09:09.767 { 00:09:09.767 "job": "Nvme0n1", 00:09:09.767 "core_mask": "0x2", 00:09:09.767 "workload": "randwrite", 00:09:09.767 "status": "finished", 00:09:09.767 "queue_depth": 128, 00:09:09.767 "io_size": 4096, 00:09:09.767 "runtime": 10.007239, 00:09:09.767 "iops": 10930.88713080601, 00:09:09.767 "mibps": 42.698777854710976, 00:09:09.767 "io_failed": 0, 00:09:09.767 "io_timeout": 0, 00:09:09.767 "avg_latency_us": 11701.48092956232, 00:09:09.767 "min_latency_us": 2815.6207407407405, 00:09:09.767 "max_latency_us": 22816.237037037037 00:09:09.767 } 00:09:09.767 ], 00:09:09.767 "core_count": 1 00:09:09.767 } 00:09:09.767 11:37:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2859122 00:09:09.767 11:37:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 2859122 ']' 00:09:09.767 11:37:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 2859122 00:09:09.767 11:37:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:09:09.767 11:37:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:09.767 11:37:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2859122 00:09:09.767 11:37:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:09.767 11:37:35 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:09.767 11:37:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2859122' 00:09:09.767 killing process with pid 2859122 00:09:09.767 11:37:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 2859122 00:09:09.767 Received shutdown signal, test time was about 10.000000 seconds 00:09:09.767 00:09:09.767 Latency(us) 00:09:09.767 [2024-11-18T10:37:35.652Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:09.767 [2024-11-18T10:37:35.652Z] =================================================================================================================== 00:09:09.767 [2024-11-18T10:37:35.652Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:09.767 11:37:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 2859122 00:09:10.704 11:37:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:10.961 11:37:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:11.219 11:37:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 990dae60-e818-42fe-a7fd-5b5b6ce73f30 00:09:11.219 11:37:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:11.787 11:37:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:09:11.787 11:37:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:09:11.787 11:37:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:11.787 [2024-11-18 11:37:37.667864] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:12.048 11:37:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 990dae60-e818-42fe-a7fd-5b5b6ce73f30 00:09:12.048 11:37:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:09:12.048 11:37:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 990dae60-e818-42fe-a7fd-5b5b6ce73f30 00:09:12.048 11:37:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:12.048 11:37:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:12.048 11:37:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:12.048 11:37:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:12.048 11:37:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:12.048 
11:37:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:12.048 11:37:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:12.048 11:37:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:12.048 11:37:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 990dae60-e818-42fe-a7fd-5b5b6ce73f30 00:09:12.308 request: 00:09:12.308 { 00:09:12.308 "uuid": "990dae60-e818-42fe-a7fd-5b5b6ce73f30", 00:09:12.308 "method": "bdev_lvol_get_lvstores", 00:09:12.308 "req_id": 1 00:09:12.308 } 00:09:12.308 Got JSON-RPC error response 00:09:12.308 response: 00:09:12.308 { 00:09:12.308 "code": -19, 00:09:12.308 "message": "No such device" 00:09:12.308 } 00:09:12.308 11:37:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:09:12.308 11:37:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:12.308 11:37:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:12.308 11:37:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:12.308 11:37:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:12.566 aio_bdev 00:09:12.566 11:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@87 -- # waitforbdev 9c1ed42a-fcc4-4ea3-a140-0b4f83f16d7a 00:09:12.566 11:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=9c1ed42a-fcc4-4ea3-a140-0b4f83f16d7a 00:09:12.566 11:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:12.566 11:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:09:12.566 11:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:12.566 11:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:12.566 11:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:12.824 11:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 9c1ed42a-fcc4-4ea3-a140-0b4f83f16d7a -t 2000 00:09:13.084 [ 00:09:13.084 { 00:09:13.084 "name": "9c1ed42a-fcc4-4ea3-a140-0b4f83f16d7a", 00:09:13.084 "aliases": [ 00:09:13.084 "lvs/lvol" 00:09:13.084 ], 00:09:13.084 "product_name": "Logical Volume", 00:09:13.084 "block_size": 4096, 00:09:13.084 "num_blocks": 38912, 00:09:13.084 "uuid": "9c1ed42a-fcc4-4ea3-a140-0b4f83f16d7a", 00:09:13.084 "assigned_rate_limits": { 00:09:13.084 "rw_ios_per_sec": 0, 00:09:13.084 "rw_mbytes_per_sec": 0, 00:09:13.084 "r_mbytes_per_sec": 0, 00:09:13.084 "w_mbytes_per_sec": 0 00:09:13.084 }, 00:09:13.084 "claimed": false, 00:09:13.084 "zoned": false, 00:09:13.084 "supported_io_types": { 00:09:13.084 "read": true, 00:09:13.084 "write": true, 00:09:13.084 "unmap": true, 00:09:13.084 "flush": false, 00:09:13.084 "reset": true, 00:09:13.084 
"nvme_admin": false, 00:09:13.084 "nvme_io": false, 00:09:13.084 "nvme_io_md": false, 00:09:13.084 "write_zeroes": true, 00:09:13.084 "zcopy": false, 00:09:13.084 "get_zone_info": false, 00:09:13.084 "zone_management": false, 00:09:13.084 "zone_append": false, 00:09:13.084 "compare": false, 00:09:13.084 "compare_and_write": false, 00:09:13.084 "abort": false, 00:09:13.084 "seek_hole": true, 00:09:13.084 "seek_data": true, 00:09:13.084 "copy": false, 00:09:13.084 "nvme_iov_md": false 00:09:13.084 }, 00:09:13.084 "driver_specific": { 00:09:13.084 "lvol": { 00:09:13.084 "lvol_store_uuid": "990dae60-e818-42fe-a7fd-5b5b6ce73f30", 00:09:13.084 "base_bdev": "aio_bdev", 00:09:13.084 "thin_provision": false, 00:09:13.084 "num_allocated_clusters": 38, 00:09:13.084 "snapshot": false, 00:09:13.084 "clone": false, 00:09:13.084 "esnap_clone": false 00:09:13.084 } 00:09:13.084 } 00:09:13.084 } 00:09:13.084 ] 00:09:13.084 11:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:09:13.084 11:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 990dae60-e818-42fe-a7fd-5b5b6ce73f30 00:09:13.084 11:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:13.344 11:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:13.344 11:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 990dae60-e818-42fe-a7fd-5b5b6ce73f30 00:09:13.344 11:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:13.604 11:37:39 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:13.604 11:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 9c1ed42a-fcc4-4ea3-a140-0b4f83f16d7a 00:09:13.863 11:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 990dae60-e818-42fe-a7fd-5b5b6ce73f30 00:09:14.123 11:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:14.383 11:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:14.641 00:09:14.641 real 0m19.530s 00:09:14.641 user 0m19.349s 00:09:14.641 sys 0m1.938s 00:09:14.641 11:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:14.641 11:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:14.641 ************************************ 00:09:14.641 END TEST lvs_grow_clean 00:09:14.641 ************************************ 00:09:14.641 11:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:14.641 11:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:14.641 11:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:14.641 11:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:14.641 ************************************ 
00:09:14.641 START TEST lvs_grow_dirty 00:09:14.641 ************************************ 00:09:14.641 11:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:09:14.641 11:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:14.641 11:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:14.642 11:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:14.642 11:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:14.642 11:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:14.642 11:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:14.642 11:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:14.642 11:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:14.642 11:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:14.901 11:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:14.901 11:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:15.160 11:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=299b443c-60a9-408f-8887-92939377293d 00:09:15.160 11:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 299b443c-60a9-408f-8887-92939377293d 00:09:15.160 11:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:15.420 11:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:15.420 11:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:15.420 11:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 299b443c-60a9-408f-8887-92939377293d lvol 150 00:09:15.680 11:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=e9fdb86c-4426-4481-9cec-f93b87754eed 00:09:15.680 11:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:15.680 11:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:15.938 [2024-11-18 11:37:41.781558] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 
102400 00:09:15.938 [2024-11-18 11:37:41.781687] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:15.938 true 00:09:15.938 11:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 299b443c-60a9-408f-8887-92939377293d 00:09:15.938 11:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:16.196 11:37:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:16.196 11:37:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:16.765 11:37:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e9fdb86c-4426-4481-9cec-f93b87754eed 00:09:17.025 11:37:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:17.284 [2024-11-18 11:37:42.937377] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:17.284 11:37:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:17.543 11:37:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2861533 00:09:17.543 11:37:43 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:17.543 11:37:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:17.543 11:37:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2861533 /var/tmp/bdevperf.sock 00:09:17.543 11:37:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2861533 ']' 00:09:17.543 11:37:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:17.543 11:37:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:17.543 11:37:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:17.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:17.543 11:37:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:17.543 11:37:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:17.543 [2024-11-18 11:37:43.310999] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:09:17.543 [2024-11-18 11:37:43.311128] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2861533 ] 00:09:17.801 [2024-11-18 11:37:43.451999] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:17.801 [2024-11-18 11:37:43.588252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:18.735 11:37:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:18.735 11:37:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:18.735 11:37:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:18.993 Nvme0n1 00:09:18.993 11:37:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:19.251 [ 00:09:19.251 { 00:09:19.251 "name": "Nvme0n1", 00:09:19.251 "aliases": [ 00:09:19.251 "e9fdb86c-4426-4481-9cec-f93b87754eed" 00:09:19.251 ], 00:09:19.251 "product_name": "NVMe disk", 00:09:19.251 "block_size": 4096, 00:09:19.251 "num_blocks": 38912, 00:09:19.251 "uuid": "e9fdb86c-4426-4481-9cec-f93b87754eed", 00:09:19.251 "numa_id": 0, 00:09:19.251 "assigned_rate_limits": { 00:09:19.251 "rw_ios_per_sec": 0, 00:09:19.251 "rw_mbytes_per_sec": 0, 00:09:19.251 "r_mbytes_per_sec": 0, 00:09:19.251 "w_mbytes_per_sec": 0 00:09:19.251 }, 00:09:19.251 "claimed": false, 00:09:19.251 "zoned": false, 00:09:19.251 "supported_io_types": { 00:09:19.251 "read": true, 
00:09:19.251 "write": true, 00:09:19.251 "unmap": true, 00:09:19.251 "flush": true, 00:09:19.251 "reset": true, 00:09:19.251 "nvme_admin": true, 00:09:19.251 "nvme_io": true, 00:09:19.251 "nvme_io_md": false, 00:09:19.251 "write_zeroes": true, 00:09:19.251 "zcopy": false, 00:09:19.251 "get_zone_info": false, 00:09:19.251 "zone_management": false, 00:09:19.251 "zone_append": false, 00:09:19.251 "compare": true, 00:09:19.251 "compare_and_write": true, 00:09:19.251 "abort": true, 00:09:19.251 "seek_hole": false, 00:09:19.251 "seek_data": false, 00:09:19.251 "copy": true, 00:09:19.251 "nvme_iov_md": false 00:09:19.251 }, 00:09:19.251 "memory_domains": [ 00:09:19.251 { 00:09:19.251 "dma_device_id": "system", 00:09:19.251 "dma_device_type": 1 00:09:19.251 } 00:09:19.251 ], 00:09:19.251 "driver_specific": { 00:09:19.251 "nvme": [ 00:09:19.251 { 00:09:19.251 "trid": { 00:09:19.251 "trtype": "TCP", 00:09:19.251 "adrfam": "IPv4", 00:09:19.251 "traddr": "10.0.0.2", 00:09:19.251 "trsvcid": "4420", 00:09:19.251 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:19.251 }, 00:09:19.251 "ctrlr_data": { 00:09:19.251 "cntlid": 1, 00:09:19.251 "vendor_id": "0x8086", 00:09:19.251 "model_number": "SPDK bdev Controller", 00:09:19.251 "serial_number": "SPDK0", 00:09:19.251 "firmware_revision": "25.01", 00:09:19.251 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:19.251 "oacs": { 00:09:19.251 "security": 0, 00:09:19.251 "format": 0, 00:09:19.251 "firmware": 0, 00:09:19.251 "ns_manage": 0 00:09:19.251 }, 00:09:19.251 "multi_ctrlr": true, 00:09:19.251 "ana_reporting": false 00:09:19.251 }, 00:09:19.251 "vs": { 00:09:19.251 "nvme_version": "1.3" 00:09:19.251 }, 00:09:19.251 "ns_data": { 00:09:19.251 "id": 1, 00:09:19.251 "can_share": true 00:09:19.251 } 00:09:19.251 } 00:09:19.251 ], 00:09:19.251 "mp_policy": "active_passive" 00:09:19.251 } 00:09:19.251 } 00:09:19.251 ] 00:09:19.251 11:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=2861712 00:09:19.251 11:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:19.251 11:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:19.509 Running I/O for 10 seconds... 00:09:20.444 Latency(us) 00:09:20.444 [2024-11-18T10:37:46.329Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:20.444 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:20.444 Nvme0n1 : 1.00 10669.00 41.68 0.00 0.00 0.00 0.00 0.00 00:09:20.444 [2024-11-18T10:37:46.329Z] =================================================================================================================== 00:09:20.444 [2024-11-18T10:37:46.329Z] Total : 10669.00 41.68 0.00 0.00 0.00 0.00 0.00 00:09:20.444 00:09:21.379 11:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 299b443c-60a9-408f-8887-92939377293d 00:09:21.379 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:21.379 Nvme0n1 : 2.00 10732.00 41.92 0.00 0.00 0.00 0.00 0.00 00:09:21.379 [2024-11-18T10:37:47.264Z] =================================================================================================================== 00:09:21.379 [2024-11-18T10:37:47.264Z] Total : 10732.00 41.92 0.00 0.00 0.00 0.00 0.00 00:09:21.379 00:09:21.637 true 00:09:21.637 11:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 299b443c-60a9-408f-8887-92939377293d 00:09:21.637 11:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:09:21.895 11:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:21.895 11:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:21.895 11:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2861712 00:09:22.461 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:22.461 Nvme0n1 : 3.00 10753.00 42.00 0.00 0.00 0.00 0.00 0.00 00:09:22.461 [2024-11-18T10:37:48.346Z] =================================================================================================================== 00:09:22.461 [2024-11-18T10:37:48.346Z] Total : 10753.00 42.00 0.00 0.00 0.00 0.00 0.00 00:09:22.461 00:09:23.397 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:23.397 Nvme0n1 : 4.00 10803.75 42.20 0.00 0.00 0.00 0.00 0.00 00:09:23.397 [2024-11-18T10:37:49.282Z] =================================================================================================================== 00:09:23.397 [2024-11-18T10:37:49.282Z] Total : 10803.75 42.20 0.00 0.00 0.00 0.00 0.00 00:09:23.397 00:09:24.779 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:24.779 Nvme0n1 : 5.00 10827.40 42.29 0.00 0.00 0.00 0.00 0.00 00:09:24.779 [2024-11-18T10:37:50.664Z] =================================================================================================================== 00:09:24.779 [2024-11-18T10:37:50.664Z] Total : 10827.40 42.29 0.00 0.00 0.00 0.00 0.00 00:09:24.779 00:09:25.390 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:25.390 Nvme0n1 : 6.00 10822.00 42.27 0.00 0.00 0.00 0.00 0.00 00:09:25.390 [2024-11-18T10:37:51.275Z] =================================================================================================================== 00:09:25.390 
[2024-11-18T10:37:51.275Z] Total : 10822.00 42.27 0.00 0.00 0.00 0.00 0.00 00:09:25.390 00:09:26.770 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:26.770 Nvme0n1 : 7.00 10836.29 42.33 0.00 0.00 0.00 0.00 0.00 00:09:26.770 [2024-11-18T10:37:52.655Z] =================================================================================================================== 00:09:26.770 [2024-11-18T10:37:52.655Z] Total : 10836.29 42.33 0.00 0.00 0.00 0.00 0.00 00:09:26.770 00:09:27.711 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:27.711 Nvme0n1 : 8.00 10862.88 42.43 0.00 0.00 0.00 0.00 0.00 00:09:27.711 [2024-11-18T10:37:53.596Z] =================================================================================================================== 00:09:27.711 [2024-11-18T10:37:53.596Z] Total : 10862.88 42.43 0.00 0.00 0.00 0.00 0.00 00:09:27.711 00:09:28.650 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:28.650 Nvme0n1 : 9.00 10897.67 42.57 0.00 0.00 0.00 0.00 0.00 00:09:28.650 [2024-11-18T10:37:54.535Z] =================================================================================================================== 00:09:28.650 [2024-11-18T10:37:54.535Z] Total : 10897.67 42.57 0.00 0.00 0.00 0.00 0.00 00:09:28.650 00:09:29.641 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:29.641 Nvme0n1 : 10.00 10912.80 42.63 0.00 0.00 0.00 0.00 0.00 00:09:29.641 [2024-11-18T10:37:55.526Z] =================================================================================================================== 00:09:29.641 [2024-11-18T10:37:55.526Z] Total : 10912.80 42.63 0.00 0.00 0.00 0.00 0.00 00:09:29.641 00:09:29.641 00:09:29.641 Latency(us) 00:09:29.641 [2024-11-18T10:37:55.526Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:29.641 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:09:29.641 Nvme0n1 : 10.01 10918.81 42.65 0.00 0.00 11716.18 2803.48 23107.51 00:09:29.641 [2024-11-18T10:37:55.526Z] =================================================================================================================== 00:09:29.641 [2024-11-18T10:37:55.526Z] Total : 10918.81 42.65 0.00 0.00 11716.18 2803.48 23107.51 00:09:29.641 { 00:09:29.641 "results": [ 00:09:29.641 { 00:09:29.641 "job": "Nvme0n1", 00:09:29.641 "core_mask": "0x2", 00:09:29.641 "workload": "randwrite", 00:09:29.641 "status": "finished", 00:09:29.641 "queue_depth": 128, 00:09:29.641 "io_size": 4096, 00:09:29.641 "runtime": 10.006216, 00:09:29.641 "iops": 10918.812865922542, 00:09:29.641 "mibps": 42.65161275750993, 00:09:29.641 "io_failed": 0, 00:09:29.641 "io_timeout": 0, 00:09:29.641 "avg_latency_us": 11716.183397362363, 00:09:29.641 "min_latency_us": 2803.4844444444443, 00:09:29.641 "max_latency_us": 23107.508148148147 00:09:29.641 } 00:09:29.641 ], 00:09:29.641 "core_count": 1 00:09:29.641 } 00:09:29.641 11:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2861533 00:09:29.641 11:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 2861533 ']' 00:09:29.641 11:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 2861533 00:09:29.641 11:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:09:29.641 11:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:29.641 11:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2861533 00:09:29.641 11:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:29.641 11:37:55 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:29.641 11:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2861533' 00:09:29.641 killing process with pid 2861533 00:09:29.641 11:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 2861533 00:09:29.641 Received shutdown signal, test time was about 10.000000 seconds 00:09:29.641 00:09:29.641 Latency(us) 00:09:29.641 [2024-11-18T10:37:55.526Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:29.641 [2024-11-18T10:37:55.526Z] =================================================================================================================== 00:09:29.641 [2024-11-18T10:37:55.526Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:29.641 11:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 2861533 00:09:30.578 11:37:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:30.836 11:37:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:31.094 11:37:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 299b443c-60a9-408f-8887-92939377293d 00:09:31.094 11:37:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:31.353 11:37:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:09:31.353 11:37:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:31.353 11:37:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2858546 00:09:31.353 11:37:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2858546 00:09:31.353 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2858546 Killed "${NVMF_APP[@]}" "$@" 00:09:31.353 11:37:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:31.353 11:37:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:31.353 11:37:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:31.353 11:37:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:31.353 11:37:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:31.353 11:37:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=2863178 00:09:31.353 11:37:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:31.353 11:37:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 2863178 00:09:31.353 11:37:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2863178 ']' 00:09:31.353 11:37:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:31.353 11:37:57 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:31.353 11:37:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:31.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:31.353 11:37:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:31.353 11:37:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:31.619 [2024-11-18 11:37:57.273679] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:09:31.619 [2024-11-18 11:37:57.273835] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:31.619 [2024-11-18 11:37:57.425325] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.877 [2024-11-18 11:37:57.561313] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:31.877 [2024-11-18 11:37:57.561413] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:31.877 [2024-11-18 11:37:57.561438] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:31.877 [2024-11-18 11:37:57.561465] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:31.877 [2024-11-18 11:37:57.561484] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:31.877 [2024-11-18 11:37:57.563132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.444 11:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:32.444 11:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:32.444 11:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:32.444 11:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:32.444 11:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:32.444 11:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:32.444 11:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:32.703 [2024-11-18 11:37:58.559113] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:32.703 [2024-11-18 11:37:58.559352] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:32.703 [2024-11-18 11:37:58.559437] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:32.703 11:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:32.703 11:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev e9fdb86c-4426-4481-9cec-f93b87754eed 00:09:32.703 11:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=e9fdb86c-4426-4481-9cec-f93b87754eed 
00:09:32.703 11:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:32.703 11:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:32.703 11:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:32.703 11:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:32.703 11:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:33.273 11:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e9fdb86c-4426-4481-9cec-f93b87754eed -t 2000 00:09:33.273 [ 00:09:33.273 { 00:09:33.273 "name": "e9fdb86c-4426-4481-9cec-f93b87754eed", 00:09:33.273 "aliases": [ 00:09:33.273 "lvs/lvol" 00:09:33.273 ], 00:09:33.273 "product_name": "Logical Volume", 00:09:33.273 "block_size": 4096, 00:09:33.273 "num_blocks": 38912, 00:09:33.273 "uuid": "e9fdb86c-4426-4481-9cec-f93b87754eed", 00:09:33.273 "assigned_rate_limits": { 00:09:33.273 "rw_ios_per_sec": 0, 00:09:33.273 "rw_mbytes_per_sec": 0, 00:09:33.273 "r_mbytes_per_sec": 0, 00:09:33.273 "w_mbytes_per_sec": 0 00:09:33.273 }, 00:09:33.273 "claimed": false, 00:09:33.273 "zoned": false, 00:09:33.273 "supported_io_types": { 00:09:33.273 "read": true, 00:09:33.273 "write": true, 00:09:33.273 "unmap": true, 00:09:33.273 "flush": false, 00:09:33.273 "reset": true, 00:09:33.273 "nvme_admin": false, 00:09:33.273 "nvme_io": false, 00:09:33.273 "nvme_io_md": false, 00:09:33.273 "write_zeroes": true, 00:09:33.273 "zcopy": false, 00:09:33.273 "get_zone_info": false, 00:09:33.273 "zone_management": false, 00:09:33.273 "zone_append": 
false, 00:09:33.273 "compare": false, 00:09:33.273 "compare_and_write": false, 00:09:33.273 "abort": false, 00:09:33.273 "seek_hole": true, 00:09:33.273 "seek_data": true, 00:09:33.273 "copy": false, 00:09:33.273 "nvme_iov_md": false 00:09:33.273 }, 00:09:33.273 "driver_specific": { 00:09:33.273 "lvol": { 00:09:33.273 "lvol_store_uuid": "299b443c-60a9-408f-8887-92939377293d", 00:09:33.273 "base_bdev": "aio_bdev", 00:09:33.273 "thin_provision": false, 00:09:33.273 "num_allocated_clusters": 38, 00:09:33.273 "snapshot": false, 00:09:33.273 "clone": false, 00:09:33.273 "esnap_clone": false 00:09:33.273 } 00:09:33.273 } 00:09:33.273 } 00:09:33.273 ] 00:09:33.273 11:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:33.273 11:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 299b443c-60a9-408f-8887-92939377293d 00:09:33.273 11:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:33.540 11:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:33.540 11:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 299b443c-60a9-408f-8887-92939377293d 00:09:33.540 11:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:33.799 11:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:33.799 11:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:09:34.059 [2024-11-18 11:37:59.936086] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:34.318 11:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 299b443c-60a9-408f-8887-92939377293d 00:09:34.318 11:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:09:34.318 11:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 299b443c-60a9-408f-8887-92939377293d 00:09:34.318 11:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:34.318 11:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:34.318 11:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:34.318 11:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:34.318 11:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:34.318 11:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:34.318 11:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:34.318 11:37:59 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:34.318 11:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 299b443c-60a9-408f-8887-92939377293d 00:09:34.578 request: 00:09:34.578 { 00:09:34.578 "uuid": "299b443c-60a9-408f-8887-92939377293d", 00:09:34.578 "method": "bdev_lvol_get_lvstores", 00:09:34.578 "req_id": 1 00:09:34.578 } 00:09:34.578 Got JSON-RPC error response 00:09:34.578 response: 00:09:34.578 { 00:09:34.578 "code": -19, 00:09:34.578 "message": "No such device" 00:09:34.578 } 00:09:34.578 11:38:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:09:34.578 11:38:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:34.578 11:38:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:34.578 11:38:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:34.578 11:38:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:34.839 aio_bdev 00:09:34.839 11:38:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev e9fdb86c-4426-4481-9cec-f93b87754eed 00:09:34.839 11:38:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=e9fdb86c-4426-4481-9cec-f93b87754eed 00:09:34.839 11:38:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:34.839 11:38:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:34.839 11:38:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:34.839 11:38:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:34.839 11:38:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:35.099 11:38:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e9fdb86c-4426-4481-9cec-f93b87754eed -t 2000 00:09:35.358 [ 00:09:35.358 { 00:09:35.358 "name": "e9fdb86c-4426-4481-9cec-f93b87754eed", 00:09:35.358 "aliases": [ 00:09:35.358 "lvs/lvol" 00:09:35.358 ], 00:09:35.358 "product_name": "Logical Volume", 00:09:35.358 "block_size": 4096, 00:09:35.358 "num_blocks": 38912, 00:09:35.358 "uuid": "e9fdb86c-4426-4481-9cec-f93b87754eed", 00:09:35.358 "assigned_rate_limits": { 00:09:35.358 "rw_ios_per_sec": 0, 00:09:35.358 "rw_mbytes_per_sec": 0, 00:09:35.358 "r_mbytes_per_sec": 0, 00:09:35.358 "w_mbytes_per_sec": 0 00:09:35.358 }, 00:09:35.358 "claimed": false, 00:09:35.358 "zoned": false, 00:09:35.358 "supported_io_types": { 00:09:35.358 "read": true, 00:09:35.358 "write": true, 00:09:35.358 "unmap": true, 00:09:35.358 "flush": false, 00:09:35.358 "reset": true, 00:09:35.358 "nvme_admin": false, 00:09:35.358 "nvme_io": false, 00:09:35.358 "nvme_io_md": false, 00:09:35.358 "write_zeroes": true, 00:09:35.358 "zcopy": false, 00:09:35.358 "get_zone_info": false, 00:09:35.358 "zone_management": false, 00:09:35.358 "zone_append": false, 00:09:35.358 "compare": false, 00:09:35.358 "compare_and_write": false, 
00:09:35.358 "abort": false, 00:09:35.358 "seek_hole": true, 00:09:35.358 "seek_data": true, 00:09:35.358 "copy": false, 00:09:35.358 "nvme_iov_md": false 00:09:35.358 }, 00:09:35.358 "driver_specific": { 00:09:35.358 "lvol": { 00:09:35.358 "lvol_store_uuid": "299b443c-60a9-408f-8887-92939377293d", 00:09:35.358 "base_bdev": "aio_bdev", 00:09:35.358 "thin_provision": false, 00:09:35.358 "num_allocated_clusters": 38, 00:09:35.358 "snapshot": false, 00:09:35.358 "clone": false, 00:09:35.358 "esnap_clone": false 00:09:35.358 } 00:09:35.358 } 00:09:35.358 } 00:09:35.358 ] 00:09:35.358 11:38:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:35.358 11:38:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 299b443c-60a9-408f-8887-92939377293d 00:09:35.358 11:38:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:35.617 11:38:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:35.617 11:38:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 299b443c-60a9-408f-8887-92939377293d 00:09:35.617 11:38:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:35.877 11:38:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:35.877 11:38:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e9fdb86c-4426-4481-9cec-f93b87754eed 00:09:36.137 11:38:01 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 299b443c-60a9-408f-8887-92939377293d 00:09:36.396 11:38:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:36.657 11:38:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:36.918 00:09:36.918 real 0m22.235s 00:09:36.918 user 0m56.221s 00:09:36.918 sys 0m4.705s 00:09:36.918 11:38:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:36.918 11:38:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:36.918 ************************************ 00:09:36.918 END TEST lvs_grow_dirty 00:09:36.918 ************************************ 00:09:36.918 11:38:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:36.918 11:38:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:09:36.918 11:38:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:09:36.918 11:38:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:09:36.918 11:38:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:36.918 11:38:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:09:36.918 11:38:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:09:36.918 11:38:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@824 -- # for n in $shm_files 00:09:36.918 11:38:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:36.918 nvmf_trace.0 00:09:36.918 11:38:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:09:36.918 11:38:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:36.918 11:38:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:36.918 11:38:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:09:36.918 11:38:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:36.918 11:38:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:09:36.918 11:38:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:36.918 11:38:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:36.918 rmmod nvme_tcp 00:09:36.918 rmmod nvme_fabrics 00:09:36.918 rmmod nvme_keyring 00:09:36.918 11:38:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:36.918 11:38:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:09:36.918 11:38:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:09:36.918 11:38:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 2863178 ']' 00:09:36.918 11:38:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 2863178 00:09:36.918 11:38:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 2863178 ']' 00:09:36.918 11:38:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 2863178 
00:09:36.918 11:38:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:09:36.918 11:38:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:36.918 11:38:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2863178 00:09:36.918 11:38:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:36.919 11:38:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:36.919 11:38:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2863178' 00:09:36.919 killing process with pid 2863178 00:09:36.919 11:38:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 2863178 00:09:36.919 11:38:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 2863178 00:09:38.299 11:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:38.299 11:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:38.299 11:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:38.299 11:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:09:38.299 11:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:09:38.299 11:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:38.299 11:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:09:38.299 11:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:38.299 11:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:09:38.299 11:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:38.299 11:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:38.299 11:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:40.206 11:38:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:40.206 00:09:40.206 real 0m48.927s 00:09:40.206 user 1m23.532s 00:09:40.206 sys 0m8.697s 00:09:40.206 11:38:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:40.207 11:38:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:40.207 ************************************ 00:09:40.207 END TEST nvmf_lvs_grow 00:09:40.207 ************************************ 00:09:40.207 11:38:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:40.207 11:38:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:40.207 11:38:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:40.207 11:38:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:40.207 ************************************ 00:09:40.207 START TEST nvmf_bdev_io_wait 00:09:40.207 ************************************ 00:09:40.207 11:38:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:40.207 * Looking for test storage... 
00:09:40.207 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:40.207 11:38:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:40.207 11:38:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:09:40.207 11:38:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:40.207 11:38:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:40.207 11:38:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:40.207 11:38:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:40.207 11:38:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:40.207 11:38:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:09:40.207 11:38:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:09:40.207 11:38:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:09:40.207 11:38:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:09:40.207 11:38:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:09:40.207 11:38:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:09:40.207 11:38:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:09:40.207 11:38:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:40.207 11:38:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:09:40.207 11:38:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # 
: 1 00:09:40.207 11:38:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:40.207 11:38:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:40.207 11:38:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:09:40.207 11:38:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:09:40.207 11:38:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:40.207 11:38:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:09:40.207 11:38:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:09:40.207 11:38:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:09:40.207 11:38:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:09:40.207 11:38:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:40.207 11:38:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:09:40.207 11:38:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:09:40.207 11:38:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:40.207 11:38:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:40.207 11:38:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:09:40.207 11:38:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:40.207 11:38:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:40.207 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.207 --rc genhtml_branch_coverage=1 00:09:40.207 --rc genhtml_function_coverage=1 00:09:40.207 --rc genhtml_legend=1 00:09:40.207 --rc geninfo_all_blocks=1 00:09:40.207 --rc geninfo_unexecuted_blocks=1 00:09:40.207 00:09:40.207 ' 00:09:40.207 11:38:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:40.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.207 --rc genhtml_branch_coverage=1 00:09:40.207 --rc genhtml_function_coverage=1 00:09:40.207 --rc genhtml_legend=1 00:09:40.207 --rc geninfo_all_blocks=1 00:09:40.207 --rc geninfo_unexecuted_blocks=1 00:09:40.207 00:09:40.207 ' 00:09:40.207 11:38:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:40.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.207 --rc genhtml_branch_coverage=1 00:09:40.207 --rc genhtml_function_coverage=1 00:09:40.207 --rc genhtml_legend=1 00:09:40.207 --rc geninfo_all_blocks=1 00:09:40.207 --rc geninfo_unexecuted_blocks=1 00:09:40.207 00:09:40.207 ' 00:09:40.207 11:38:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:40.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.207 --rc genhtml_branch_coverage=1 00:09:40.207 --rc genhtml_function_coverage=1 00:09:40.207 --rc genhtml_legend=1 00:09:40.207 --rc geninfo_all_blocks=1 00:09:40.207 --rc geninfo_unexecuted_blocks=1 00:09:40.207 00:09:40.207 ' 00:09:40.207 11:38:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:40.207 11:38:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:09:40.207 11:38:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:40.207 11:38:06 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:40.207 11:38:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:40.207 11:38:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:40.207 11:38:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:40.207 11:38:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:40.207 11:38:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:40.207 11:38:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:40.207 11:38:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:40.207 11:38:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:40.468 11:38:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:40.468 11:38:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:40.468 11:38:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:40.468 11:38:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:40.468 11:38:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:40.468 11:38:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:40.468 11:38:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:40.468 11:38:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:09:40.468 11:38:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:40.468 11:38:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:40.468 11:38:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:40.468 11:38:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.468 11:38:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.468 11:38:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.468 11:38:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:40.468 11:38:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.468 11:38:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:09:40.468 11:38:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:40.468 11:38:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:40.468 11:38:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:40.468 11:38:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:09:40.468 11:38:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:40.468 11:38:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:40.468 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:40.468 11:38:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:40.468 11:38:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:40.468 11:38:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:40.468 11:38:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:40.468 11:38:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:40.468 11:38:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:40.468 11:38:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:40.468 11:38:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:40.468 11:38:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:40.468 11:38:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:40.468 11:38:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:40.468 11:38:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:40.468 11:38:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:40.468 11:38:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:09:40.468 11:38:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:40.468 11:38:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:40.468 11:38:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:09:40.468 11:38:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:42.374 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:42.374 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:09:42.374 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:42.374 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:42.374 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:42.374 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:42.374 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:42.374 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:09:42.374 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:42.374 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:09:42.374 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:09:42.374 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:09:42.374 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:09:42.374 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 
00:09:42.374 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:09:42.374 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:42.374 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:42.374 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:42.374 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:42.374 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:42.374 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:42.374 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:42.374 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:42.374 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:42.374 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:42.374 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:42.374 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:42.374 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:42.374 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:42.374 11:38:08 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:42.374 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:42.374 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:42.374 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:42.374 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:42.374 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:42.374 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:42.374 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:42.374 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:42.374 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:42.374 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:42.374 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:42.374 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:42.374 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:42.374 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:42.374 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:42.374 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:42.374 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:42.374 11:38:08 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:42.374 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:42.374 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:42.374 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:42.374 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:42.374 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:42.374 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:42.374 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:42.374 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:42.374 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:42.374 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:42.374 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:42.374 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:42.374 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:42.374 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:42.374 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:42.374 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:42.374 
11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:42.374 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:42.374 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:42.374 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:42.374 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:42.374 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:42.374 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:42.374 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:42.374 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:42.374 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:09:42.374 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:42.374 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:42.374 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:42.374 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:42.374 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:42.374 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:42.374 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:42.374 11:38:08 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:42.374 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:42.374 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:42.374 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:42.374 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:42.374 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:42.374 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:42.374 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:42.374 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:42.374 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:42.374 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:42.374 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:42.374 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:42.374 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:42.374 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:42.634 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:09:42.634 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:42.634 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:42.634 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:42.634 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:42.634 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.224 ms 00:09:42.634 00:09:42.634 --- 10.0.0.2 ping statistics --- 00:09:42.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:42.634 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:09:42.634 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:42.634 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:42.634 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:09:42.634 00:09:42.634 --- 10.0.0.1 ping statistics --- 00:09:42.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:42.634 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:09:42.634 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:42.634 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:09:42.634 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:42.634 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:42.634 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:42.634 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:42.634 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:42.634 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:42.634 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:42.634 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:42.634 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:42.634 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:42.634 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:42.634 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=2865974 00:09:42.634 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@510 -- # waitforlisten 2865974 00:09:42.634 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:42.634 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 2865974 ']' 00:09:42.634 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:42.634 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:42.634 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:42.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:42.634 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:42.634 11:38:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:42.634 [2024-11-18 11:38:08.417429] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:09:42.634 [2024-11-18 11:38:08.417615] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:42.894 [2024-11-18 11:38:08.576744] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:42.894 [2024-11-18 11:38:08.721061] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:42.894 [2024-11-18 11:38:08.721156] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:42.894 [2024-11-18 11:38:08.721183] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:42.894 [2024-11-18 11:38:08.721207] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:42.894 [2024-11-18 11:38:08.721227] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:42.894 [2024-11-18 11:38:08.724143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:42.894 [2024-11-18 11:38:08.724212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:42.894 [2024-11-18 11:38:08.724306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.894 [2024-11-18 11:38:08.724312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:43.839 11:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:43.839 11:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:09:43.839 11:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:43.839 11:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:43.839 11:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:43.839 11:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:43.839 11:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:43.839 11:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.839 11:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:43.839 11:38:09 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.839 11:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:43.839 11:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.839 11:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:43.839 11:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.839 11:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:43.839 11:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.839 11:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:43.839 [2024-11-18 11:38:09.657727] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:43.839 11:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.839 11:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:43.839 11:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.839 11:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:44.099 Malloc0 00:09:44.099 11:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.099 11:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:44.099 11:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.099 
11:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:44.099 11:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.099 11:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:44.099 11:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.099 11:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:44.099 11:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.099 11:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:44.099 11:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.099 11:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:44.099 [2024-11-18 11:38:09.764791] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:44.099 11:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.099 11:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2866135 00:09:44.099 11:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2866137 00:09:44.099 11:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:44.099 11:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 
00:09:44.099 11:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:44.099 11:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:44.099 11:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2866139 00:09:44.100 11:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:44.100 11:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:44.100 11:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:44.100 11:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:44.100 { 00:09:44.100 "params": { 00:09:44.100 "name": "Nvme$subsystem", 00:09:44.100 "trtype": "$TEST_TRANSPORT", 00:09:44.100 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:44.100 "adrfam": "ipv4", 00:09:44.100 "trsvcid": "$NVMF_PORT", 00:09:44.100 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:44.100 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:44.100 "hdgst": ${hdgst:-false}, 00:09:44.100 "ddgst": ${ddgst:-false} 00:09:44.100 }, 00:09:44.100 "method": "bdev_nvme_attach_controller" 00:09:44.100 } 00:09:44.100 EOF 00:09:44.100 )") 00:09:44.100 11:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:44.100 11:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:44.100 11:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:44.100 11:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2866141 00:09:44.100 11:38:09 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:44.100 { 00:09:44.100 "params": { 00:09:44.100 "name": "Nvme$subsystem", 00:09:44.100 "trtype": "$TEST_TRANSPORT", 00:09:44.100 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:44.100 "adrfam": "ipv4", 00:09:44.100 "trsvcid": "$NVMF_PORT", 00:09:44.100 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:44.100 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:44.100 "hdgst": ${hdgst:-false}, 00:09:44.100 "ddgst": ${ddgst:-false} 00:09:44.100 }, 00:09:44.100 "method": "bdev_nvme_attach_controller" 00:09:44.100 } 00:09:44.100 EOF 00:09:44.100 )") 00:09:44.100 11:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:44.100 11:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:44.100 11:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:44.100 11:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:44.100 11:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:44.100 11:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:44.100 11:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:44.100 11:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:44.100 11:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:44.100 11:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:44.100 { 00:09:44.100 "params": { 00:09:44.100 "name": "Nvme$subsystem", 00:09:44.100 "trtype": "$TEST_TRANSPORT", 00:09:44.100 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:44.100 "adrfam": "ipv4", 00:09:44.100 "trsvcid": "$NVMF_PORT", 00:09:44.100 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:44.100 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:44.100 "hdgst": ${hdgst:-false}, 00:09:44.100 "ddgst": ${ddgst:-false} 00:09:44.100 }, 00:09:44.100 "method": "bdev_nvme_attach_controller" 00:09:44.100 } 00:09:44.100 EOF 00:09:44.100 )") 00:09:44.100 11:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:44.100 11:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:44.100 11:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:44.100 11:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:44.100 11:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:44.100 { 00:09:44.100 "params": { 00:09:44.100 "name": "Nvme$subsystem", 00:09:44.100 "trtype": "$TEST_TRANSPORT", 00:09:44.100 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:44.100 "adrfam": "ipv4", 00:09:44.100 "trsvcid": "$NVMF_PORT", 00:09:44.100 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:44.100 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:44.100 "hdgst": ${hdgst:-false}, 00:09:44.100 "ddgst": ${ddgst:-false} 00:09:44.100 }, 00:09:44.100 "method": "bdev_nvme_attach_controller" 00:09:44.100 } 00:09:44.100 EOF 00:09:44.100 )") 00:09:44.100 11:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:44.100 11:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2866135 00:09:44.100 11:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@582 -- # cat 00:09:44.100 11:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:44.100 11:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:44.100 11:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:44.100 11:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:44.100 11:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:44.100 11:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:44.100 "params": { 00:09:44.100 "name": "Nvme1", 00:09:44.100 "trtype": "tcp", 00:09:44.100 "traddr": "10.0.0.2", 00:09:44.100 "adrfam": "ipv4", 00:09:44.100 "trsvcid": "4420", 00:09:44.100 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:44.100 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:44.100 "hdgst": false, 00:09:44.100 "ddgst": false 00:09:44.100 }, 00:09:44.100 "method": "bdev_nvme_attach_controller" 00:09:44.100 }' 00:09:44.100 11:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:44.100 11:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:44.100 "params": { 00:09:44.100 "name": "Nvme1", 00:09:44.100 "trtype": "tcp", 00:09:44.100 "traddr": "10.0.0.2", 00:09:44.100 "adrfam": "ipv4", 00:09:44.100 "trsvcid": "4420", 00:09:44.100 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:44.100 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:44.100 "hdgst": false, 00:09:44.100 "ddgst": false 00:09:44.100 }, 00:09:44.100 "method": "bdev_nvme_attach_controller" 00:09:44.100 }' 00:09:44.100 11:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:44.100 11:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:44.100 "params": { 00:09:44.100 "name": "Nvme1", 00:09:44.100 "trtype": "tcp", 
00:09:44.100 "traddr": "10.0.0.2", 00:09:44.100 "adrfam": "ipv4", 00:09:44.100 "trsvcid": "4420", 00:09:44.100 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:44.100 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:44.100 "hdgst": false, 00:09:44.100 "ddgst": false 00:09:44.100 }, 00:09:44.100 "method": "bdev_nvme_attach_controller" 00:09:44.100 }' 00:09:44.100 11:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:44.100 11:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:44.100 "params": { 00:09:44.100 "name": "Nvme1", 00:09:44.100 "trtype": "tcp", 00:09:44.100 "traddr": "10.0.0.2", 00:09:44.100 "adrfam": "ipv4", 00:09:44.100 "trsvcid": "4420", 00:09:44.100 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:44.100 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:44.100 "hdgst": false, 00:09:44.100 "ddgst": false 00:09:44.100 }, 00:09:44.100 "method": "bdev_nvme_attach_controller" 00:09:44.100 }' 00:09:44.100 [2024-11-18 11:38:09.855170] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:09:44.100 [2024-11-18 11:38:09.855168] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:09:44.100 [2024-11-18 11:38:09.855168] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:09:44.100 [2024-11-18 11:38:09.855326] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:44.100 [2024-11-18 11:38:09.855329] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:44.100 [2024-11-18 11:38:09.855327] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:44.100 [2024-11-18 11:38:09.856263] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:09:44.100 [2024-11-18 11:38:09.856395] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:44.359 [2024-11-18 11:38:10.115743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:44.359 [2024-11-18 11:38:10.218103] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:44.359 [2024-11-18 11:38:10.239140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:44.617 [2024-11-18 11:38:10.320014] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:44.617 [2024-11-18 11:38:10.343334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:44.617 [2024-11-18 11:38:10.395664] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:44.617 [2024-11-18 11:38:10.440846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on 
core 4 00:09:44.876 [2024-11-18 11:38:10.514122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:09:44.876 Running I/O for 1 seconds... 00:09:45.136 Running I/O for 1 seconds... 00:09:45.136 Running I/O for 1 seconds... 00:09:45.136 Running I/O for 1 seconds... 00:09:46.076 148800.00 IOPS, 581.25 MiB/s 00:09:46.076 Latency(us) 00:09:46.076 [2024-11-18T10:38:11.961Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:46.076 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:46.076 Nvme1n1 : 1.00 148491.12 580.04 0.00 0.00 857.60 377.74 2038.90 00:09:46.076 [2024-11-18T10:38:11.961Z] =================================================================================================================== 00:09:46.076 [2024-11-18T10:38:11.961Z] Total : 148491.12 580.04 0.00 0.00 857.60 377.74 2038.90 00:09:46.076 8125.00 IOPS, 31.74 MiB/s 00:09:46.076 Latency(us) 00:09:46.076 [2024-11-18T10:38:11.961Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:46.076 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:46.076 Nvme1n1 : 1.01 8180.44 31.95 0.00 0.00 15566.90 3568.07 22816.24 00:09:46.076 [2024-11-18T10:38:11.961Z] =================================================================================================================== 00:09:46.076 [2024-11-18T10:38:11.961Z] Total : 8180.44 31.95 0.00 0.00 15566.90 3568.07 22816.24 00:09:46.076 5170.00 IOPS, 20.20 MiB/s 00:09:46.076 Latency(us) 00:09:46.076 [2024-11-18T10:38:11.961Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:46.076 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:46.076 Nvme1n1 : 1.01 5231.02 20.43 0.00 0.00 24300.69 11990.66 35146.71 00:09:46.076 [2024-11-18T10:38:11.961Z] =================================================================================================================== 00:09:46.076 
[2024-11-18T10:38:11.961Z] Total : 5231.02 20.43 0.00 0.00 24300.69 11990.66 35146.71 00:09:46.336 7265.00 IOPS, 28.38 MiB/s 00:09:46.336 Latency(us) 00:09:46.336 [2024-11-18T10:38:12.221Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:46.336 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:46.336 Nvme1n1 : 1.01 7330.90 28.64 0.00 0.00 17368.95 3422.44 25631.86 00:09:46.336 [2024-11-18T10:38:12.221Z] =================================================================================================================== 00:09:46.336 [2024-11-18T10:38:12.221Z] Total : 7330.90 28.64 0.00 0.00 17368.95 3422.44 25631.86 00:09:46.596 11:38:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2866137 00:09:46.855 11:38:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2866139 00:09:46.855 11:38:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2866141 00:09:46.855 11:38:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:46.855 11:38:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.855 11:38:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:46.855 11:38:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.855 11:38:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:46.855 11:38:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:46.855 11:38:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:46.855 11:38:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:09:46.855 11:38:12 
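The MiB/s column in the bdevperf latency tables above follows directly from the IOPS figure and the 4096-byte IO size used by every job (MiB/s = IOPS x 4096 / 2^20). A quick check against the flush job's reported numbers:

```shell
# Verify the flush job's throughput column: 148491.12 IOPS at 4 KiB per IO
# should reproduce the reported 580.04 MiB/s.
awk 'BEGIN { printf "%.2f MiB/s\n", 148491.12 * 4096 / (1024 * 1024) }'
```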
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:46.855 11:38:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:09:46.855 11:38:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:46.855 11:38:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:46.855 rmmod nvme_tcp 00:09:46.855 rmmod nvme_fabrics 00:09:46.855 rmmod nvme_keyring 00:09:46.855 11:38:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:46.855 11:38:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:09:46.855 11:38:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:09:46.855 11:38:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 2865974 ']' 00:09:46.855 11:38:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 2865974 00:09:46.855 11:38:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 2865974 ']' 00:09:46.855 11:38:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 2865974 00:09:46.855 11:38:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:09:46.855 11:38:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:46.855 11:38:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2865974 00:09:47.114 11:38:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:47.114 11:38:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:47.114 11:38:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 2865974' 00:09:47.114 killing process with pid 2865974 00:09:47.114 11:38:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 2865974 00:09:47.114 11:38:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 2865974 00:09:48.053 11:38:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:48.053 11:38:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:48.053 11:38:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:48.053 11:38:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:09:48.053 11:38:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:09:48.053 11:38:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:48.053 11:38:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:09:48.053 11:38:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:48.053 11:38:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:48.053 11:38:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:48.053 11:38:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:48.053 11:38:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:50.589 11:38:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:50.589 00:09:50.589 real 0m9.912s 00:09:50.589 user 0m28.016s 00:09:50.589 sys 0m4.307s 00:09:50.589 11:38:15 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:50.590 11:38:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:50.590 ************************************ 00:09:50.590 END TEST nvmf_bdev_io_wait 00:09:50.590 ************************************ 00:09:50.590 11:38:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:50.590 11:38:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:50.590 11:38:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:50.590 11:38:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:50.590 ************************************ 00:09:50.590 START TEST nvmf_queue_depth 00:09:50.590 ************************************ 00:09:50.590 11:38:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:50.590 * Looking for test storage... 
00:09:50.590 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:50.590 11:38:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:50.590 11:38:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:09:50.590 11:38:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:50.590 11:38:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:50.590 11:38:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:50.590 11:38:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:50.590 11:38:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:50.590 11:38:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:09:50.590 11:38:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:09:50.590 11:38:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:09:50.590 11:38:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:09:50.590 11:38:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:09:50.590 11:38:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:09:50.590 11:38:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:09:50.590 11:38:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:50.590 11:38:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:09:50.590 11:38:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:09:50.590 
11:38:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:50.590 11:38:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:50.590 11:38:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:09:50.590 11:38:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:09:50.590 11:38:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:50.590 11:38:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:09:50.590 11:38:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:09:50.590 11:38:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:09:50.590 11:38:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:09:50.590 11:38:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:50.590 11:38:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:09:50.590 11:38:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:09:50.590 11:38:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:50.590 11:38:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:50.590 11:38:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:09:50.590 11:38:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:50.590 11:38:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:50.590 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:09:50.590 --rc genhtml_branch_coverage=1 00:09:50.590 --rc genhtml_function_coverage=1 00:09:50.590 --rc genhtml_legend=1 00:09:50.590 --rc geninfo_all_blocks=1 00:09:50.590 --rc geninfo_unexecuted_blocks=1 00:09:50.590 00:09:50.590 ' 00:09:50.590 11:38:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:50.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.590 --rc genhtml_branch_coverage=1 00:09:50.590 --rc genhtml_function_coverage=1 00:09:50.590 --rc genhtml_legend=1 00:09:50.590 --rc geninfo_all_blocks=1 00:09:50.590 --rc geninfo_unexecuted_blocks=1 00:09:50.590 00:09:50.590 ' 00:09:50.590 11:38:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:50.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.590 --rc genhtml_branch_coverage=1 00:09:50.590 --rc genhtml_function_coverage=1 00:09:50.590 --rc genhtml_legend=1 00:09:50.590 --rc geninfo_all_blocks=1 00:09:50.590 --rc geninfo_unexecuted_blocks=1 00:09:50.590 00:09:50.590 ' 00:09:50.590 11:38:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:50.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.590 --rc genhtml_branch_coverage=1 00:09:50.590 --rc genhtml_function_coverage=1 00:09:50.590 --rc genhtml_legend=1 00:09:50.590 --rc geninfo_all_blocks=1 00:09:50.590 --rc geninfo_unexecuted_blocks=1 00:09:50.590 00:09:50.590 ' 00:09:50.590 11:38:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:50.590 11:38:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:09:50.590 11:38:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:50.590 11:38:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:50.590 11:38:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:50.590 11:38:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:50.590 11:38:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:50.590 11:38:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:50.590 11:38:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:50.590 11:38:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:50.590 11:38:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:50.590 11:38:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:50.590 11:38:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:50.590 11:38:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:50.590 11:38:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:50.590 11:38:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:50.590 11:38:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:50.590 11:38:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:50.590 11:38:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:50.590 11:38:16 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:09:50.590 11:38:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:50.590 11:38:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:50.590 11:38:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:50.590 11:38:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.590 11:38:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.591 11:38:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.591 11:38:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:50.591 11:38:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.591 11:38:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:09:50.591 11:38:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:50.591 11:38:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:50.591 11:38:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:50.591 11:38:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:50.591 11:38:16 
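Each `source` of /etc/opt/spdk-pkgdep/paths/export.sh above prepends the same golangci/protoc/go directories again, which is why PATH carries many repeated triples by this point. A small sketch of an order-preserving dedup that would collapse such a PATH (the function is ours, not part of the scripts):

```shell
# Collapse duplicate entries in a colon-separated PATH-like string,
# keeping the first occurrence of each directory.
dedupe_path() {
  local in=$1 out= seen= dir
  local IFS=:
  for dir in $in; do
    case ":$seen:" in
      *":$dir:"*) ;;                      # repeat: drop it
      *) seen="$seen:$dir"
         out="${out:+$out:}$dir" ;;
    esac
  done
  printf '%s\n' "$out"
}

dedupe_path "/opt/go/1.21.1/bin:/usr/bin:/opt/go/1.21.1/bin:/usr/bin"
```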
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:50.591 11:38:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:50.591 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:50.591 11:38:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:50.591 11:38:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:50.591 11:38:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:50.591 11:38:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:50.591 11:38:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:09:50.591 11:38:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:50.591 11:38:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:50.591 11:38:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:50.591 11:38:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:50.591 11:38:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:50.591 11:38:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:50.591 11:38:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:50.591 11:38:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:50.591 11:38:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:50.591 11:38:16 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:50.591 11:38:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:50.591 11:38:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:50.591 11:38:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:09:50.591 11:38:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:52.500 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:52.500 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:09:52.500 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:52.500 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:52.500 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:52.500 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:52.500 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:52.500 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:09:52.500 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:52.500 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:09:52.500 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:09:52.500 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:09:52.500 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:09:52.500 11:38:18 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:09:52.500 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:09:52.500 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:52.500 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:52.501 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:52.501 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:52.501 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:52.501 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:52.501 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:52.501 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:52.501 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:52.501 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:52.501 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:52.501 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:52.501 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:52.501 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:52.501 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:52.501 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:52.501 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:52.501 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:52.501 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:52.501 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:52.501 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:52.501 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:52.501 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:52.501 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:52.501 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:52.501 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:52.501 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:52.501 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:52.501 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:52.501 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:52.501 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:52.501 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:09:52.501 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:52.501 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:52.501 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:52.501 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:52.501 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:52.501 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:52.501 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:52.501 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:52.501 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:52.501 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:52.501 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:52.501 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:52.501 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:52.501 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:52.501 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:52.501 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:52.501 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:52.501 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:52.501 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:52.501 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:52.501 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:52.501 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:52.501 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:52.501 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:52.501 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:52.501 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:52.501 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:09:52.501 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:52.501 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:52.501 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:52.501 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:52.501 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:52.501 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:52.501 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:52.501 
11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:52.501 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:52.501 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:52.501 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:52.501 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:52.501 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:52.501 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:52.501 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:52.501 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:52.501 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:52.501 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:52.501 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:52.501 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:52.501 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:52.501 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:52.501 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:09:52.501 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:52.501 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:52.501 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:52.501 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:52.501 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.372 ms 00:09:52.501 00:09:52.501 --- 10.0.0.2 ping statistics --- 00:09:52.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:52.501 rtt min/avg/max/mdev = 0.372/0.372/0.372/0.000 ms 00:09:52.501 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:52.501 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:52.501 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms 00:09:52.501 00:09:52.501 --- 10.0.0.1 ping statistics --- 00:09:52.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:52.501 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:09:52.501 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:52.501 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:09:52.501 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:52.501 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:52.501 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:52.501 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:52.501 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:52.501 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:52.501 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:52.501 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:52.501 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:52.501 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:52.501 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:52.502 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=2868541 00:09:52.502 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:52.502 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 2868541 00:09:52.502 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2868541 ']' 00:09:52.502 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:52.502 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:52.502 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:52.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:52.502 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:52.502 11:38:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:52.502 [2024-11-18 11:38:18.378996] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:09:52.502 [2024-11-18 11:38:18.379140] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:52.761 [2024-11-18 11:38:18.533286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:53.021 [2024-11-18 11:38:18.671788] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:53.021 [2024-11-18 11:38:18.671887] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:53.021 [2024-11-18 11:38:18.671913] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:53.021 [2024-11-18 11:38:18.671936] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:53.021 [2024-11-18 11:38:18.671955] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:53.021 [2024-11-18 11:38:18.673618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:53.590 11:38:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:53.590 11:38:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:53.590 11:38:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:53.590 11:38:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:53.590 11:38:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:53.590 11:38:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:53.590 11:38:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:53.590 11:38:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.590 11:38:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:53.590 [2024-11-18 11:38:19.341811] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:53.590 11:38:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.590 11:38:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:09:53.590 11:38:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.590 11:38:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:53.590 Malloc0 00:09:53.590 11:38:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.590 11:38:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:53.590 11:38:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.590 11:38:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:53.590 11:38:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.590 11:38:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:53.590 11:38:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.590 11:38:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:53.590 11:38:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.590 11:38:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:53.590 11:38:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.590 11:38:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:53.590 [2024-11-18 11:38:19.464905] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:53.590 11:38:19 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.590 11:38:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2868764 00:09:53.590 11:38:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:53.590 11:38:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:53.590 11:38:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2868764 /var/tmp/bdevperf.sock 00:09:53.590 11:38:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2868764 ']' 00:09:53.590 11:38:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:53.590 11:38:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:53.590 11:38:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:53.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:53.590 11:38:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:53.590 11:38:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:53.848 [2024-11-18 11:38:19.555324] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:09:53.848 [2024-11-18 11:38:19.555463] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2868764 ] 00:09:53.848 [2024-11-18 11:38:19.691769] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:54.107 [2024-11-18 11:38:19.828279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:55.042 11:38:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:55.042 11:38:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:55.042 11:38:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:55.042 11:38:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.042 11:38:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:55.042 NVMe0n1 00:09:55.042 11:38:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.042 11:38:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:55.042 Running I/O for 10 seconds... 
00:09:57.074 5633.00 IOPS, 22.00 MiB/s [2024-11-18T10:38:24.343Z] 5807.00 IOPS, 22.68 MiB/s [2024-11-18T10:38:25.285Z] 5998.67 IOPS, 23.43 MiB/s [2024-11-18T10:38:26.225Z] 6011.75 IOPS, 23.48 MiB/s [2024-11-18T10:38:27.166Z] 6028.00 IOPS, 23.55 MiB/s [2024-11-18T10:38:28.103Z] 6068.83 IOPS, 23.71 MiB/s [2024-11-18T10:38:29.043Z] 6086.71 IOPS, 23.78 MiB/s [2024-11-18T10:38:29.983Z] 6103.62 IOPS, 23.84 MiB/s [2024-11-18T10:38:31.367Z] 6109.33 IOPS, 23.86 MiB/s [2024-11-18T10:38:31.367Z] 6107.30 IOPS, 23.86 MiB/s 00:10:05.482 Latency(us) 00:10:05.482 [2024-11-18T10:38:31.367Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:05.482 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:10:05.482 Verification LBA range: start 0x0 length 0x4000 00:10:05.482 NVMe0n1 : 10.14 6125.59 23.93 0.00 0.00 166043.28 28350.39 99420.54 00:10:05.482 [2024-11-18T10:38:31.367Z] =================================================================================================================== 00:10:05.482 [2024-11-18T10:38:31.367Z] Total : 6125.59 23.93 0.00 0.00 166043.28 28350.39 99420.54 00:10:05.482 { 00:10:05.482 "results": [ 00:10:05.482 { 00:10:05.482 "job": "NVMe0n1", 00:10:05.482 "core_mask": "0x1", 00:10:05.482 "workload": "verify", 00:10:05.482 "status": "finished", 00:10:05.482 "verify_range": { 00:10:05.482 "start": 0, 00:10:05.482 "length": 16384 00:10:05.482 }, 00:10:05.482 "queue_depth": 1024, 00:10:05.482 "io_size": 4096, 00:10:05.482 "runtime": 10.137315, 00:10:05.482 "iops": 6125.586508853676, 00:10:05.482 "mibps": 23.92807230020967, 00:10:05.482 "io_failed": 0, 00:10:05.482 "io_timeout": 0, 00:10:05.482 "avg_latency_us": 166043.27690162166, 00:10:05.482 "min_latency_us": 28350.388148148148, 00:10:05.482 "max_latency_us": 99420.53925925925 00:10:05.482 } 00:10:05.482 ], 00:10:05.482 "core_count": 1 00:10:05.482 } 00:10:05.482 11:38:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # 
killprocess 2868764 00:10:05.482 11:38:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2868764 ']' 00:10:05.482 11:38:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2868764 00:10:05.482 11:38:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:10:05.482 11:38:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:05.482 11:38:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2868764 00:10:05.482 11:38:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:05.482 11:38:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:05.482 11:38:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2868764' 00:10:05.482 killing process with pid 2868764 00:10:05.482 11:38:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2868764 00:10:05.482 Received shutdown signal, test time was about 10.000000 seconds 00:10:05.482 00:10:05.482 Latency(us) 00:10:05.482 [2024-11-18T10:38:31.367Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:05.482 [2024-11-18T10:38:31.367Z] =================================================================================================================== 00:10:05.482 [2024-11-18T10:38:31.367Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:05.482 11:38:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2868764 00:10:06.420 11:38:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:06.420 11:38:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # 
nvmftestfini 00:10:06.420 11:38:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:06.420 11:38:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:10:06.420 11:38:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:06.420 11:38:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:10:06.420 11:38:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:06.420 11:38:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:06.420 rmmod nvme_tcp 00:10:06.420 rmmod nvme_fabrics 00:10:06.420 rmmod nvme_keyring 00:10:06.420 11:38:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:06.420 11:38:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:10:06.420 11:38:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:10:06.420 11:38:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 2868541 ']' 00:10:06.420 11:38:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 2868541 00:10:06.420 11:38:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2868541 ']' 00:10:06.420 11:38:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2868541 00:10:06.420 11:38:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:10:06.420 11:38:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:06.420 11:38:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2868541 00:10:06.420 11:38:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:10:06.420 11:38:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:06.420 11:38:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2868541' 00:10:06.420 killing process with pid 2868541 00:10:06.420 11:38:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2868541 00:10:06.420 11:38:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2868541 00:10:07.801 11:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:07.801 11:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:07.801 11:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:07.801 11:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:10:07.801 11:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:10:07.801 11:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:07.801 11:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:10:07.801 11:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:07.801 11:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:07.801 11:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:07.801 11:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:07.801 11:38:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:09.709 11:38:35 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:09.709 00:10:09.709 real 0m19.622s 00:10:09.709 user 0m28.002s 00:10:09.709 sys 0m3.264s 00:10:09.709 11:38:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:09.709 11:38:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:09.709 ************************************ 00:10:09.709 END TEST nvmf_queue_depth 00:10:09.709 ************************************ 00:10:09.709 11:38:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:09.709 11:38:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:09.709 11:38:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:09.709 11:38:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:09.709 ************************************ 00:10:09.709 START TEST nvmf_target_multipath 00:10:09.709 ************************************ 00:10:09.709 11:38:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:09.968 * Looking for test storage... 
00:10:09.968 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:09.968 11:38:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:09.968 11:38:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:10:09.968 11:38:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:09.968 11:38:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:09.968 11:38:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:09.968 11:38:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:09.968 11:38:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:09.968 11:38:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:10:09.968 11:38:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:10:09.968 11:38:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:10:09.968 11:38:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:10:09.968 11:38:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:10:09.968 11:38:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:10:09.968 11:38:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:10:09.968 11:38:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:09.968 11:38:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:10:09.968 11:38:35 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:10:09.968 11:38:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:09.968 11:38:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:09.968 11:38:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:10:09.968 11:38:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:10:09.968 11:38:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:09.968 11:38:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:10:09.968 11:38:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:10:09.968 11:38:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:10:09.968 11:38:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:10:09.968 11:38:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:09.968 11:38:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:10:09.968 11:38:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:10:09.968 11:38:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:09.968 11:38:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:09.968 11:38:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:10:09.968 11:38:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:10:09.968 11:38:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:09.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:09.968 --rc genhtml_branch_coverage=1 00:10:09.968 --rc genhtml_function_coverage=1 00:10:09.968 --rc genhtml_legend=1 00:10:09.968 --rc geninfo_all_blocks=1 00:10:09.968 --rc geninfo_unexecuted_blocks=1 00:10:09.968 00:10:09.968 ' 00:10:09.968 11:38:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:09.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:09.968 --rc genhtml_branch_coverage=1 00:10:09.968 --rc genhtml_function_coverage=1 00:10:09.968 --rc genhtml_legend=1 00:10:09.968 --rc geninfo_all_blocks=1 00:10:09.968 --rc geninfo_unexecuted_blocks=1 00:10:09.968 00:10:09.969 ' 00:10:09.969 11:38:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:09.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:09.969 --rc genhtml_branch_coverage=1 00:10:09.969 --rc genhtml_function_coverage=1 00:10:09.969 --rc genhtml_legend=1 00:10:09.969 --rc geninfo_all_blocks=1 00:10:09.969 --rc geninfo_unexecuted_blocks=1 00:10:09.969 00:10:09.969 ' 00:10:09.969 11:38:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:09.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:09.969 --rc genhtml_branch_coverage=1 00:10:09.969 --rc genhtml_function_coverage=1 00:10:09.969 --rc genhtml_legend=1 00:10:09.969 --rc geninfo_all_blocks=1 00:10:09.969 --rc geninfo_unexecuted_blocks=1 00:10:09.969 00:10:09.969 ' 00:10:09.969 11:38:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:09.969 11:38:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 
-- # uname -s 00:10:09.969 11:38:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:09.969 11:38:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:09.969 11:38:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:09.969 11:38:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:09.969 11:38:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:09.969 11:38:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:09.969 11:38:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:09.969 11:38:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:09.969 11:38:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:09.969 11:38:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:09.969 11:38:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:09.969 11:38:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:09.969 11:38:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:09.969 11:38:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:09.969 11:38:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:09.969 11:38:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:09.969 11:38:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:09.969 11:38:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:10:09.969 11:38:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:09.969 11:38:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:09.969 11:38:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:09.969 11:38:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.969 11:38:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.969 11:38:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.969 11:38:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:10:09.969 11:38:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.969 11:38:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:10:09.969 11:38:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:09.969 11:38:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:09.969 11:38:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:09.969 11:38:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:09.969 11:38:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:09.969 11:38:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:09.969 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:09.969 11:38:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:09.969 11:38:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:09.969 11:38:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:09.969 11:38:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:10:09.969 11:38:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:09.969 11:38:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:10:09.969 11:38:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:09.969 11:38:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:10:09.969 11:38:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:09.969 11:38:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:09.969 11:38:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:09.969 11:38:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:09.969 11:38:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:09.969 11:38:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:09.969 11:38:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:09.969 11:38:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:09.969 11:38:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:09.969 11:38:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:09.969 11:38:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:10:09.969 11:38:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:10:11.874 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:11.874 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:10:11.874 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:11.874 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:11.874 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:11.874 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:11.874 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:11.874 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:10:11.874 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:11.874 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:10:11.874 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:10:11.874 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:10:11.874 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:10:11.874 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:10:11.874 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:10:11.874 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:11.874 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:11.874 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:11.874 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:11.874 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:11.874 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:11.874 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:11.874 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:11.874 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:11.874 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:11.874 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:11.874 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:11.874 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:11.874 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:11.874 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:11.875 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:11.875 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:11.875 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:11.875 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:11.875 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:11.875 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:11.875 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:11.875 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:11.875 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:11.875 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:11.875 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:11.875 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:11.875 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:11.875 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:11.875 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:11.875 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:11.875 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:11.875 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:11.875 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
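The trace above shows `nvmf/common.sh` bucketing PCI NICs by vendor:device ID — both `0000:0a:00.x` ports match Intel `0x8086:0x159b` and land in the `e810` array. A minimal sketch of that classification step, in Python rather than bash, with a deliberately abbreviated ID table (the real script registers many more Mellanox/Intel IDs; the names here are illustrative, not part of SPDK):

```python
# Hypothetical sketch (not SPDK code): bucket PCI NICs by (vendor, device) ID
# the way the e810/x722/mlx arrays are filled in the trace above.
# The ID table is intentionally abbreviated; the real script lists more IDs.
KNOWN_IDS = {
    ("0x8086", "0x1592"): "e810",
    ("0x8086", "0x159b"): "e810",   # the ID found twice in the log above
    ("0x8086", "0x37d2"): "x722",
    ("0x15b3", "0x1017"): "mlx",
    ("0x15b3", "0x101d"): "mlx",
}

def classify(devices):
    """devices: iterable of (bdf, vendor, device) -> {family: [bdf, ...]}."""
    buckets = {}
    for bdf, vendor, device in devices:
        family = KNOWN_IDS.get((vendor, device))
        if family:  # unknown NICs are simply skipped, as in the script
            buckets.setdefault(family, []).append(bdf)
    return buckets

# The two ports the log reports ("Found 0000:0a:00.x (0x8086 - 0x159b)"):
print(classify([("0000:0a:00.0", "0x8086", "0x159b"),
                ("0000:0a:00.1", "0x8086", "0x159b")]))
# -> {'e810': ['0000:0a:00.0', '0000:0a:00.1']}
```

Because both discovered ports classify as `e810`, the later `[[ e810 == e810 ]]` branch in the trace narrows `pci_devs` to exactly that family.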
00:10:11.875 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:11.875 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:11.875 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:11.875 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:11.875 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:11.875 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:11.875 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:11.875 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:11.875 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:11.875 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:11.875 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:11.875 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:11.875 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:11.875 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:11.875 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:11.875 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:11.875 11:38:37 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:11.875 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:11.875 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:11.875 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:11.875 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:11.875 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:11.875 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:11.875 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:11.875 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:10:11.875 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:11.875 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:11.875 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:11.875 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:11.875 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:11.875 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:11.875 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:11.875 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 
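The discovery loop traced above resolves each PCI function to its kernel net devices by globbing `/sys/bus/pci/devices/$pci/net/*`, then strips the directories down to interface names with `"${pci_net_devs[@]##*/}"` — that is how `cvl_0_0` and `cvl_0_1` end up in `net_devs`. A self-contained sketch of that step (a throwaway temp directory stands in for sysfs, since the layout is all that matters here):

```python
# Hypothetical sketch of the sysfs lookup in the trace above: glob
# <sysfs>/<bdf>/net/* and keep only the basename, mirroring bash's
# pci_net_devs=(".../net/"*) followed by "${pci_net_devs[@]##*/}".
import glob
import os
import tempfile

def net_devs_for_pci(sysfs_root, bdf):
    """Return the net-device names under one PCI function, basenames only."""
    paths = glob.glob(os.path.join(sysfs_root, bdf, "net", "*"))
    return sorted(os.path.basename(p) for p in paths)

# Fake sysfs tree matching the devices the log reports under 0000:0a:00.x.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "0000:0a:00.0", "net", "cvl_0_0"))
os.makedirs(os.path.join(root, "0000:0a:00.1", "net", "cvl_0_1"))

print(net_devs_for_pci(root, "0000:0a:00.0"))  # -> ['cvl_0_0']
print(net_devs_for_pci(root, "0000:0a:00.1"))  # -> ['cvl_0_1']
```

With two interfaces discovered, the `(( 2 > 1 ))` check above passes and the script can dedicate `cvl_0_0` as the target interface (moved into the `cvl_0_0_ns_spdk` namespace) and `cvl_0_1` as the initiator side.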
00:10:11.875 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:11.875 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:11.875 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:11.875 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:11.875 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:11.875 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:11.875 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:11.875 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:11.875 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:11.875 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:12.134 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:12.134 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:12.134 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:12.134 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:12.134 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set lo up 00:10:12.134 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:12.134 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:12.134 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:12.134 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:12.134 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.307 ms 00:10:12.134 00:10:12.134 --- 10.0.0.2 ping statistics --- 00:10:12.134 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:12.134 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:10:12.134 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:12.134 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:12.134 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:10:12.134 00:10:12.134 --- 10.0.0.1 ping statistics --- 00:10:12.134 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:12.134 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:10:12.134 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:12.134 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:10:12.134 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:12.134 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:12.134 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:12.134 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:12.134 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:12.134 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:12.134 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:12.134 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:10:12.134 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:10:12.134 only one NIC for nvmf test 00:10:12.134 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:10:12.134 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:12.134 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:12.134 11:38:37 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:12.134 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:10:12.134 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:12.134 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:12.134 rmmod nvme_tcp 00:10:12.134 rmmod nvme_fabrics 00:10:12.134 rmmod nvme_keyring 00:10:12.134 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:12.134 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:12.134 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:10:12.134 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:12.134 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:12.134 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:12.135 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:12.135 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:10:12.135 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:10:12.135 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:12.135 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:10:12.135 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:12.135 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:10:12.135 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:12.135 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:12.135 11:38:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:14.670 11:38:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:14.670 11:38:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:10:14.670 11:38:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:10:14.670 11:38:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:14.670 11:38:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:14.670 11:38:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:14.670 11:38:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:10:14.670 11:38:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:14.670 11:38:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:14.670 11:38:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:14.670 11:38:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:14.670 11:38:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:10:14.670 11:38:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:14.670 11:38:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' 
'' == iso ']' 00:10:14.670 11:38:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:14.670 11:38:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:14.670 11:38:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:10:14.670 11:38:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:10:14.670 11:38:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:14.670 11:38:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:10:14.670 11:38:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:14.670 11:38:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:14.670 11:38:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:14.670 11:38:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:14.670 11:38:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:14.670 11:38:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:14.670 00:10:14.670 real 0m4.439s 00:10:14.670 user 0m0.880s 00:10:14.670 sys 0m1.572s 00:10:14.670 11:38:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:14.670 11:38:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:14.670 ************************************ 00:10:14.670 END TEST nvmf_target_multipath 00:10:14.670 ************************************ 00:10:14.670 11:38:40 nvmf_tcp.nvmf_target_core 
-- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:14.670 11:38:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:14.670 11:38:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:14.670 11:38:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:14.670 ************************************ 00:10:14.670 START TEST nvmf_zcopy 00:10:14.670 ************************************ 00:10:14.670 11:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:14.670 * Looking for test storage... 00:10:14.670 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:14.670 11:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:14.670 11:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:10:14.670 11:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:14.670 11:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:14.670 11:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:14.670 11:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:14.670 11:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:14.670 11:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:10:14.670 11:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:10:14.670 11:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 
00:10:14.670 11:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:10:14.670 11:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:10:14.670 11:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:10:14.670 11:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:10:14.670 11:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:14.670 11:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:10:14.670 11:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:10:14.670 11:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:14.670 11:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:14.670 11:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:10:14.670 11:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:10:14.670 11:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:14.670 11:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:10:14.670 11:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:10:14.670 11:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:10:14.670 11:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:10:14.670 11:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:14.670 11:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:10:14.670 11:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:10:14.670 11:38:40 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:14.670 11:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:14.670 11:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:10:14.670 11:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:14.670 11:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:14.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.670 --rc genhtml_branch_coverage=1 00:10:14.670 --rc genhtml_function_coverage=1 00:10:14.670 --rc genhtml_legend=1 00:10:14.670 --rc geninfo_all_blocks=1 00:10:14.670 --rc geninfo_unexecuted_blocks=1 00:10:14.670 00:10:14.670 ' 00:10:14.670 11:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:14.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.670 --rc genhtml_branch_coverage=1 00:10:14.670 --rc genhtml_function_coverage=1 00:10:14.670 --rc genhtml_legend=1 00:10:14.670 --rc geninfo_all_blocks=1 00:10:14.670 --rc geninfo_unexecuted_blocks=1 00:10:14.670 00:10:14.670 ' 00:10:14.670 11:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:14.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.670 --rc genhtml_branch_coverage=1 00:10:14.670 --rc genhtml_function_coverage=1 00:10:14.670 --rc genhtml_legend=1 00:10:14.670 --rc geninfo_all_blocks=1 00:10:14.670 --rc geninfo_unexecuted_blocks=1 00:10:14.670 00:10:14.670 ' 00:10:14.670 11:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:14.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.670 --rc genhtml_branch_coverage=1 00:10:14.670 --rc 
genhtml_function_coverage=1 00:10:14.670 --rc genhtml_legend=1 00:10:14.670 --rc geninfo_all_blocks=1 00:10:14.670 --rc geninfo_unexecuted_blocks=1 00:10:14.670 00:10:14.670 ' 00:10:14.670 11:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:14.670 11:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:10:14.670 11:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:14.670 11:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:14.670 11:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:14.670 11:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:14.670 11:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:14.670 11:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:14.670 11:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:14.670 11:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:14.670 11:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:14.670 11:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:14.670 11:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:14.670 11:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:14.671 11:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:14.671 11:38:40 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:14.671 11:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:14.671 11:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:14.671 11:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:14.671 11:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:10:14.671 11:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:14.671 11:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:14.671 11:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:14.671 11:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.671 11:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.671 11:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.671 11:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:10:14.671 11:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.671 11:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:10:14.671 11:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:14.671 11:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:14.671 11:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:14.671 11:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:14.671 11:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:14.671 11:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:14.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:14.671 11:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:14.671 11:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:14.671 11:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:14.671 11:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:10:14.671 11:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:14.671 11:38:40 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:14.671 11:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:14.671 11:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:14.671 11:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:14.671 11:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:14.671 11:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:14.671 11:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:14.671 11:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:14.671 11:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:14.671 11:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:10:14.671 11:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:16.574 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:16.574 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:10:16.574 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:16.574 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:16.574 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:16.574 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:16.574 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:16.574 11:38:42 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:10:16.574 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:16.574 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:10:16.574 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:10:16.574 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:10:16.574 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:10:16.574 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:10:16.574 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:10:16.574 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:16.574 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:16.574 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:16.574 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:16.574 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:16.574 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:16.574 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:16.574 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:16.574 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:16.574 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:16.574 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:16.574 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:16.574 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:16.574 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:16.574 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:16.574 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:16.574 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:16.574 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:16.574 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:16.574 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:16.574 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:16.574 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:16.574 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:16.574 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:16.574 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:16.574 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:16.574 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:16.574 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 
-- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:16.574 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:16.574 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:16.574 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:16.574 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:16.574 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:16.574 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:16.574 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:16.574 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:16.574 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:16.574 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:16.574 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:16.574 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:16.574 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:16.574 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:16.574 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:16.574 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:16.574 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:16.574 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:16.574 11:38:42 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:16.574 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:16.574 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:16.574 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:16.574 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:16.574 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:16.574 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:16.574 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:16.574 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:16.574 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:16.574 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:16.574 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:16.574 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:10:16.574 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:16.574 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:16.574 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:16.574 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:16.574 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:16.574 11:38:42 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:16.574 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:16.574 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:16.574 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:16.574 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:16.574 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:16.574 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:16.574 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:16.574 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:16.574 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:16.574 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:16.574 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:16.574 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:16.574 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:16.574 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:16.574 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:16.574 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:16.574 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:16.574 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:16.574 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:16.574 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:16.574 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:16.574 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.351 ms 00:10:16.574 00:10:16.574 --- 10.0.0.2 ping statistics --- 00:10:16.574 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:16.574 rtt min/avg/max/mdev = 0.351/0.351/0.351/0.000 ms 00:10:16.574 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:16.574 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:16.574 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:10:16.574 00:10:16.574 --- 10.0.0.1 ping statistics --- 00:10:16.575 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:16.575 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:10:16.575 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:16.575 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:10:16.575 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:16.575 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:16.575 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:16.575 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:16.575 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:16.575 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:16.575 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:16.835 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:16.835 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:16.835 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:16.835 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:16.835 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=2874264 00:10:16.835 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x2 00:10:16.835 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 2874264 00:10:16.835 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 2874264 ']' 00:10:16.835 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:16.835 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:16.835 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:16.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:16.835 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:16.835 11:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:16.835 [2024-11-18 11:38:42.562959] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:10:16.835 [2024-11-18 11:38:42.563099] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:16.835 [2024-11-18 11:38:42.709958] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:17.096 [2024-11-18 11:38:42.844471] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:17.096 [2024-11-18 11:38:42.844574] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:17.096 [2024-11-18 11:38:42.844600] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:17.096 [2024-11-18 11:38:42.844624] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:17.096 [2024-11-18 11:38:42.844644] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:17.096 [2024-11-18 11:38:42.846281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:18.035 11:38:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:18.035 11:38:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:10:18.035 11:38:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:18.035 11:38:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:18.035 11:38:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:18.035 11:38:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:18.035 11:38:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:10:18.035 11:38:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:10:18.035 11:38:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.035 11:38:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:18.035 [2024-11-18 11:38:43.604282] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:18.035 11:38:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.035 11:38:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:18.035 11:38:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.035 11:38:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:18.035 11:38:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.035 11:38:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:18.035 11:38:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.035 11:38:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:18.035 [2024-11-18 11:38:43.620592] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:18.035 11:38:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.035 11:38:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:18.035 11:38:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.035 11:38:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:18.035 11:38:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.035 11:38:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:10:18.035 11:38:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.035 11:38:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:18.035 malloc0 00:10:18.035 11:38:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:10:18.035 11:38:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:18.035 11:38:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.035 11:38:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:18.035 11:38:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.036 11:38:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:10:18.036 11:38:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:10:18.036 11:38:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:10:18.036 11:38:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:10:18.036 11:38:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:18.036 11:38:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:18.036 { 00:10:18.036 "params": { 00:10:18.036 "name": "Nvme$subsystem", 00:10:18.036 "trtype": "$TEST_TRANSPORT", 00:10:18.036 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:18.036 "adrfam": "ipv4", 00:10:18.036 "trsvcid": "$NVMF_PORT", 00:10:18.036 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:18.036 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:18.036 "hdgst": ${hdgst:-false}, 00:10:18.036 "ddgst": ${ddgst:-false} 00:10:18.036 }, 00:10:18.036 "method": "bdev_nvme_attach_controller" 00:10:18.036 } 00:10:18.036 EOF 00:10:18.036 )") 00:10:18.036 11:38:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:10:18.036 11:38:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:10:18.036 11:38:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:10:18.036 11:38:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:18.036 "params": { 00:10:18.036 "name": "Nvme1", 00:10:18.036 "trtype": "tcp", 00:10:18.036 "traddr": "10.0.0.2", 00:10:18.036 "adrfam": "ipv4", 00:10:18.036 "trsvcid": "4420", 00:10:18.036 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:18.036 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:18.036 "hdgst": false, 00:10:18.036 "ddgst": false 00:10:18.036 }, 00:10:18.036 "method": "bdev_nvme_attach_controller" 00:10:18.036 }' 00:10:18.036 [2024-11-18 11:38:43.783423] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:10:18.036 [2024-11-18 11:38:43.783585] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2874424 ] 00:10:18.295 [2024-11-18 11:38:43.940077] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:18.295 [2024-11-18 11:38:44.077498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.862 Running I/O for 10 seconds... 
00:10:20.737 4082.00 IOPS, 31.89 MiB/s [2024-11-18T10:38:48.005Z] 4148.50 IOPS, 32.41 MiB/s [2024-11-18T10:38:48.588Z] 4186.00 IOPS, 32.70 MiB/s [2024-11-18T10:38:49.988Z] 4187.00 IOPS, 32.71 MiB/s [2024-11-18T10:38:50.925Z] 4196.20 IOPS, 32.78 MiB/s [2024-11-18T10:38:51.863Z] 4198.67 IOPS, 32.80 MiB/s [2024-11-18T10:38:52.908Z] 4203.00 IOPS, 32.84 MiB/s [2024-11-18T10:38:53.846Z] 4200.38 IOPS, 32.82 MiB/s [2024-11-18T10:38:54.784Z] 4201.00 IOPS, 32.82 MiB/s [2024-11-18T10:38:54.784Z] 4201.80 IOPS, 32.83 MiB/s 00:10:28.899 Latency(us) 00:10:28.899 [2024-11-18T10:38:54.784Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:28.899 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:28.899 Verification LBA range: start 0x0 length 0x1000 00:10:28.900 Nvme1n1 : 10.02 4205.37 32.85 0.00 0.00 30357.18 5121.52 40777.96 00:10:28.900 [2024-11-18T10:38:54.785Z] =================================================================================================================== 00:10:28.900 [2024-11-18T10:38:54.785Z] Total : 4205.37 32.85 0.00 0.00 30357.18 5121.52 40777.96 00:10:29.839 11:38:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2875754 00:10:29.839 11:38:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:10:29.839 11:38:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:29.839 11:38:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:29.839 11:38:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:29.839 11:38:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:10:29.839 11:38:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:10:29.839 11:38:55 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:29.839 11:38:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:29.839 { 00:10:29.839 "params": { 00:10:29.839 "name": "Nvme$subsystem", 00:10:29.839 "trtype": "$TEST_TRANSPORT", 00:10:29.839 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:29.839 "adrfam": "ipv4", 00:10:29.839 "trsvcid": "$NVMF_PORT", 00:10:29.839 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:29.839 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:29.839 "hdgst": ${hdgst:-false}, 00:10:29.839 "ddgst": ${ddgst:-false} 00:10:29.839 }, 00:10:29.839 "method": "bdev_nvme_attach_controller" 00:10:29.839 } 00:10:29.839 EOF 00:10:29.839 )") 00:10:29.839 11:38:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:10:29.839 [2024-11-18 11:38:55.498732] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.839 [2024-11-18 11:38:55.498812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.839 11:38:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:10:29.839 11:38:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:10:29.839 11:38:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:29.839 "params": { 00:10:29.839 "name": "Nvme1", 00:10:29.839 "trtype": "tcp", 00:10:29.839 "traddr": "10.0.0.2", 00:10:29.839 "adrfam": "ipv4", 00:10:29.839 "trsvcid": "4420", 00:10:29.839 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:29.839 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:29.839 "hdgst": false, 00:10:29.839 "ddgst": false 00:10:29.839 }, 00:10:29.839 "method": "bdev_nvme_attach_controller" 00:10:29.839 }' 00:10:29.839 [2024-11-18 11:38:55.506686] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.839 [2024-11-18 11:38:55.506720] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.839 [2024-11-18 11:38:55.514654] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.839 [2024-11-18 11:38:55.514683] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.839 [2024-11-18 11:38:55.522698] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.840 [2024-11-18 11:38:55.522728] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.840 [2024-11-18 11:38:55.530745] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.840 [2024-11-18 11:38:55.530794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.840 [2024-11-18 11:38:55.538746] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.840 [2024-11-18 11:38:55.538808] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.840 [2024-11-18 11:38:55.546800] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.840 [2024-11-18 
11:38:55.546831] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.840 [2024-11-18 11:38:55.554819] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.840 [2024-11-18 11:38:55.554864] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.840 [2024-11-18 11:38:55.562807] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.840 [2024-11-18 11:38:55.562851] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.840 [2024-11-18 11:38:55.570862] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.840 [2024-11-18 11:38:55.570891] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.840 [2024-11-18 11:38:55.578849] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.840 [2024-11-18 11:38:55.578877] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.840 [2024-11-18 11:38:55.579120] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:10:29.840 [2024-11-18 11:38:55.579227] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2875754 ] 00:10:29.840 [2024-11-18 11:38:55.586888] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.840 [2024-11-18 11:38:55.586915] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.840 [2024-11-18 11:38:55.594909] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.840 [2024-11-18 11:38:55.594937] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.840 [2024-11-18 11:38:55.602926] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.840 [2024-11-18 11:38:55.602953] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.840 [2024-11-18 11:38:55.610960] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.840 [2024-11-18 11:38:55.610991] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.840 [2024-11-18 11:38:55.618960] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.840 [2024-11-18 11:38:55.618988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.840 [2024-11-18 11:38:55.626988] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.840 [2024-11-18 11:38:55.627015] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.840 [2024-11-18 11:38:55.635011] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.840 [2024-11-18 11:38:55.635038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:10:29.840 [2024-11-18 11:38:55.643016] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.840 [2024-11-18 11:38:55.643043] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.840 [2024-11-18 11:38:55.651056] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.840 [2024-11-18 11:38:55.651083] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.840 [2024-11-18 11:38:55.659086] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.840 [2024-11-18 11:38:55.659113] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.840 [2024-11-18 11:38:55.667115] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.840 [2024-11-18 11:38:55.667149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.840 [2024-11-18 11:38:55.675156] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.840 [2024-11-18 11:38:55.675197] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.840 [2024-11-18 11:38:55.683178] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.840 [2024-11-18 11:38:55.683213] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.840 [2024-11-18 11:38:55.691187] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.840 [2024-11-18 11:38:55.691221] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.840 [2024-11-18 11:38:55.699226] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.840 [2024-11-18 11:38:55.699260] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.840 [2024-11-18 11:38:55.707233] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.840 [2024-11-18 11:38:55.707267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.840 [2024-11-18 11:38:55.715273] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.840 [2024-11-18 11:38:55.715307] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.840 [2024-11-18 11:38:55.723316] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.840 [2024-11-18 11:38:55.723350] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.101 [2024-11-18 11:38:55.728972] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:30.101 [2024-11-18 11:38:55.731301] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.101 [2024-11-18 11:38:55.731334] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.101 [2024-11-18 11:38:55.739343] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.101 [2024-11-18 11:38:55.739377] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.101 [2024-11-18 11:38:55.747419] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.101 [2024-11-18 11:38:55.747468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.101 [2024-11-18 11:38:55.755401] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.101 [2024-11-18 11:38:55.755448] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.101 [2024-11-18 11:38:55.763419] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.101 [2024-11-18 11:38:55.763453] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:10:30.101 [2024-11-18 11:38:55.771411] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.101 [2024-11-18 11:38:55.771444] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.101 [2024-11-18 11:38:55.779468] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.101 [2024-11-18 11:38:55.779510] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.101 [2024-11-18 11:38:55.787485] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.101 [2024-11-18 11:38:55.787547] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.101 [2024-11-18 11:38:55.795487] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.101 [2024-11-18 11:38:55.795556] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.101 [2024-11-18 11:38:55.803580] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.101 [2024-11-18 11:38:55.803610] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.101 [2024-11-18 11:38:55.811590] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.101 [2024-11-18 11:38:55.811622] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.101 [2024-11-18 11:38:55.819597] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.101 [2024-11-18 11:38:55.819627] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.101 [2024-11-18 11:38:55.827617] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.101 [2024-11-18 11:38:55.827648] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.101 [2024-11-18 11:38:55.835615] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.101 [2024-11-18 11:38:55.835645] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.101 [2024-11-18 11:38:55.843654] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.101 [2024-11-18 11:38:55.843685] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.101 [2024-11-18 11:38:55.851675] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.101 [2024-11-18 11:38:55.851704] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.101 [2024-11-18 11:38:55.859666] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.101 [2024-11-18 11:38:55.859695] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.102 [2024-11-18 11:38:55.867030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:30.102 [2024-11-18 11:38:55.867724] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.102 [2024-11-18 11:38:55.867762] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.102 [2024-11-18 11:38:55.875743] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.102 [2024-11-18 11:38:55.875792] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.102 [2024-11-18 11:38:55.883817] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.102 [2024-11-18 11:38:55.883867] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.102 [2024-11-18 11:38:55.891889] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.102 [2024-11-18 11:38:55.891940] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:10:30.102 [2024-11-18 11:38:55.899817] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.102 [2024-11-18 11:38:55.899865] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.102 [2024-11-18 11:38:55.907862] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.102 [2024-11-18 11:38:55.907896] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.102 [2024-11-18 11:38:55.915916] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.102 [2024-11-18 11:38:55.915950] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.102 [2024-11-18 11:38:55.923898] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.102 [2024-11-18 11:38:55.923930] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.102 [2024-11-18 11:38:55.931935] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.102 [2024-11-18 11:38:55.931968] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.102 [2024-11-18 11:38:55.939955] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.102 [2024-11-18 11:38:55.939996] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.102 [2024-11-18 11:38:55.947953] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.102 [2024-11-18 11:38:55.947985] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.102 [2024-11-18 11:38:55.956046] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.102 [2024-11-18 11:38:55.956090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.102 [2024-11-18 11:38:55.964050] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.102 [2024-11-18 11:38:55.964102] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.102 [2024-11-18 11:38:55.972114] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.102 [2024-11-18 11:38:55.972178] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.102 [2024-11-18 11:38:55.980124] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.102 [2024-11-18 11:38:55.980179] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.362 [2024-11-18 11:38:55.988122] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.362 [2024-11-18 11:38:55.988177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.362 [2024-11-18 11:38:55.996123] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.362 [2024-11-18 11:38:55.996156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.362 [2024-11-18 11:38:56.004134] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.362 [2024-11-18 11:38:56.004167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.362 [2024-11-18 11:38:56.012165] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.362 [2024-11-18 11:38:56.012199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.362 [2024-11-18 11:38:56.020181] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.362 [2024-11-18 11:38:56.020215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.362 [2024-11-18 11:38:56.028174] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:30.362 [2024-11-18 11:38:56.028206] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.362 [2024-11-18 11:38:56.036222] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.362 [2024-11-18 11:38:56.036255] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.362 [2024-11-18 11:38:56.044247] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.362 [2024-11-18 11:38:56.044279] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.362 [2024-11-18 11:38:56.052242] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.362 [2024-11-18 11:38:56.052274] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.362 [2024-11-18 11:38:56.060292] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.362 [2024-11-18 11:38:56.060325] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.362 [2024-11-18 11:38:56.068313] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.362 [2024-11-18 11:38:56.068347] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.362 [2024-11-18 11:38:56.076321] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.362 [2024-11-18 11:38:56.076353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.362 [2024-11-18 11:38:56.084356] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.362 [2024-11-18 11:38:56.084389] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.362 [2024-11-18 11:38:56.092356] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.362 
[2024-11-18 11:38:56.092388] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.362 [2024-11-18 11:38:56.100412] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.362 [2024-11-18 11:38:56.100454] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.362 [2024-11-18 11:38:56.108442] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.362 [2024-11-18 11:38:56.108476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.362 [2024-11-18 11:38:56.116481] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.362 [2024-11-18 11:38:56.116561] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.362 [2024-11-18 11:38:56.124569] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.362 [2024-11-18 11:38:56.124627] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.362 [2024-11-18 11:38:56.132578] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.362 [2024-11-18 11:38:56.132628] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.362 [2024-11-18 11:38:56.140509] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.362 [2024-11-18 11:38:56.140556] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.362 [2024-11-18 11:38:56.148562] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.362 [2024-11-18 11:38:56.148590] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.362 [2024-11-18 11:38:56.156562] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.362 [2024-11-18 11:38:56.156590] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.362 [2024-11-18 11:38:56.164602] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.362 [2024-11-18 11:38:56.164630] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.362 [2024-11-18 11:38:56.172619] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.362 [2024-11-18 11:38:56.172647] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.362 [2024-11-18 11:38:56.180666] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.362 [2024-11-18 11:38:56.180694] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.362 [2024-11-18 11:38:56.188652] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.362 [2024-11-18 11:38:56.188680] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.362 [2024-11-18 11:38:56.196668] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.362 [2024-11-18 11:38:56.196697] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.362 [2024-11-18 11:38:56.204695] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.362 [2024-11-18 11:38:56.204723] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.362 [2024-11-18 11:38:56.212711] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.362 [2024-11-18 11:38:56.212739] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.362 [2024-11-18 11:38:56.220713] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.362 [2024-11-18 11:38:56.220742] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:30.362 [2024-11-18 11:38:56.228777] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.362 [2024-11-18 11:38:56.228812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.363 [2024-11-18 11:38:56.236798] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.363 [2024-11-18 11:38:56.236843] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.363 [2024-11-18 11:38:56.244848] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.363 [2024-11-18 11:38:56.244883] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.622 [2024-11-18 11:38:56.252875] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.622 [2024-11-18 11:38:56.252913] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.622 [2024-11-18 11:38:56.260892] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.622 [2024-11-18 11:38:56.260929] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.622 [2024-11-18 11:38:56.268932] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.622 [2024-11-18 11:38:56.268969] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.622 [2024-11-18 11:38:56.276949] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.622 [2024-11-18 11:38:56.276993] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.622 [2024-11-18 11:38:56.284946] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.622 [2024-11-18 11:38:56.284979] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.622 [2024-11-18 11:38:56.293004] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.622 [2024-11-18 11:38:56.293038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.622 [2024-11-18 11:38:56.301006] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.622 [2024-11-18 11:38:56.301040] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.622 [2024-11-18 11:38:56.309019] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.622 [2024-11-18 11:38:56.309052] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.622 [2024-11-18 11:38:56.317067] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.622 [2024-11-18 11:38:56.317103] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.622 [2024-11-18 11:38:56.325090] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.622 [2024-11-18 11:38:56.325126] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.622 [2024-11-18 11:38:56.333094] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.622 [2024-11-18 11:38:56.333131] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.622 [2024-11-18 11:38:56.341144] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.622 [2024-11-18 11:38:56.341179] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.622 [2024-11-18 11:38:56.349470] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.622 [2024-11-18 11:38:56.349520] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.623 [2024-11-18 11:38:56.357187] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:30.623 [2024-11-18 11:38:56.357223] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.623 Running I/O for 5 seconds...
[... the error pair above — subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use, followed by nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace — repeats continuously, roughly every 15 ms, from 11:38:56.357 through 11:38:58.918 while the 5-second I/O run is in progress; repeated occurrences trimmed. Throughput samples reported during the run: ...]
8396.00 IOPS, 65.59 MiB/s [2024-11-18T10:38:57.549Z]
8375.00 IOPS, 65.43 MiB/s [2024-11-18T10:38:58.589Z]
add namespace 00:10:33.225 [2024-11-18 11:38:58.933801] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.225 [2024-11-18 11:38:58.933842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.225 [2024-11-18 11:38:58.949890] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.225 [2024-11-18 11:38:58.949929] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.225 [2024-11-18 11:38:58.965799] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.225 [2024-11-18 11:38:58.965839] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.225 [2024-11-18 11:38:58.981141] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.225 [2024-11-18 11:38:58.981180] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.225 [2024-11-18 11:38:58.997706] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.225 [2024-11-18 11:38:58.997758] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.226 [2024-11-18 11:38:59.013035] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.226 [2024-11-18 11:38:59.013075] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.226 [2024-11-18 11:38:59.028563] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.226 [2024-11-18 11:38:59.028607] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.226 [2024-11-18 11:38:59.043413] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.226 [2024-11-18 11:38:59.043453] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.226 [2024-11-18 11:38:59.058431] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.226 [2024-11-18 11:38:59.058470] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.226 [2024-11-18 11:38:59.073733] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.226 [2024-11-18 11:38:59.073783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.226 [2024-11-18 11:38:59.088816] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.226 [2024-11-18 11:38:59.088849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.226 [2024-11-18 11:38:59.103676] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.226 [2024-11-18 11:38:59.103727] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.491 [2024-11-18 11:38:59.118603] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.491 [2024-11-18 11:38:59.118641] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.491 [2024-11-18 11:38:59.133959] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.491 [2024-11-18 11:38:59.133998] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.491 [2024-11-18 11:38:59.149365] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.491 [2024-11-18 11:38:59.149404] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.491 [2024-11-18 11:38:59.164280] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.491 [2024-11-18 11:38:59.164320] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.491 [2024-11-18 11:38:59.179581] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:33.491 [2024-11-18 11:38:59.179618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.491 [2024-11-18 11:38:59.195078] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.491 [2024-11-18 11:38:59.195118] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.491 [2024-11-18 11:38:59.211082] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.491 [2024-11-18 11:38:59.211123] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.491 [2024-11-18 11:38:59.226228] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.491 [2024-11-18 11:38:59.226264] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.491 [2024-11-18 11:38:59.242842] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.491 [2024-11-18 11:38:59.242883] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.491 [2024-11-18 11:38:59.258799] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.491 [2024-11-18 11:38:59.258839] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.491 [2024-11-18 11:38:59.271259] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.491 [2024-11-18 11:38:59.271299] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.491 [2024-11-18 11:38:59.285942] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.491 [2024-11-18 11:38:59.285982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.491 [2024-11-18 11:38:59.301941] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.491 
[2024-11-18 11:38:59.301983] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.491 [2024-11-18 11:38:59.317253] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.491 [2024-11-18 11:38:59.317303] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.491 [2024-11-18 11:38:59.333083] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.491 [2024-11-18 11:38:59.333123] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.491 [2024-11-18 11:38:59.348807] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.491 [2024-11-18 11:38:59.348846] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.491 [2024-11-18 11:38:59.364172] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.491 [2024-11-18 11:38:59.364211] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.754 8344.33 IOPS, 65.19 MiB/s [2024-11-18T10:38:59.639Z] [2024-11-18 11:38:59.377257] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.754 [2024-11-18 11:38:59.377297] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.754 [2024-11-18 11:38:59.391749] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.754 [2024-11-18 11:38:59.391803] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.754 [2024-11-18 11:38:59.407172] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.754 [2024-11-18 11:38:59.407211] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.754 [2024-11-18 11:38:59.422472] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.754 
[2024-11-18 11:38:59.422536] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.754 [2024-11-18 11:38:59.437972] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.754 [2024-11-18 11:38:59.438011] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.754 [2024-11-18 11:38:59.453927] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.754 [2024-11-18 11:38:59.453966] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.754 [2024-11-18 11:38:59.468814] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.754 [2024-11-18 11:38:59.468875] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.754 [2024-11-18 11:38:59.483557] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.754 [2024-11-18 11:38:59.483593] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.754 [2024-11-18 11:38:59.498486] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.754 [2024-11-18 11:38:59.498550] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.754 [2024-11-18 11:38:59.513457] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.754 [2024-11-18 11:38:59.513506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.754 [2024-11-18 11:38:59.528906] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.754 [2024-11-18 11:38:59.528946] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.754 [2024-11-18 11:38:59.544025] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.754 [2024-11-18 11:38:59.544078] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.754 [2024-11-18 11:38:59.559518] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.754 [2024-11-18 11:38:59.559574] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.754 [2024-11-18 11:38:59.574407] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.754 [2024-11-18 11:38:59.574446] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.754 [2024-11-18 11:38:59.589434] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.754 [2024-11-18 11:38:59.589473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.754 [2024-11-18 11:38:59.604371] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.754 [2024-11-18 11:38:59.604410] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.754 [2024-11-18 11:38:59.619889] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.754 [2024-11-18 11:38:59.619928] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.754 [2024-11-18 11:38:59.632551] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.754 [2024-11-18 11:38:59.632604] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.014 [2024-11-18 11:38:59.646027] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.014 [2024-11-18 11:38:59.646070] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.014 [2024-11-18 11:38:59.660900] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.014 [2024-11-18 11:38:59.660940] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:34.014 [2024-11-18 11:38:59.675802] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.014 [2024-11-18 11:38:59.675842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.014 [2024-11-18 11:38:59.691236] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.014 [2024-11-18 11:38:59.691275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.014 [2024-11-18 11:38:59.707138] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.014 [2024-11-18 11:38:59.707178] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.014 [2024-11-18 11:38:59.722326] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.014 [2024-11-18 11:38:59.722365] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.014 [2024-11-18 11:38:59.735051] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.014 [2024-11-18 11:38:59.735090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.014 [2024-11-18 11:38:59.749895] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.014 [2024-11-18 11:38:59.749935] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.014 [2024-11-18 11:38:59.764966] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.014 [2024-11-18 11:38:59.765006] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.015 [2024-11-18 11:38:59.779997] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.015 [2024-11-18 11:38:59.780037] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.015 [2024-11-18 11:38:59.795162] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.015 [2024-11-18 11:38:59.795201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.015 [2024-11-18 11:38:59.809530] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.015 [2024-11-18 11:38:59.809583] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.015 [2024-11-18 11:38:59.824305] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.015 [2024-11-18 11:38:59.824345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.015 [2024-11-18 11:38:59.839097] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.015 [2024-11-18 11:38:59.839138] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.015 [2024-11-18 11:38:59.854047] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.015 [2024-11-18 11:38:59.854087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.015 [2024-11-18 11:38:59.869414] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.015 [2024-11-18 11:38:59.869454] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.015 [2024-11-18 11:38:59.884788] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.015 [2024-11-18 11:38:59.884842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.015 [2024-11-18 11:38:59.900052] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.015 [2024-11-18 11:38:59.900092] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.273 [2024-11-18 11:38:59.915362] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:34.274 [2024-11-18 11:38:59.915403] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.274 [2024-11-18 11:38:59.930682] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.274 [2024-11-18 11:38:59.930719] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.274 [2024-11-18 11:38:59.946448] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.274 [2024-11-18 11:38:59.946488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.274 [2024-11-18 11:38:59.961470] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.274 [2024-11-18 11:38:59.961536] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.274 [2024-11-18 11:38:59.976523] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.274 [2024-11-18 11:38:59.976576] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.274 [2024-11-18 11:38:59.991477] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.274 [2024-11-18 11:38:59.991526] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.274 [2024-11-18 11:39:00.006610] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.274 [2024-11-18 11:39:00.006649] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.274 [2024-11-18 11:39:00.022673] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.274 [2024-11-18 11:39:00.022723] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.274 [2024-11-18 11:39:00.038684] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.274 
[2024-11-18 11:39:00.038726] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.274 [2024-11-18 11:39:00.055140] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.274 [2024-11-18 11:39:00.055187] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.274 [2024-11-18 11:39:00.070855] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.274 [2024-11-18 11:39:00.070896] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.274 [2024-11-18 11:39:00.086653] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.274 [2024-11-18 11:39:00.086690] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.274 [2024-11-18 11:39:00.102197] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.274 [2024-11-18 11:39:00.102237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.274 [2024-11-18 11:39:00.115475] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.274 [2024-11-18 11:39:00.115552] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.274 [2024-11-18 11:39:00.131565] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.274 [2024-11-18 11:39:00.131611] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.274 [2024-11-18 11:39:00.146953] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.274 [2024-11-18 11:39:00.146993] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.533 [2024-11-18 11:39:00.162266] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.533 [2024-11-18 11:39:00.162308] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.533 [2024-11-18 11:39:00.178079] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.533 [2024-11-18 11:39:00.178120] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.533 [2024-11-18 11:39:00.192095] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.533 [2024-11-18 11:39:00.192141] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.533 [2024-11-18 11:39:00.208159] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.533 [2024-11-18 11:39:00.208199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.533 [2024-11-18 11:39:00.223980] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.533 [2024-11-18 11:39:00.224020] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.533 [2024-11-18 11:39:00.239010] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.533 [2024-11-18 11:39:00.239051] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.533 [2024-11-18 11:39:00.255275] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.533 [2024-11-18 11:39:00.255315] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.533 [2024-11-18 11:39:00.271482] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.533 [2024-11-18 11:39:00.271552] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.533 [2024-11-18 11:39:00.286830] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.533 [2024-11-18 11:39:00.286872] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:34.533 [2024-11-18 11:39:00.302203] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.533 [2024-11-18 11:39:00.302243] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.533 [2024-11-18 11:39:00.317929] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.533 [2024-11-18 11:39:00.317970] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.533 [2024-11-18 11:39:00.333220] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.533 [2024-11-18 11:39:00.333261] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.533 [2024-11-18 11:39:00.348748] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.533 [2024-11-18 11:39:00.348801] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.534 [2024-11-18 11:39:00.363630] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.534 [2024-11-18 11:39:00.363669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.534 [2024-11-18 11:39:00.377306] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.534 [2024-11-18 11:39:00.377341] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.534 8329.75 IOPS, 65.08 MiB/s [2024-11-18T10:39:00.419Z] [2024-11-18 11:39:00.392436] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.534 [2024-11-18 11:39:00.392478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.534 [2024-11-18 11:39:00.407102] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.534 [2024-11-18 11:39:00.407137] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:34.792 [2024-11-18 11:39:00.422024] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.792 [2024-11-18 11:39:00.422064] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.792 [2024-11-18 11:39:00.437712] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.792 [2024-11-18 11:39:00.437777] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.792 [2024-11-18 11:39:00.450272] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.792 [2024-11-18 11:39:00.450324] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.792 [2024-11-18 11:39:00.463838] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.792 [2024-11-18 11:39:00.463875] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.792 [2024-11-18 11:39:00.478523] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.792 [2024-11-18 11:39:00.478560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.792 [2024-11-18 11:39:00.493935] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.792 [2024-11-18 11:39:00.493977] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.792 [2024-11-18 11:39:00.507428] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.792 [2024-11-18 11:39:00.507477] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.792 [2024-11-18 11:39:00.522625] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.792 [2024-11-18 11:39:00.522662] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.792 [2024-11-18 11:39:00.538224] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.792 [2024-11-18 11:39:00.538265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.792
[... the same "Requested NSID 1 already in use" / "Unable to add namespace" error pair repeats continuously from 11:39:00.550956 through 11:39:01.394003 while the test loops the namespace-add RPC ...]
8328.00 IOPS, 65.06 MiB/s [2024-11-18T10:39:01.456Z]
00:10:35.571 Latency(us)
[2024-11-18T10:39:01.456Z] Device Information : runtime(s)  IOPS     MiB/s  Fail/s  TO/s  Average   min      max
00:10:35.571 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:10:35.571 Nvme1n1            :  5.01       8330.73  65.08  0.00    0.00  15336.76  4830.25  24466.77
[2024-11-18T10:39:01.456Z] ===================================================================================================================
[2024-11-18T10:39:01.456Z] Total              :           8330.73  65.08  0.00    0.00  15336.76  4830.25  24466.77
[... the "Requested NSID 1 already in use" / "Unable to add namespace" error pair continues repeating from 11:39:01.401986 through 11:39:02.284606 ...]
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2875754) - No such process 00:10:36.612
11:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy --
target/zcopy.sh@49 -- # wait 2875754 00:10:36.612 11:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:36.612 11:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.612 11:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:36.612 11:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.612 11:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:36.612 11:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.612 11:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:36.612 delay0 00:10:36.612 11:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.612 11:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:36.612 11:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.612 11:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:36.612 11:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.612 11:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:10:36.612 [2024-11-18 11:39:02.432088] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:44.738 Initializing NVMe Controllers 
00:10:44.738 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:44.738 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:44.738 Initialization complete. Launching workers. 00:10:44.738 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 293, failed: 7374 00:10:44.738 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 7609, failed to submit 58 00:10:44.738 success 7450, unsuccessful 159, failed 0 00:10:44.738 11:39:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:44.738 11:39:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:10:44.738 11:39:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:44.738 11:39:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:10:44.738 11:39:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:44.738 11:39:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:10:44.738 11:39:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:44.738 11:39:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:44.738 rmmod nvme_tcp 00:10:44.738 rmmod nvme_fabrics 00:10:44.738 rmmod nvme_keyring 00:10:44.738 11:39:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:44.738 11:39:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:10:44.738 11:39:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:10:44.738 11:39:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 2874264 ']' 00:10:44.738 11:39:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 2874264 00:10:44.738 11:39:09 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 2874264 ']' 00:10:44.738 11:39:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 2874264 00:10:44.738 11:39:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:10:44.738 11:39:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:44.738 11:39:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2874264 00:10:44.738 11:39:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:44.738 11:39:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:44.738 11:39:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2874264' 00:10:44.738 killing process with pid 2874264 00:10:44.738 11:39:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 2874264 00:10:44.738 11:39:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 2874264 00:10:45.307 11:39:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:45.307 11:39:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:45.307 11:39:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:45.307 11:39:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:10:45.307 11:39:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:10:45.307 11:39:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:45.307 11:39:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:10:45.307 11:39:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # 
[[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:45.307 11:39:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:45.307 11:39:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:45.307 11:39:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:45.307 11:39:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:47.211 11:39:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:47.211 00:10:47.211 real 0m32.904s 00:10:47.211 user 0m48.726s 00:10:47.211 sys 0m8.889s 00:10:47.211 11:39:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:47.211 11:39:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:47.211 ************************************ 00:10:47.211 END TEST nvmf_zcopy 00:10:47.211 ************************************ 00:10:47.211 11:39:12 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:47.211 11:39:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:47.211 11:39:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:47.211 11:39:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:47.211 ************************************ 00:10:47.211 START TEST nvmf_nmic 00:10:47.211 ************************************ 00:10:47.211 11:39:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:47.211 * Looking for test storage... 
00:10:47.211 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:47.211 11:39:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:47.211 11:39:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:10:47.211 11:39:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:47.470 11:39:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:47.470 11:39:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:47.470 11:39:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:47.470 11:39:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:47.470 11:39:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:10:47.470 11:39:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:10:47.470 11:39:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:10:47.470 11:39:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:10:47.470 11:39:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:10:47.470 11:39:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:10:47.470 11:39:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:10:47.470 11:39:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:47.470 11:39:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:10:47.470 11:39:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:10:47.470 11:39:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:47.470 11:39:13 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:47.470 11:39:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:10:47.470 11:39:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:10:47.470 11:39:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:47.470 11:39:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:10:47.470 11:39:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:10:47.470 11:39:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:10:47.470 11:39:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:10:47.470 11:39:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:47.470 11:39:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:10:47.470 11:39:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:10:47.470 11:39:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:47.470 11:39:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:47.470 11:39:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:10:47.470 11:39:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:47.470 11:39:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:47.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.470 --rc genhtml_branch_coverage=1 00:10:47.470 --rc genhtml_function_coverage=1 00:10:47.470 --rc genhtml_legend=1 00:10:47.470 --rc geninfo_all_blocks=1 00:10:47.470 --rc geninfo_unexecuted_blocks=1 
00:10:47.470 00:10:47.470 ' 00:10:47.470 11:39:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:47.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.470 --rc genhtml_branch_coverage=1 00:10:47.470 --rc genhtml_function_coverage=1 00:10:47.470 --rc genhtml_legend=1 00:10:47.470 --rc geninfo_all_blocks=1 00:10:47.470 --rc geninfo_unexecuted_blocks=1 00:10:47.470 00:10:47.470 ' 00:10:47.470 11:39:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:47.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.470 --rc genhtml_branch_coverage=1 00:10:47.470 --rc genhtml_function_coverage=1 00:10:47.470 --rc genhtml_legend=1 00:10:47.470 --rc geninfo_all_blocks=1 00:10:47.470 --rc geninfo_unexecuted_blocks=1 00:10:47.470 00:10:47.470 ' 00:10:47.470 11:39:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:47.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.470 --rc genhtml_branch_coverage=1 00:10:47.470 --rc genhtml_function_coverage=1 00:10:47.470 --rc genhtml_legend=1 00:10:47.470 --rc geninfo_all_blocks=1 00:10:47.470 --rc geninfo_unexecuted_blocks=1 00:10:47.470 00:10:47.470 ' 00:10:47.470 11:39:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:47.470 11:39:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:47.470 11:39:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:47.470 11:39:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:47.470 11:39:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:47.470 11:39:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:47.470 11:39:13 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:47.470 11:39:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:47.470 11:39:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:47.470 11:39:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:47.470 11:39:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:47.470 11:39:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:47.470 11:39:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:47.470 11:39:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:47.470 11:39:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:47.470 11:39:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:47.470 11:39:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:47.470 11:39:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:47.470 11:39:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:47.470 11:39:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:10:47.470 11:39:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:47.470 11:39:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:47.470 11:39:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:47.471 11:39:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.471 11:39:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.471 11:39:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.471 11:39:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:47.471 11:39:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.471 11:39:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:10:47.471 11:39:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:47.471 11:39:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:47.471 11:39:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:47.471 11:39:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:47.471 11:39:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:47.471 11:39:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:47.471 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:47.471 11:39:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:47.471 11:39:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:47.471 11:39:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:47.471 11:39:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:47.471 11:39:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:47.471 11:39:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:47.471 11:39:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:47.471 11:39:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:47.471 11:39:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:47.471 11:39:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:47.471 11:39:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:47.471 11:39:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:47.471 11:39:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:47.471 11:39:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:47.471 11:39:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:47.471 11:39:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:47.471 
11:39:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:10:47.471 11:39:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:49.379 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:49.379 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:10:49.379 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:49.379 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:49.379 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:49.379 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:49.379 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:49.379 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:10:49.379 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:49.379 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:10:49.379 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:10:49.379 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:10:49.379 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:10:49.379 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:10:49.379 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:10:49.379 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:49.379 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:49.379 11:39:15 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:49.379 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:49.379 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:49.379 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:49.379 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:49.379 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:49.379 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:49.379 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:49.379 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:49.379 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:49.379 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:49.379 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:49.379 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:49.379 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:49.379 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:49.379 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:49.379 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:10:49.379 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:49.379 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:49.379 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:49.379 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:49.379 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:49.379 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:49.379 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:49.379 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:49.379 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:49.379 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:49.379 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:49.379 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:49.379 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:49.379 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:49.379 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:49.379 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:49.379 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:49.379 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:49.379 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:10:49.379 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:49.379 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:49.379 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:49.379 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:49.379 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:49.379 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:49.379 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:49.379 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:49.379 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:49.379 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:49.379 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:49.379 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:49.379 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:49.379 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:49.379 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:49.379 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:49.379 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:49.379 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:49.379 
11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:49.379 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:49.379 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:10:49.379 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:49.379 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:49.379 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:49.379 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:49.379 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:49.379 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:49.380 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:49.380 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:49.380 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:49.380 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:49.380 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:49.380 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:49.380 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:49.380 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:49.380 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 
00:10:49.380 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:49.380 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:49.380 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:49.638 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:49.638 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:49.638 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:49.638 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:49.638 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:49.638 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:49.638 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:49.638 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:49.638 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:49.638 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.323 ms 00:10:49.638 00:10:49.638 --- 10.0.0.2 ping statistics --- 00:10:49.638 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:49.638 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:10:49.638 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:49.638 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:49.638 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:10:49.638 00:10:49.638 --- 10.0.0.1 ping statistics --- 00:10:49.638 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:49.638 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:10:49.638 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:49.638 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:10:49.638 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:49.638 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:49.638 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:49.638 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:49.638 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:49.638 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:49.638 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:49.638 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:49.638 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:49.638 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:49.638 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:49.639 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=2880162 00:10:49.639 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
00:10:49.639 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 2880162 00:10:49.639 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 2880162 ']' 00:10:49.639 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:49.639 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:49.639 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:49.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:49.639 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:49.639 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:49.639 [2024-11-18 11:39:15.460746] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:10:49.639 [2024-11-18 11:39:15.460910] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:49.898 [2024-11-18 11:39:15.611898] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:49.898 [2024-11-18 11:39:15.758508] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:49.898 [2024-11-18 11:39:15.758604] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:49.898 [2024-11-18 11:39:15.758639] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:49.898 [2024-11-18 11:39:15.758663] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:49.898 [2024-11-18 11:39:15.758683] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:49.898 [2024-11-18 11:39:15.761570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:49.898 [2024-11-18 11:39:15.761628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:49.898 [2024-11-18 11:39:15.761680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:49.898 [2024-11-18 11:39:15.761687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:50.835 11:39:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:50.835 11:39:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:10:50.836 11:39:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:50.836 11:39:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:50.836 11:39:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:50.836 11:39:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:50.836 11:39:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:50.836 11:39:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.836 11:39:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:50.836 [2024-11-18 11:39:16.480004] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:50.836 
11:39:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.836 11:39:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:50.836 11:39:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.836 11:39:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:50.836 Malloc0 00:10:50.836 11:39:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.836 11:39:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:50.836 11:39:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.836 11:39:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:50.836 11:39:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.836 11:39:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:50.836 11:39:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.836 11:39:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:50.836 11:39:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.836 11:39:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:50.836 11:39:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.836 11:39:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:50.836 [2024-11-18 11:39:16.594738] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:50.836 11:39:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.836 11:39:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:50.836 test case1: single bdev can't be used in multiple subsystems 00:10:50.836 11:39:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:50.836 11:39:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.836 11:39:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:50.836 11:39:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.836 11:39:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:50.836 11:39:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.836 11:39:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:50.836 11:39:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.836 11:39:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:50.836 11:39:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:50.836 11:39:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.836 11:39:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:50.836 [2024-11-18 11:39:16.618462] bdev.c:8198:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:50.836 [2024-11-18 
11:39:16.618530] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:50.836 [2024-11-18 11:39:16.618575] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.836 request: 00:10:50.836 { 00:10:50.836 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:50.836 "namespace": { 00:10:50.836 "bdev_name": "Malloc0", 00:10:50.836 "no_auto_visible": false 00:10:50.836 }, 00:10:50.836 "method": "nvmf_subsystem_add_ns", 00:10:50.836 "req_id": 1 00:10:50.836 } 00:10:50.836 Got JSON-RPC error response 00:10:50.836 response: 00:10:50.836 { 00:10:50.836 "code": -32602, 00:10:50.836 "message": "Invalid parameters" 00:10:50.836 } 00:10:50.836 11:39:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:50.836 11:39:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:50.836 11:39:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:50.836 11:39:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:50.836 Adding namespace failed - expected result. 
00:10:50.836 11:39:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:50.836 test case2: host connect to nvmf target in multiple paths 00:10:50.836 11:39:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:50.836 11:39:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.836 11:39:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:50.836 [2024-11-18 11:39:16.626641] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:50.836 11:39:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.836 11:39:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:51.776 11:39:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:52.345 11:39:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:52.345 11:39:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:10:52.345 11:39:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:52.345 11:39:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:52.345 11:39:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 
00:10:54.251 11:39:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:54.251 11:39:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:54.251 11:39:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:54.251 11:39:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:54.251 11:39:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:54.251 11:39:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:10:54.251 11:39:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:54.251 [global] 00:10:54.251 thread=1 00:10:54.251 invalidate=1 00:10:54.251 rw=write 00:10:54.251 time_based=1 00:10:54.251 runtime=1 00:10:54.251 ioengine=libaio 00:10:54.251 direct=1 00:10:54.251 bs=4096 00:10:54.251 iodepth=1 00:10:54.251 norandommap=0 00:10:54.251 numjobs=1 00:10:54.251 00:10:54.251 verify_dump=1 00:10:54.251 verify_backlog=512 00:10:54.251 verify_state_save=0 00:10:54.251 do_verify=1 00:10:54.251 verify=crc32c-intel 00:10:54.251 [job0] 00:10:54.251 filename=/dev/nvme0n1 00:10:54.251 Could not set queue depth (nvme0n1) 00:10:54.509 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:54.509 fio-3.35 00:10:54.509 Starting 1 thread 00:10:55.884 00:10:55.884 job0: (groupid=0, jobs=1): err= 0: pid=2880810: Mon Nov 18 11:39:21 2024 00:10:55.884 read: IOPS=1711, BW=6845KiB/s (7009kB/s)(6852KiB/1001msec) 00:10:55.884 slat (nsec): min=5464, max=49259, avg=12308.92, stdev=5715.88 00:10:55.884 clat (usec): min=229, max=933, avg=293.02, stdev=41.79 00:10:55.884 lat (usec): min=239, max=943, 
avg=305.33, stdev=41.94 00:10:55.884 clat percentiles (usec): 00:10:55.884 | 1.00th=[ 239], 5.00th=[ 249], 10.00th=[ 255], 20.00th=[ 265], 00:10:55.884 | 30.00th=[ 273], 40.00th=[ 281], 50.00th=[ 289], 60.00th=[ 297], 00:10:55.884 | 70.00th=[ 302], 80.00th=[ 310], 90.00th=[ 326], 95.00th=[ 347], 00:10:55.884 | 99.00th=[ 445], 99.50th=[ 478], 99.90th=[ 775], 99.95th=[ 938], 00:10:55.884 | 99.99th=[ 938] 00:10:55.884 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:55.884 slat (usec): min=7, max=28422, avg=29.12, stdev=627.78 00:10:55.884 clat (usec): min=156, max=409, avg=196.94, stdev=23.45 00:10:55.884 lat (usec): min=163, max=28832, avg=226.06, stdev=633.07 00:10:55.884 clat percentiles (usec): 00:10:55.884 | 1.00th=[ 167], 5.00th=[ 172], 10.00th=[ 174], 20.00th=[ 178], 00:10:55.884 | 30.00th=[ 182], 40.00th=[ 186], 50.00th=[ 194], 60.00th=[ 200], 00:10:55.884 | 70.00th=[ 208], 80.00th=[ 215], 90.00th=[ 221], 95.00th=[ 231], 00:10:55.884 | 99.00th=[ 289], 99.50th=[ 318], 99.90th=[ 334], 99.95th=[ 334], 00:10:55.884 | 99.99th=[ 408] 00:10:55.884 bw ( KiB/s): min= 8192, max= 8192, per=100.00%, avg=8192.00, stdev= 0.00, samples=1 00:10:55.884 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:55.884 lat (usec) : 250=55.94%, 500=43.87%, 750=0.13%, 1000=0.05% 00:10:55.884 cpu : usr=3.50%, sys=7.30%, ctx=3765, majf=0, minf=1 00:10:55.884 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:55.884 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:55.884 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:55.884 issued rwts: total=1713,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:55.884 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:55.884 00:10:55.884 Run status group 0 (all jobs): 00:10:55.884 READ: bw=6845KiB/s (7009kB/s), 6845KiB/s-6845KiB/s (7009kB/s-7009kB/s), io=6852KiB (7016kB), run=1001-1001msec 00:10:55.884 
WRITE: bw=8184KiB/s (8380kB/s), 8184KiB/s-8184KiB/s (8380kB/s-8380kB/s), io=8192KiB (8389kB), run=1001-1001msec 00:10:55.884 00:10:55.884 Disk stats (read/write): 00:10:55.884 nvme0n1: ios=1562/1857, merge=0/0, ticks=1414/318, in_queue=1732, util=98.50% 00:10:55.884 11:39:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:55.884 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:55.884 11:39:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:55.884 11:39:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:10:55.884 11:39:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:55.884 11:39:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:55.884 11:39:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:55.884 11:39:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:55.884 11:39:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:10:55.884 11:39:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:55.884 11:39:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:55.884 11:39:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:55.884 11:39:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:10:55.884 11:39:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:55.884 11:39:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:10:55.884 11:39:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 
00:10:55.884 11:39:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:55.884 rmmod nvme_tcp 00:10:55.884 rmmod nvme_fabrics 00:10:55.884 rmmod nvme_keyring 00:10:55.884 11:39:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:56.143 11:39:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:10:56.143 11:39:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:10:56.143 11:39:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 2880162 ']' 00:10:56.143 11:39:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 2880162 00:10:56.143 11:39:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 2880162 ']' 00:10:56.143 11:39:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 2880162 00:10:56.143 11:39:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:10:56.143 11:39:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:56.143 11:39:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2880162 00:10:56.143 11:39:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:56.143 11:39:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:56.143 11:39:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2880162' 00:10:56.143 killing process with pid 2880162 00:10:56.143 11:39:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 2880162 00:10:56.143 11:39:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 2880162 00:10:57.523 11:39:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:57.523 11:39:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:57.523 11:39:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:57.523 11:39:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:10:57.523 11:39:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:10:57.523 11:39:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:57.523 11:39:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:10:57.523 11:39:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:57.523 11:39:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:57.523 11:39:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:57.523 11:39:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:57.523 11:39:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:59.496 11:39:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:59.496 00:10:59.496 real 0m12.101s 00:10:59.496 user 0m29.081s 00:10:59.496 sys 0m2.756s 00:10:59.496 11:39:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:59.496 11:39:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:59.496 ************************************ 00:10:59.496 END TEST nvmf_nmic 00:10:59.496 ************************************ 00:10:59.496 11:39:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 
00:10:59.496 11:39:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:59.496 11:39:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:59.496 11:39:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:59.496 ************************************ 00:10:59.496 START TEST nvmf_fio_target 00:10:59.496 ************************************ 00:10:59.496 11:39:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:59.496 * Looking for test storage... 00:10:59.496 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:59.496 11:39:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:59.496 11:39:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:10:59.496 11:39:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:59.496 11:39:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:59.496 11:39:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:59.496 11:39:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:59.496 11:39:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:59.496 11:39:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:59.496 11:39:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:59.496 11:39:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:59.496 11:39:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read 
-ra ver2 00:10:59.496 11:39:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:59.496 11:39:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:59.496 11:39:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:59.496 11:39:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:59.496 11:39:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:10:59.496 11:39:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:10:59.496 11:39:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:59.496 11:39:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:59.496 11:39:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:10:59.496 11:39:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:10:59.496 11:39:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:59.496 11:39:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:10:59.496 11:39:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:59.496 11:39:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:10:59.496 11:39:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:10:59.496 11:39:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:59.496 11:39:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:10:59.496 11:39:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:59.496 11:39:25 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:59.496 11:39:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:59.496 11:39:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:10:59.496 11:39:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:59.496 11:39:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:59.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.496 --rc genhtml_branch_coverage=1 00:10:59.496 --rc genhtml_function_coverage=1 00:10:59.496 --rc genhtml_legend=1 00:10:59.496 --rc geninfo_all_blocks=1 00:10:59.496 --rc geninfo_unexecuted_blocks=1 00:10:59.496 00:10:59.496 ' 00:10:59.496 11:39:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:59.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.496 --rc genhtml_branch_coverage=1 00:10:59.496 --rc genhtml_function_coverage=1 00:10:59.496 --rc genhtml_legend=1 00:10:59.496 --rc geninfo_all_blocks=1 00:10:59.496 --rc geninfo_unexecuted_blocks=1 00:10:59.496 00:10:59.496 ' 00:10:59.496 11:39:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:59.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.496 --rc genhtml_branch_coverage=1 00:10:59.496 --rc genhtml_function_coverage=1 00:10:59.496 --rc genhtml_legend=1 00:10:59.496 --rc geninfo_all_blocks=1 00:10:59.496 --rc geninfo_unexecuted_blocks=1 00:10:59.496 00:10:59.496 ' 00:10:59.496 11:39:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:59.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.496 --rc 
genhtml_branch_coverage=1 00:10:59.496 --rc genhtml_function_coverage=1 00:10:59.496 --rc genhtml_legend=1 00:10:59.496 --rc geninfo_all_blocks=1 00:10:59.496 --rc geninfo_unexecuted_blocks=1 00:10:59.496 00:10:59.496 ' 00:10:59.496 11:39:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:59.496 11:39:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:10:59.496 11:39:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:59.496 11:39:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:59.496 11:39:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:59.496 11:39:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:59.496 11:39:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:59.496 11:39:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:59.496 11:39:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:59.496 11:39:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:59.496 11:39:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:59.496 11:39:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:59.496 11:39:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:59.496 11:39:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:59.496 11:39:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:59.496 11:39:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:59.496 11:39:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:59.496 11:39:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:59.496 11:39:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:59.496 11:39:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:59.496 11:39:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:59.496 11:39:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:59.496 11:39:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:59.496 11:39:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.496 11:39:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.496 11:39:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.496 11:39:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:59.497 11:39:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.497 11:39:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:10:59.497 11:39:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:59.497 11:39:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:59.497 11:39:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:59.497 11:39:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:59.497 11:39:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:59.497 11:39:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:59.497 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:59.497 11:39:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:59.497 11:39:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:59.497 11:39:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:59.497 11:39:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:59.497 11:39:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:59.497 11:39:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:59.497 11:39:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:59.497 11:39:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:59.497 11:39:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:59.497 11:39:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:59.497 11:39:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:59.497 11:39:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:59.497 11:39:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:59.497 11:39:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:59.497 11:39:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:59.497 11:39:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:59.497 11:39:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:59.497 11:39:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:10:59.497 11:39:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.404 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:01.404 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:11:01.404 11:39:27 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:01.404 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:01.404 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:01.404 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:01.404 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:01.404 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:11:01.404 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:01.404 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:11:01.404 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:11:01.404 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:11:01.404 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:11:01.404 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:11:01.404 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:11:01.404 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:01.404 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:01.404 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:01.404 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:01.404 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:01.404 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:01.404 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:01.404 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:01.404 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:01.404 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:01.404 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:01.404 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:01.404 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:01.404 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:01.404 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:01.404 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:01.404 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:01.404 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:01.404 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:01.404 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:01.404 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:01.404 11:39:27 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:01.404 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:01.404 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:01.404 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:01.404 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:01.404 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:01.404 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:01.404 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:01.404 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:01.404 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:01.404 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:01.404 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:01.404 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:01.404 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:01.404 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:01.404 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:01.404 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:01.404 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:01.404 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:01.404 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:01.404 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:01.404 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:01.404 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:01.404 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:01.404 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:01.404 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:01.404 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:01.404 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:01.404 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:01.404 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:01.404 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:01.404 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:01.404 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:01.404 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:01.404 Found net devices under 0000:0a:00.1: cvl_0_1 
00:11:01.404 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:01.404 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:01.404 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:11:01.404 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:01.404 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:01.404 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:01.404 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:01.404 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:01.404 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:01.404 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:01.404 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:01.404 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:01.404 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:01.404 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:01.404 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:01.404 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:01.404 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:11:01.404 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:01.664 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:01.664 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:01.664 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:01.664 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:01.664 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:01.664 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:01.664 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:01.664 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:01.664 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:01.664 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:01.664 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:01.664 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:01.664 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.232 ms 00:11:01.664 00:11:01.664 --- 10.0.0.2 ping statistics --- 00:11:01.664 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:01.664 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:11:01.665 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:01.665 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:01.665 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:11:01.665 00:11:01.665 --- 10.0.0.1 ping statistics --- 00:11:01.665 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:01.665 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:11:01.665 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:01.665 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:11:01.665 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:01.665 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:01.665 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:01.665 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:01.665 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:01.665 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:01.665 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:01.665 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:11:01.665 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 
00:11:01.665 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:01.665 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.665 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=2883033 00:11:01.665 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 2883033 00:11:01.665 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:01.665 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 2883033 ']' 00:11:01.665 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:01.665 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:01.665 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:01.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:01.665 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:01.665 11:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.665 [2024-11-18 11:39:27.539115] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:11:01.665 [2024-11-18 11:39:27.539255] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:01.923 [2024-11-18 11:39:27.686298] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:02.182 [2024-11-18 11:39:27.827260] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:02.182 [2024-11-18 11:39:27.827347] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:02.182 [2024-11-18 11:39:27.827374] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:02.182 [2024-11-18 11:39:27.827398] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:02.182 [2024-11-18 11:39:27.827420] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:02.182 [2024-11-18 11:39:27.830339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:02.182 [2024-11-18 11:39:27.830397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:02.182 [2024-11-18 11:39:27.830449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:02.182 [2024-11-18 11:39:27.830456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:02.748 11:39:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:02.748 11:39:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:11:02.749 11:39:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:02.749 11:39:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:02.749 11:39:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:02.749 11:39:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:02.749 11:39:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:03.007 [2024-11-18 11:39:28.767350] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:03.007 11:39:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:03.573 11:39:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:11:03.573 11:39:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:03.830 11:39:29 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:11:03.830 11:39:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:04.088 11:39:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:11:04.088 11:39:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:04.346 11:39:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:11:04.346 11:39:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:11:04.914 11:39:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:05.174 11:39:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:11:05.174 11:39:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:05.432 11:39:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:11:05.432 11:39:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:05.691 11:39:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:11:05.691 11:39:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 
'Malloc4 Malloc5 Malloc6' 00:11:06.259 11:39:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:06.259 11:39:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:06.259 11:39:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:06.517 11:39:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:06.517 11:39:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:06.776 11:39:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:07.036 [2024-11-18 11:39:32.906903] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:07.297 11:39:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:11:07.556 11:39:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:11:07.814 11:39:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
00:11:08.381 11:39:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:11:08.381 11:39:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:11:08.381 11:39:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:08.381 11:39:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:11:08.382 11:39:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:11:08.382 11:39:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:11:10.286 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:10.286 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:10.286 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:10.286 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:11:10.286 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:10.286 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:11:10.286 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:10.286 [global] 00:11:10.286 thread=1 00:11:10.286 invalidate=1 00:11:10.286 rw=write 00:11:10.286 time_based=1 00:11:10.286 runtime=1 00:11:10.286 ioengine=libaio 00:11:10.286 direct=1 00:11:10.286 bs=4096 00:11:10.286 iodepth=1 00:11:10.286 norandommap=0 00:11:10.286 numjobs=1 00:11:10.286 00:11:10.286 
verify_dump=1 00:11:10.286 verify_backlog=512 00:11:10.286 verify_state_save=0 00:11:10.286 do_verify=1 00:11:10.286 verify=crc32c-intel 00:11:10.286 [job0] 00:11:10.286 filename=/dev/nvme0n1 00:11:10.286 [job1] 00:11:10.286 filename=/dev/nvme0n2 00:11:10.286 [job2] 00:11:10.286 filename=/dev/nvme0n3 00:11:10.286 [job3] 00:11:10.286 filename=/dev/nvme0n4 00:11:10.545 Could not set queue depth (nvme0n1) 00:11:10.545 Could not set queue depth (nvme0n2) 00:11:10.545 Could not set queue depth (nvme0n3) 00:11:10.545 Could not set queue depth (nvme0n4) 00:11:10.545 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:10.545 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:10.545 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:10.545 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:10.545 fio-3.35 00:11:10.545 Starting 4 threads 00:11:11.924 00:11:11.924 job0: (groupid=0, jobs=1): err= 0: pid=2884238: Mon Nov 18 11:39:37 2024 00:11:11.924 read: IOPS=1612, BW=6450KiB/s (6604kB/s)(6456KiB/1001msec) 00:11:11.924 slat (nsec): min=5760, max=68320, avg=14027.41, stdev=5738.35 00:11:11.925 clat (usec): min=236, max=1267, avg=291.17, stdev=57.43 00:11:11.925 lat (usec): min=244, max=1283, avg=305.20, stdev=59.82 00:11:11.925 clat percentiles (usec): 00:11:11.925 | 1.00th=[ 243], 5.00th=[ 249], 10.00th=[ 253], 20.00th=[ 262], 00:11:11.925 | 30.00th=[ 273], 40.00th=[ 277], 50.00th=[ 285], 60.00th=[ 289], 00:11:11.925 | 70.00th=[ 293], 80.00th=[ 302], 90.00th=[ 318], 95.00th=[ 351], 00:11:11.925 | 99.00th=[ 529], 99.50th=[ 570], 99.90th=[ 1221], 99.95th=[ 1270], 00:11:11.925 | 99.99th=[ 1270] 00:11:11.925 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:11:11.925 slat (nsec): min=7622, max=57178, avg=18379.98, 
stdev=7537.81 00:11:11.925 clat (usec): min=170, max=1125, avg=221.54, stdev=51.05 00:11:11.925 lat (usec): min=180, max=1146, avg=239.92, stdev=52.51 00:11:11.925 clat percentiles (usec): 00:11:11.925 | 1.00th=[ 178], 5.00th=[ 182], 10.00th=[ 188], 20.00th=[ 204], 00:11:11.925 | 30.00th=[ 208], 40.00th=[ 212], 50.00th=[ 215], 60.00th=[ 219], 00:11:11.925 | 70.00th=[ 225], 80.00th=[ 235], 90.00th=[ 253], 95.00th=[ 269], 00:11:11.925 | 99.00th=[ 355], 99.50th=[ 400], 99.90th=[ 979], 99.95th=[ 1090], 00:11:11.925 | 99.99th=[ 1123] 00:11:11.925 bw ( KiB/s): min= 8192, max= 8192, per=58.86%, avg=8192.00, stdev= 0.00, samples=1 00:11:11.925 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:11.925 lat (usec) : 250=52.65%, 500=46.45%, 750=0.68%, 1000=0.11% 00:11:11.925 lat (msec) : 2=0.11% 00:11:11.925 cpu : usr=4.20%, sys=8.20%, ctx=3663, majf=0, minf=1 00:11:11.925 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:11.925 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:11.925 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:11.925 issued rwts: total=1614,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:11.925 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:11.925 job1: (groupid=0, jobs=1): err= 0: pid=2884239: Mon Nov 18 11:39:37 2024 00:11:11.925 read: IOPS=20, BW=83.0KiB/s (85.0kB/s)(84.0KiB/1012msec) 00:11:11.925 slat (nsec): min=15355, max=34160, avg=25048.52, stdev=8643.04 00:11:11.925 clat (usec): min=40860, max=42062, avg=41403.34, stdev=509.36 00:11:11.925 lat (usec): min=40894, max=42079, avg=41428.39, stdev=509.95 00:11:11.925 clat percentiles (usec): 00:11:11.925 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:11:11.925 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[42206], 00:11:11.925 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:11:11.925 | 99.00th=[42206], 
99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:11.925 | 99.99th=[42206] 00:11:11.925 write: IOPS=505, BW=2024KiB/s (2072kB/s)(2048KiB/1012msec); 0 zone resets 00:11:11.925 slat (nsec): min=5950, max=49735, avg=12705.84, stdev=6276.46 00:11:11.925 clat (usec): min=169, max=471, avg=260.12, stdev=62.17 00:11:11.925 lat (usec): min=176, max=480, avg=272.83, stdev=62.66 00:11:11.925 clat percentiles (usec): 00:11:11.925 | 1.00th=[ 176], 5.00th=[ 182], 10.00th=[ 188], 20.00th=[ 202], 00:11:11.925 | 30.00th=[ 215], 40.00th=[ 241], 50.00th=[ 262], 60.00th=[ 269], 00:11:11.925 | 70.00th=[ 277], 80.00th=[ 302], 90.00th=[ 375], 95.00th=[ 392], 00:11:11.925 | 99.00th=[ 400], 99.50th=[ 416], 99.90th=[ 474], 99.95th=[ 474], 00:11:11.925 | 99.99th=[ 474] 00:11:11.925 bw ( KiB/s): min= 4096, max= 4096, per=29.43%, avg=4096.00, stdev= 0.00, samples=1 00:11:11.925 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:11.925 lat (usec) : 250=42.78%, 500=53.28% 00:11:11.925 lat (msec) : 50=3.94% 00:11:11.925 cpu : usr=0.20%, sys=0.69%, ctx=533, majf=0, minf=2 00:11:11.925 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:11.925 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:11.925 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:11.925 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:11.925 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:11.925 job2: (groupid=0, jobs=1): err= 0: pid=2884240: Mon Nov 18 11:39:37 2024 00:11:11.925 read: IOPS=21, BW=85.4KiB/s (87.5kB/s)(88.0KiB/1030msec) 00:11:11.925 slat (nsec): min=10828, max=47287, avg=28124.50, stdev=10424.92 00:11:11.925 clat (usec): min=458, max=42038, avg=39639.48, stdev=8764.20 00:11:11.925 lat (usec): min=480, max=42074, avg=39667.60, stdev=8765.79 00:11:11.925 clat percentiles (usec): 00:11:11.925 | 1.00th=[ 457], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:11:11.925 | 
30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[41681], 00:11:11.925 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:11:11.925 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:11.925 | 99.99th=[42206] 00:11:11.925 write: IOPS=497, BW=1988KiB/s (2036kB/s)(2048KiB/1030msec); 0 zone resets 00:11:11.925 slat (nsec): min=8315, max=46356, avg=16811.21, stdev=7956.76 00:11:11.925 clat (usec): min=209, max=414, avg=285.13, stdev=36.88 00:11:11.925 lat (usec): min=221, max=428, avg=301.95, stdev=36.81 00:11:11.925 clat percentiles (usec): 00:11:11.925 | 1.00th=[ 221], 5.00th=[ 237], 10.00th=[ 249], 20.00th=[ 262], 00:11:11.925 | 30.00th=[ 269], 40.00th=[ 273], 50.00th=[ 281], 60.00th=[ 285], 00:11:11.925 | 70.00th=[ 293], 80.00th=[ 302], 90.00th=[ 322], 95.00th=[ 392], 00:11:11.925 | 99.00th=[ 404], 99.50th=[ 408], 99.90th=[ 416], 99.95th=[ 416], 00:11:11.925 | 99.99th=[ 416] 00:11:11.925 bw ( KiB/s): min= 4096, max= 4096, per=29.43%, avg=4096.00, stdev= 0.00, samples=1 00:11:11.925 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:11.925 lat (usec) : 250=10.49%, 500=85.58% 00:11:11.925 lat (msec) : 50=3.93% 00:11:11.925 cpu : usr=0.49%, sys=1.07%, ctx=536, majf=0, minf=1 00:11:11.925 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:11.925 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:11.925 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:11.925 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:11.925 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:11.925 job3: (groupid=0, jobs=1): err= 0: pid=2884241: Mon Nov 18 11:39:37 2024 00:11:11.925 read: IOPS=20, BW=83.0KiB/s (85.0kB/s)(84.0KiB/1012msec) 00:11:11.925 slat (nsec): min=10080, max=35851, avg=27616.00, stdev=9539.04 00:11:11.925 clat (usec): min=454, max=42025, avg=39745.51, stdev=9011.57 00:11:11.925 lat 
(usec): min=474, max=42043, avg=39773.12, stdev=9013.42 00:11:11.925 clat percentiles (usec): 00:11:11.925 | 1.00th=[ 453], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:11:11.925 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:11:11.925 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:11:11.925 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:11.925 | 99.99th=[42206] 00:11:11.925 write: IOPS=505, BW=2024KiB/s (2072kB/s)(2048KiB/1012msec); 0 zone resets 00:11:11.925 slat (usec): min=8, max=31463, avg=79.52, stdev=1390.25 00:11:11.925 clat (usec): min=184, max=591, avg=260.29, stdev=51.04 00:11:11.925 lat (usec): min=196, max=31725, avg=339.81, stdev=1391.43 00:11:11.925 clat percentiles (usec): 00:11:11.925 | 1.00th=[ 190], 5.00th=[ 196], 10.00th=[ 200], 20.00th=[ 206], 00:11:11.925 | 30.00th=[ 227], 40.00th=[ 255], 50.00th=[ 262], 60.00th=[ 269], 00:11:11.925 | 70.00th=[ 281], 80.00th=[ 293], 90.00th=[ 314], 95.00th=[ 343], 00:11:11.925 | 99.00th=[ 408], 99.50th=[ 469], 99.90th=[ 594], 99.95th=[ 594], 00:11:11.925 | 99.99th=[ 594] 00:11:11.925 bw ( KiB/s): min= 4096, max= 4096, per=29.43%, avg=4096.00, stdev= 0.00, samples=1 00:11:11.925 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:11.925 lat (usec) : 250=35.65%, 500=60.23%, 750=0.38% 00:11:11.925 lat (msec) : 50=3.75% 00:11:11.925 cpu : usr=0.40%, sys=1.29%, ctx=536, majf=0, minf=1 00:11:11.925 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:11.925 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:11.925 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:11.925 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:11.926 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:11.926 00:11:11.926 Run status group 0 (all jobs): 00:11:11.926 READ: bw=6517KiB/s (6673kB/s), 83.0KiB/s-6450KiB/s 
(85.0kB/s-6604kB/s), io=6712KiB (6873kB), run=1001-1030msec 00:11:11.926 WRITE: bw=13.6MiB/s (14.3MB/s), 1988KiB/s-8184KiB/s (2036kB/s-8380kB/s), io=14.0MiB (14.7MB), run=1001-1030msec 00:11:11.926 00:11:11.926 Disk stats (read/write): 00:11:11.926 nvme0n1: ios=1549/1536, merge=0/0, ticks=976/327, in_queue=1303, util=85.77% 00:11:11.926 nvme0n2: ios=67/512, merge=0/0, ticks=760/133, in_queue=893, util=90.75% 00:11:11.926 nvme0n3: ios=74/512, merge=0/0, ticks=838/138, in_queue=976, util=93.53% 00:11:11.926 nvme0n4: ios=70/512, merge=0/0, ticks=988/124, in_queue=1112, util=95.06% 00:11:11.926 11:39:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:11:11.926 [global] 00:11:11.926 thread=1 00:11:11.926 invalidate=1 00:11:11.926 rw=randwrite 00:11:11.926 time_based=1 00:11:11.926 runtime=1 00:11:11.926 ioengine=libaio 00:11:11.926 direct=1 00:11:11.926 bs=4096 00:11:11.926 iodepth=1 00:11:11.926 norandommap=0 00:11:11.926 numjobs=1 00:11:11.926 00:11:11.926 verify_dump=1 00:11:11.926 verify_backlog=512 00:11:11.926 verify_state_save=0 00:11:11.926 do_verify=1 00:11:11.926 verify=crc32c-intel 00:11:11.926 [job0] 00:11:11.926 filename=/dev/nvme0n1 00:11:11.926 [job1] 00:11:11.926 filename=/dev/nvme0n2 00:11:11.926 [job2] 00:11:11.926 filename=/dev/nvme0n3 00:11:11.926 [job3] 00:11:11.926 filename=/dev/nvme0n4 00:11:11.926 Could not set queue depth (nvme0n1) 00:11:11.926 Could not set queue depth (nvme0n2) 00:11:11.926 Could not set queue depth (nvme0n3) 00:11:11.926 Could not set queue depth (nvme0n4) 00:11:12.185 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:12.185 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:12.185 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=1 00:11:12.185 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:12.185 fio-3.35 00:11:12.185 Starting 4 threads 00:11:13.564 00:11:13.564 job0: (groupid=0, jobs=1): err= 0: pid=2884483: Mon Nov 18 11:39:39 2024 00:11:13.564 read: IOPS=21, BW=85.4KiB/s (87.4kB/s)(88.0KiB/1031msec) 00:11:13.564 slat (nsec): min=11446, max=36025, avg=27220.23, stdev=9407.53 00:11:13.564 clat (usec): min=10253, max=44010, avg=39709.74, stdev=6612.06 00:11:13.564 lat (usec): min=10268, max=44029, avg=39736.97, stdev=6614.46 00:11:13.564 clat percentiles (usec): 00:11:13.564 | 1.00th=[10290], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:11:13.564 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:13.564 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:11:13.564 | 99.00th=[43779], 99.50th=[43779], 99.90th=[43779], 99.95th=[43779], 00:11:13.564 | 99.99th=[43779] 00:11:13.564 write: IOPS=496, BW=1986KiB/s (2034kB/s)(2048KiB/1031msec); 0 zone resets 00:11:13.564 slat (nsec): min=12450, max=53859, avg=23229.11, stdev=3857.08 00:11:13.564 clat (usec): min=212, max=437, avg=276.69, stdev=44.50 00:11:13.564 lat (usec): min=234, max=486, avg=299.92, stdev=46.14 00:11:13.564 clat percentiles (usec): 00:11:13.564 | 1.00th=[ 217], 5.00th=[ 223], 10.00th=[ 229], 20.00th=[ 243], 00:11:13.564 | 30.00th=[ 251], 40.00th=[ 258], 50.00th=[ 262], 60.00th=[ 269], 00:11:13.564 | 70.00th=[ 285], 80.00th=[ 322], 90.00th=[ 347], 95.00th=[ 351], 00:11:13.564 | 99.00th=[ 420], 99.50th=[ 433], 99.90th=[ 437], 99.95th=[ 437], 00:11:13.564 | 99.99th=[ 437] 00:11:13.564 bw ( KiB/s): min= 4104, max= 4104, per=34.80%, avg=4104.00, stdev= 0.00, samples=1 00:11:13.564 iops : min= 1026, max= 1026, avg=1026.00, stdev= 0.00, samples=1 00:11:13.564 lat (usec) : 250=26.59%, 500=69.29% 00:11:13.564 lat (msec) : 20=0.19%, 50=3.93% 00:11:13.564 cpu : usr=1.07%, sys=1.36%, ctx=535, 
majf=0, minf=1 00:11:13.564 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:13.564 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:13.564 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:13.564 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:13.564 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:13.564 job1: (groupid=0, jobs=1): err= 0: pid=2884500: Mon Nov 18 11:39:39 2024 00:11:13.564 read: IOPS=322, BW=1290KiB/s (1321kB/s)(1344KiB/1042msec) 00:11:13.564 slat (nsec): min=7859, max=72367, avg=21379.68, stdev=7974.05 00:11:13.564 clat (usec): min=255, max=42312, avg=2579.84, stdev=9195.69 00:11:13.564 lat (usec): min=270, max=42332, avg=2601.22, stdev=9196.13 00:11:13.564 clat percentiles (usec): 00:11:13.564 | 1.00th=[ 265], 5.00th=[ 269], 10.00th=[ 277], 20.00th=[ 285], 00:11:13.564 | 30.00th=[ 289], 40.00th=[ 306], 50.00th=[ 396], 60.00th=[ 424], 00:11:13.564 | 70.00th=[ 469], 80.00th=[ 529], 90.00th=[ 619], 95.00th=[40633], 00:11:13.564 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:11:13.564 | 99.99th=[42206] 00:11:13.564 write: IOPS=491, BW=1965KiB/s (2013kB/s)(2048KiB/1042msec); 0 zone resets 00:11:13.564 slat (nsec): min=11834, max=58264, avg=22756.88, stdev=4406.19 00:11:13.564 clat (usec): min=223, max=519, avg=292.95, stdev=24.19 00:11:13.564 lat (usec): min=244, max=566, avg=315.71, stdev=24.79 00:11:13.564 clat percentiles (usec): 00:11:13.564 | 1.00th=[ 258], 5.00th=[ 269], 10.00th=[ 273], 20.00th=[ 277], 00:11:13.564 | 30.00th=[ 281], 40.00th=[ 285], 50.00th=[ 289], 60.00th=[ 293], 00:11:13.564 | 70.00th=[ 297], 80.00th=[ 306], 90.00th=[ 314], 95.00th=[ 326], 00:11:13.564 | 99.00th=[ 392], 99.50th=[ 449], 99.90th=[ 519], 99.95th=[ 519], 00:11:13.564 | 99.99th=[ 519] 00:11:13.564 bw ( KiB/s): min= 4096, max= 4096, per=34.73%, avg=4096.00, stdev= 0.00, samples=1 00:11:13.564 iops : min= 1024, max= 
1024, avg=1024.00, stdev= 0.00, samples=1 00:11:13.564 lat (usec) : 250=0.35%, 500=89.62%, 750=7.55%, 1000=0.35% 00:11:13.564 lat (msec) : 50=2.12% 00:11:13.564 cpu : usr=0.86%, sys=2.88%, ctx=850, majf=0, minf=1 00:11:13.564 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:13.564 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:13.564 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:13.564 issued rwts: total=336,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:13.564 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:13.564 job2: (groupid=0, jobs=1): err= 0: pid=2884557: Mon Nov 18 11:39:39 2024 00:11:13.564 read: IOPS=517, BW=2071KiB/s (2121kB/s)(2144KiB/1035msec) 00:11:13.564 slat (nsec): min=7862, max=37344, avg=18232.29, stdev=5435.78 00:11:13.564 clat (usec): min=250, max=41395, avg=1336.51, stdev=6260.81 00:11:13.564 lat (usec): min=258, max=41413, avg=1354.74, stdev=6262.13 00:11:13.564 clat percentiles (usec): 00:11:13.564 | 1.00th=[ 265], 5.00th=[ 277], 10.00th=[ 293], 20.00th=[ 310], 00:11:13.564 | 30.00th=[ 330], 40.00th=[ 347], 50.00th=[ 351], 60.00th=[ 359], 00:11:13.564 | 70.00th=[ 363], 80.00th=[ 371], 90.00th=[ 383], 95.00th=[ 490], 00:11:13.564 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:13.564 | 99.99th=[41157] 00:11:13.564 write: IOPS=989, BW=3957KiB/s (4052kB/s)(4096KiB/1035msec); 0 zone resets 00:11:13.564 slat (nsec): min=9770, max=67334, avg=21226.92, stdev=8384.27 00:11:13.564 clat (usec): min=190, max=756, avg=271.53, stdev=56.21 00:11:13.564 lat (usec): min=200, max=782, avg=292.76, stdev=61.45 00:11:13.564 clat percentiles (usec): 00:11:13.564 | 1.00th=[ 198], 5.00th=[ 204], 10.00th=[ 210], 20.00th=[ 225], 00:11:13.565 | 30.00th=[ 237], 40.00th=[ 247], 50.00th=[ 258], 60.00th=[ 281], 00:11:13.565 | 70.00th=[ 297], 80.00th=[ 314], 90.00th=[ 338], 95.00th=[ 367], 00:11:13.565 | 99.00th=[ 441], 99.50th=[ 465], 
99.90th=[ 594], 99.95th=[ 758], 00:11:13.565 | 99.99th=[ 758] 00:11:13.565 bw ( KiB/s): min= 2568, max= 5624, per=34.73%, avg=4096.00, stdev=2160.92, samples=2 00:11:13.565 iops : min= 642, max= 1406, avg=1024.00, stdev=540.23, samples=2 00:11:13.565 lat (usec) : 250=28.33%, 500=70.19%, 750=0.32%, 1000=0.19% 00:11:13.565 lat (msec) : 2=0.13%, 50=0.83% 00:11:13.565 cpu : usr=2.61%, sys=3.58%, ctx=1561, majf=0, minf=1 00:11:13.565 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:13.565 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:13.565 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:13.565 issued rwts: total=536,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:13.565 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:13.565 job3: (groupid=0, jobs=1): err= 0: pid=2884577: Mon Nov 18 11:39:39 2024 00:11:13.565 read: IOPS=510, BW=2041KiB/s (2090kB/s)(2096KiB/1027msec) 00:11:13.565 slat (nsec): min=8262, max=62957, avg=22999.36, stdev=7948.37 00:11:13.565 clat (usec): min=273, max=41958, avg=1366.34, stdev=6079.84 00:11:13.565 lat (usec): min=302, max=41968, avg=1389.34, stdev=6081.02 00:11:13.565 clat percentiles (usec): 00:11:13.565 | 1.00th=[ 293], 5.00th=[ 306], 10.00th=[ 334], 20.00th=[ 371], 00:11:13.565 | 30.00th=[ 396], 40.00th=[ 416], 50.00th=[ 433], 60.00th=[ 465], 00:11:13.565 | 70.00th=[ 486], 80.00th=[ 515], 90.00th=[ 545], 95.00th=[ 562], 00:11:13.565 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:11:13.565 | 99.99th=[42206] 00:11:13.565 write: IOPS=997, BW=3988KiB/s (4084kB/s)(4096KiB/1027msec); 0 zone resets 00:11:13.565 slat (nsec): min=9504, max=62809, avg=19880.85, stdev=6918.73 00:11:13.565 clat (usec): min=186, max=559, avg=262.42, stdev=38.65 00:11:13.565 lat (usec): min=195, max=609, avg=282.31, stdev=43.34 00:11:13.565 clat percentiles (usec): 00:11:13.565 | 1.00th=[ 194], 5.00th=[ 204], 10.00th=[ 208], 20.00th=[ 
227], 00:11:13.565 | 30.00th=[ 239], 40.00th=[ 249], 50.00th=[ 273], 60.00th=[ 281], 00:11:13.565 | 70.00th=[ 285], 80.00th=[ 293], 90.00th=[ 302], 95.00th=[ 310], 00:11:13.565 | 99.00th=[ 351], 99.50th=[ 392], 99.90th=[ 490], 99.95th=[ 562], 00:11:13.565 | 99.99th=[ 562] 00:11:13.565 bw ( KiB/s): min= 1160, max= 7032, per=34.73%, avg=4096.00, stdev=4152.13, samples=2 00:11:13.565 iops : min= 290, max= 1758, avg=1024.00, stdev=1038.03, samples=2 00:11:13.565 lat (usec) : 250=27.13%, 500=64.79%, 750=7.30% 00:11:13.565 lat (msec) : 50=0.78% 00:11:13.565 cpu : usr=2.05%, sys=4.29%, ctx=1550, majf=0, minf=1 00:11:13.565 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:13.565 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:13.565 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:13.565 issued rwts: total=524,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:13.565 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:13.565 00:11:13.565 Run status group 0 (all jobs): 00:11:13.565 READ: bw=5443KiB/s (5574kB/s), 85.4KiB/s-2071KiB/s (87.4kB/s-2121kB/s), io=5672KiB (5808kB), run=1027-1042msec 00:11:13.565 WRITE: bw=11.5MiB/s (12.1MB/s), 1965KiB/s-3988KiB/s (2013kB/s-4084kB/s), io=12.0MiB (12.6MB), run=1027-1042msec 00:11:13.565 00:11:13.565 Disk stats (read/write): 00:11:13.565 nvme0n1: ios=53/512, merge=0/0, ticks=1584/140, in_queue=1724, util=99.30% 00:11:13.565 nvme0n2: ios=353/512, merge=0/0, ticks=1081/144, in_queue=1225, util=96.81% 00:11:13.565 nvme0n3: ios=564/1024, merge=0/0, ticks=775/266, in_queue=1041, util=96.63% 00:11:13.565 nvme0n4: ios=570/1024, merge=0/0, ticks=929/257, in_queue=1186, util=97.46% 00:11:13.565 11:39:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:11:13.565 [global] 00:11:13.565 thread=1 00:11:13.565 invalidate=1 
00:11:13.565 rw=write 00:11:13.565 time_based=1 00:11:13.565 runtime=1 00:11:13.565 ioengine=libaio 00:11:13.565 direct=1 00:11:13.565 bs=4096 00:11:13.565 iodepth=128 00:11:13.565 norandommap=0 00:11:13.565 numjobs=1 00:11:13.565 00:11:13.565 verify_dump=1 00:11:13.565 verify_backlog=512 00:11:13.565 verify_state_save=0 00:11:13.565 do_verify=1 00:11:13.565 verify=crc32c-intel 00:11:13.565 [job0] 00:11:13.565 filename=/dev/nvme0n1 00:11:13.565 [job1] 00:11:13.565 filename=/dev/nvme0n2 00:11:13.565 [job2] 00:11:13.565 filename=/dev/nvme0n3 00:11:13.565 [job3] 00:11:13.565 filename=/dev/nvme0n4 00:11:13.565 Could not set queue depth (nvme0n1) 00:11:13.565 Could not set queue depth (nvme0n2) 00:11:13.565 Could not set queue depth (nvme0n3) 00:11:13.565 Could not set queue depth (nvme0n4) 00:11:13.565 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:13.565 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:13.565 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:13.565 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:13.565 fio-3.35 00:11:13.565 Starting 4 threads 00:11:14.939 00:11:14.939 job0: (groupid=0, jobs=1): err= 0: pid=2884819: Mon Nov 18 11:39:40 2024 00:11:14.939 read: IOPS=2164, BW=8657KiB/s (8864kB/s)(8700KiB/1005msec) 00:11:14.939 slat (usec): min=3, max=13375, avg=213.45, stdev=1152.57 00:11:14.939 clat (usec): min=3952, max=47208, avg=25393.41, stdev=5853.25 00:11:14.939 lat (usec): min=7770, max=47225, avg=25606.86, stdev=5932.76 00:11:14.939 clat percentiles (usec): 00:11:14.939 | 1.00th=[ 7898], 5.00th=[16188], 10.00th=[19530], 20.00th=[21365], 00:11:14.939 | 30.00th=[23200], 40.00th=[24511], 50.00th=[25822], 60.00th=[26084], 00:11:14.939 | 70.00th=[27395], 80.00th=[28967], 90.00th=[32637], 
95.00th=[35914], 00:11:14.939 | 99.00th=[42730], 99.50th=[45351], 99.90th=[46924], 99.95th=[46924], 00:11:14.939 | 99.99th=[47449] 00:11:14.939 write: IOPS=2547, BW=9.95MiB/s (10.4MB/s)(10.0MiB/1005msec); 0 zone resets 00:11:14.939 slat (usec): min=4, max=20610, avg=200.69, stdev=1070.68 00:11:14.939 clat (usec): min=13549, max=51890, avg=27725.89, stdev=7544.98 00:11:14.939 lat (usec): min=13556, max=51922, avg=27926.58, stdev=7627.09 00:11:14.939 clat percentiles (usec): 00:11:14.939 | 1.00th=[13566], 5.00th=[15008], 10.00th=[15139], 20.00th=[17695], 00:11:14.939 | 30.00th=[27132], 40.00th=[27657], 50.00th=[28967], 60.00th=[30278], 00:11:14.939 | 70.00th=[31065], 80.00th=[31589], 90.00th=[36439], 95.00th=[36963], 00:11:14.939 | 99.00th=[47973], 99.50th=[50594], 99.90th=[51643], 99.95th=[51643], 00:11:14.939 | 99.99th=[51643] 00:11:14.939 bw ( KiB/s): min= 8568, max=11904, per=19.38%, avg=10236.00, stdev=2358.91, samples=2 00:11:14.939 iops : min= 2142, max= 2976, avg=2559.00, stdev=589.73, samples=2 00:11:14.939 lat (msec) : 4=0.02%, 10=0.76%, 20=17.44%, 50=81.39%, 100=0.38% 00:11:14.939 cpu : usr=3.19%, sys=4.58%, ctx=277, majf=0, minf=1 00:11:14.939 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:11:14.939 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.939 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:14.939 issued rwts: total=2175,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:14.939 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:14.939 job1: (groupid=0, jobs=1): err= 0: pid=2884820: Mon Nov 18 11:39:40 2024 00:11:14.939 read: IOPS=2029, BW=8119KiB/s (8314kB/s)(8192KiB/1009msec) 00:11:14.939 slat (usec): min=3, max=19557, avg=233.51, stdev=1414.13 00:11:14.939 clat (usec): min=14283, max=51332, avg=28857.51, stdev=4880.48 00:11:14.939 lat (usec): min=14294, max=51344, avg=29091.01, stdev=5030.29 00:11:14.939 clat percentiles (usec): 00:11:14.939 | 
1.00th=[17695], 5.00th=[20841], 10.00th=[23725], 20.00th=[25822], 00:11:14.939 | 30.00th=[26346], 40.00th=[27657], 50.00th=[27919], 60.00th=[28967], 00:11:14.939 | 70.00th=[30802], 80.00th=[32113], 90.00th=[35390], 95.00th=[37487], 00:11:14.939 | 99.00th=[42730], 99.50th=[46400], 99.90th=[48497], 99.95th=[48497], 00:11:14.939 | 99.99th=[51119] 00:11:14.939 write: IOPS=2203, BW=8813KiB/s (9024kB/s)(8892KiB/1009msec); 0 zone resets 00:11:14.939 slat (usec): min=4, max=14254, avg=225.49, stdev=977.54 00:11:14.939 clat (usec): min=6467, max=79149, avg=30769.27, stdev=8982.51 00:11:14.939 lat (usec): min=10618, max=79156, avg=30994.76, stdev=9026.61 00:11:14.939 clat percentiles (usec): 00:11:14.939 | 1.00th=[15533], 5.00th=[21103], 10.00th=[21890], 20.00th=[27132], 00:11:14.939 | 30.00th=[27395], 40.00th=[28967], 50.00th=[29492], 60.00th=[30540], 00:11:14.939 | 70.00th=[31065], 80.00th=[32113], 90.00th=[38011], 95.00th=[43779], 00:11:14.939 | 99.00th=[68682], 99.50th=[79168], 99.90th=[79168], 99.95th=[79168], 00:11:14.939 | 99.99th=[79168] 00:11:14.939 bw ( KiB/s): min= 8320, max= 8440, per=15.87%, avg=8380.00, stdev=84.85, samples=2 00:11:14.939 iops : min= 2080, max= 2110, avg=2095.00, stdev=21.21, samples=2 00:11:14.939 lat (msec) : 10=0.02%, 20=2.90%, 50=94.83%, 100=2.25% 00:11:14.940 cpu : usr=2.58%, sys=4.86%, ctx=266, majf=0, minf=1 00:11:14.940 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:11:14.940 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.940 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:14.940 issued rwts: total=2048,2223,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:14.940 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:14.940 job2: (groupid=0, jobs=1): err= 0: pid=2884821: Mon Nov 18 11:39:40 2024 00:11:14.940 read: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec) 00:11:14.940 slat (usec): min=2, max=14124, avg=81.41, stdev=712.50 
00:11:14.940 clat (usec): min=5921, max=47186, avg=15419.61, stdev=4093.00 00:11:14.940 lat (usec): min=5926, max=47191, avg=15501.02, stdev=4142.24 00:11:14.940 clat percentiles (usec): 00:11:14.940 | 1.00th=[ 6718], 5.00th=[10814], 10.00th=[11207], 20.00th=[12387], 00:11:14.940 | 30.00th=[13960], 40.00th=[14484], 50.00th=[14877], 60.00th=[15270], 00:11:14.940 | 70.00th=[15664], 80.00th=[16909], 90.00th=[21627], 95.00th=[23462], 00:11:14.940 | 99.00th=[28181], 99.50th=[28967], 99.90th=[32113], 99.95th=[46924], 00:11:14.940 | 99.99th=[47449] 00:11:14.940 write: IOPS=4107, BW=16.0MiB/s (16.8MB/s)(16.1MiB/1004msec); 0 zone resets 00:11:14.940 slat (usec): min=3, max=13704, avg=79.85, stdev=511.89 00:11:14.940 clat (usec): min=3072, max=75958, avg=15659.90, stdev=10912.78 00:11:14.940 lat (usec): min=3078, max=75963, avg=15739.74, stdev=10934.72 00:11:14.940 clat percentiles (usec): 00:11:14.940 | 1.00th=[ 3851], 5.00th=[ 5604], 10.00th=[ 6718], 20.00th=[ 8979], 00:11:14.940 | 30.00th=[10421], 40.00th=[13960], 50.00th=[15401], 60.00th=[15926], 00:11:14.940 | 70.00th=[16319], 80.00th=[16581], 90.00th=[18744], 95.00th=[36439], 00:11:14.940 | 99.00th=[67634], 99.50th=[70779], 99.90th=[76022], 99.95th=[76022], 00:11:14.940 | 99.99th=[76022] 00:11:14.940 bw ( KiB/s): min=15824, max=16944, per=31.02%, avg=16384.00, stdev=791.96, samples=2 00:11:14.940 iops : min= 3956, max= 4236, avg=4096.00, stdev=197.99, samples=2 00:11:14.940 lat (msec) : 4=0.57%, 10=14.50%, 20=73.42%, 50=9.76%, 100=1.75% 00:11:14.940 cpu : usr=4.09%, sys=5.68%, ctx=423, majf=0, minf=1 00:11:14.940 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:11:14.940 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.940 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:14.940 issued rwts: total=4096,4124,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:14.940 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:14.940 job3: 
(groupid=0, jobs=1): err= 0: pid=2884822: Mon Nov 18 11:39:40 2024 00:11:14.940 read: IOPS=4035, BW=15.8MiB/s (16.5MB/s)(16.0MiB/1015msec) 00:11:14.940 slat (usec): min=3, max=13858, avg=127.03, stdev=894.78 00:11:14.940 clat (usec): min=5262, max=28887, avg=15794.77, stdev=4091.10 00:11:14.940 lat (usec): min=5271, max=28914, avg=15921.80, stdev=4142.87 00:11:14.940 clat percentiles (usec): 00:11:14.940 | 1.00th=[ 6194], 5.00th=[10814], 10.00th=[11994], 20.00th=[13698], 00:11:14.940 | 30.00th=[14091], 40.00th=[14353], 50.00th=[14484], 60.00th=[14746], 00:11:14.940 | 70.00th=[15926], 80.00th=[19006], 90.00th=[21890], 95.00th=[25035], 00:11:14.940 | 99.00th=[27657], 99.50th=[28181], 99.90th=[28967], 99.95th=[28967], 00:11:14.940 | 99.99th=[28967] 00:11:14.940 write: IOPS=4427, BW=17.3MiB/s (18.1MB/s)(17.6MiB/1015msec); 0 zone resets 00:11:14.940 slat (usec): min=4, max=12037, avg=97.55, stdev=540.62 00:11:14.940 clat (usec): min=3359, max=28882, avg=14243.68, stdev=3106.44 00:11:14.940 lat (usec): min=3368, max=28896, avg=14341.23, stdev=3160.39 00:11:14.940 clat percentiles (usec): 00:11:14.940 | 1.00th=[ 4752], 5.00th=[ 7111], 10.00th=[ 9765], 20.00th=[13304], 00:11:14.940 | 30.00th=[14353], 40.00th=[14746], 50.00th=[15139], 60.00th=[15401], 00:11:14.940 | 70.00th=[15533], 80.00th=[15664], 90.00th=[15795], 95.00th=[16057], 00:11:14.940 | 99.00th=[25560], 99.50th=[27395], 99.90th=[28443], 99.95th=[28443], 00:11:14.940 | 99.99th=[28967] 00:11:14.940 bw ( KiB/s): min=17328, max=17608, per=33.08%, avg=17468.00, stdev=197.99, samples=2 00:11:14.940 iops : min= 4332, max= 4402, avg=4367.00, stdev=49.50, samples=2 00:11:14.940 lat (msec) : 4=0.23%, 10=7.08%, 20=84.07%, 50=8.61% 00:11:14.940 cpu : usr=4.64%, sys=9.37%, ctx=495, majf=0, minf=1 00:11:14.940 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:11:14.940 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.940 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.1% 00:11:14.940 issued rwts: total=4096,4494,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:14.940 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:14.940 00:11:14.940 Run status group 0 (all jobs): 00:11:14.940 READ: bw=47.8MiB/s (50.1MB/s), 8119KiB/s-15.9MiB/s (8314kB/s-16.7MB/s), io=48.5MiB (50.9MB), run=1004-1015msec 00:11:14.940 WRITE: bw=51.6MiB/s (54.1MB/s), 8813KiB/s-17.3MiB/s (9024kB/s-18.1MB/s), io=52.3MiB (54.9MB), run=1004-1015msec 00:11:14.940 00:11:14.940 Disk stats (read/write): 00:11:14.940 nvme0n1: ios=1693/2048, merge=0/0, ticks=22509/29653, in_queue=52162, util=85.97% 00:11:14.940 nvme0n2: ios=1676/2048, merge=0/0, ticks=23689/28249, in_queue=51938, util=87.01% 00:11:14.940 nvme0n3: ios=3342/3578, merge=0/0, ticks=49761/55041, in_queue=104802, util=89.05% 00:11:14.940 nvme0n4: ios=3605/3631, merge=0/0, ticks=54759/49550, in_queue=104309, util=91.39% 00:11:14.940 11:39:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:11:14.940 [global] 00:11:14.940 thread=1 00:11:14.940 invalidate=1 00:11:14.940 rw=randwrite 00:11:14.940 time_based=1 00:11:14.940 runtime=1 00:11:14.940 ioengine=libaio 00:11:14.940 direct=1 00:11:14.940 bs=4096 00:11:14.940 iodepth=128 00:11:14.940 norandommap=0 00:11:14.940 numjobs=1 00:11:14.940 00:11:14.940 verify_dump=1 00:11:14.940 verify_backlog=512 00:11:14.940 verify_state_save=0 00:11:14.940 do_verify=1 00:11:14.940 verify=crc32c-intel 00:11:14.940 [job0] 00:11:14.940 filename=/dev/nvme0n1 00:11:14.940 [job1] 00:11:14.940 filename=/dev/nvme0n2 00:11:14.940 [job2] 00:11:14.940 filename=/dev/nvme0n3 00:11:14.940 [job3] 00:11:14.940 filename=/dev/nvme0n4 00:11:14.940 Could not set queue depth (nvme0n1) 00:11:14.940 Could not set queue depth (nvme0n2) 00:11:14.940 Could not set queue depth (nvme0n3) 00:11:14.940 Could not set queue depth (nvme0n4) 
00:11:14.940 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:14.940 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:14.940 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:14.940 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:14.940 fio-3.35 00:11:14.940 Starting 4 threads 00:11:16.318 00:11:16.318 job0: (groupid=0, jobs=1): err= 0: pid=2885056: Mon Nov 18 11:39:42 2024 00:11:16.318 read: IOPS=3032, BW=11.8MiB/s (12.4MB/s)(12.0MiB/1013msec) 00:11:16.318 slat (usec): min=2, max=13355, avg=144.20, stdev=908.88 00:11:16.318 clat (usec): min=9359, max=34637, avg=18578.04, stdev=3153.88 00:11:16.318 lat (usec): min=9365, max=34652, avg=18722.24, stdev=3240.20 00:11:16.318 clat percentiles (usec): 00:11:16.318 | 1.00th=[11469], 5.00th=[13829], 10.00th=[14222], 20.00th=[15533], 00:11:16.318 | 30.00th=[16909], 40.00th=[17957], 50.00th=[18744], 60.00th=[19530], 00:11:16.318 | 70.00th=[20317], 80.00th=[20841], 90.00th=[22152], 95.00th=[23462], 00:11:16.318 | 99.00th=[26084], 99.50th=[26346], 99.90th=[30278], 99.95th=[32637], 00:11:16.318 | 99.99th=[34866] 00:11:16.318 write: IOPS=3488, BW=13.6MiB/s (14.3MB/s)(13.8MiB/1013msec); 0 zone resets 00:11:16.318 slat (usec): min=3, max=11762, avg=150.55, stdev=879.65 00:11:16.318 clat (usec): min=4866, max=63638, avg=20085.45, stdev=9259.54 00:11:16.318 lat (usec): min=4873, max=63646, avg=20236.00, stdev=9341.34 00:11:16.318 clat percentiles (usec): 00:11:16.318 | 1.00th=[ 6128], 5.00th=[12125], 10.00th=[13566], 20.00th=[13829], 00:11:16.318 | 30.00th=[14877], 40.00th=[15926], 50.00th=[16450], 60.00th=[17433], 00:11:16.318 | 70.00th=[21365], 80.00th=[25560], 90.00th=[31589], 95.00th=[42730], 00:11:16.318 | 99.00th=[51643], 99.50th=[56886], 99.90th=[63701], 
99.95th=[63701], 00:11:16.318 | 99.99th=[63701] 00:11:16.318 bw ( KiB/s): min=11608, max=15640, per=25.08%, avg=13624.00, stdev=2851.05, samples=2 00:11:16.318 iops : min= 2902, max= 3910, avg=3406.00, stdev=712.76, samples=2 00:11:16.318 lat (msec) : 10=1.41%, 20=64.96%, 50=32.83%, 100=0.80% 00:11:16.318 cpu : usr=2.67%, sys=5.24%, ctx=241, majf=0, minf=1 00:11:16.318 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:11:16.318 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:16.318 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:16.318 issued rwts: total=3072,3534,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:16.318 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:16.318 job1: (groupid=0, jobs=1): err= 0: pid=2885057: Mon Nov 18 11:39:42 2024 00:11:16.318 read: IOPS=2039, BW=8159KiB/s (8355kB/s)(8192KiB/1004msec) 00:11:16.318 slat (usec): min=3, max=15719, avg=256.35, stdev=1428.44 00:11:16.318 clat (usec): min=8445, max=60901, avg=31168.87, stdev=12914.13 00:11:16.318 lat (usec): min=8451, max=60941, avg=31425.22, stdev=13048.17 00:11:16.318 clat percentiles (usec): 00:11:16.318 | 1.00th=[10028], 5.00th=[13960], 10.00th=[18220], 20.00th=[20317], 00:11:16.318 | 30.00th=[21627], 40.00th=[22414], 50.00th=[26084], 60.00th=[34341], 00:11:16.318 | 70.00th=[38011], 80.00th=[48497], 90.00th=[50070], 95.00th=[51119], 00:11:16.318 | 99.00th=[57410], 99.50th=[58459], 99.90th=[59507], 99.95th=[60556], 00:11:16.318 | 99.99th=[61080] 00:11:16.318 write: IOPS=2275, BW=9104KiB/s (9322kB/s)(9140KiB/1004msec); 0 zone resets 00:11:16.318 slat (usec): min=4, max=18709, avg=198.24, stdev=1121.65 00:11:16.318 clat (usec): min=2248, max=64679, avg=27512.70, stdev=12271.68 00:11:16.318 lat (usec): min=8154, max=64697, avg=27710.94, stdev=12358.43 00:11:16.318 clat percentiles (usec): 00:11:16.318 | 1.00th=[ 8291], 5.00th=[12518], 10.00th=[15139], 20.00th=[15926], 00:11:16.318 | 
30.00th=[17171], 40.00th=[22676], 50.00th=[25560], 60.00th=[30540], 00:11:16.319 | 70.00th=[31327], 80.00th=[37487], 90.00th=[44827], 95.00th=[52691], 00:11:16.319 | 99.00th=[59507], 99.50th=[62129], 99.90th=[64750], 99.95th=[64750], 00:11:16.319 | 99.99th=[64750] 00:11:16.319 bw ( KiB/s): min= 8208, max= 9064, per=15.90%, avg=8636.00, stdev=605.28, samples=2 00:11:16.319 iops : min= 2052, max= 2266, avg=2159.00, stdev=151.32, samples=2 00:11:16.319 lat (msec) : 4=0.02%, 10=1.64%, 20=26.61%, 50=63.49%, 100=8.24% 00:11:16.319 cpu : usr=2.09%, sys=5.58%, ctx=203, majf=0, minf=2 00:11:16.319 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:11:16.319 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:16.319 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:16.319 issued rwts: total=2048,2285,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:16.319 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:16.319 job2: (groupid=0, jobs=1): err= 0: pid=2885060: Mon Nov 18 11:39:42 2024 00:11:16.319 read: IOPS=3553, BW=13.9MiB/s (14.6MB/s)(14.5MiB/1044msec) 00:11:16.319 slat (usec): min=3, max=5498, avg=128.50, stdev=676.76 00:11:16.319 clat (usec): min=11785, max=61522, avg=17384.38, stdev=6735.71 00:11:16.319 lat (usec): min=11835, max=61527, avg=17512.87, stdev=6758.30 00:11:16.319 clat percentiles (usec): 00:11:16.319 | 1.00th=[12518], 5.00th=[13698], 10.00th=[14222], 20.00th=[15401], 00:11:16.319 | 30.00th=[15795], 40.00th=[15926], 50.00th=[16188], 60.00th=[16319], 00:11:16.319 | 70.00th=[16712], 80.00th=[17171], 90.00th=[18482], 95.00th=[20317], 00:11:16.319 | 99.00th=[56361], 99.50th=[58459], 99.90th=[60556], 99.95th=[60556], 00:11:16.319 | 99.99th=[61604] 00:11:16.319 write: IOPS=3923, BW=15.3MiB/s (16.1MB/s)(16.0MiB/1044msec); 0 zone resets 00:11:16.319 slat (usec): min=4, max=5468, avg=117.28, stdev=488.39 00:11:16.319 clat (usec): min=11597, max=61530, avg=16436.71, stdev=1723.55 
00:11:16.319 lat (usec): min=11615, max=61536, avg=16553.99, stdev=1724.96 00:11:16.319 clat percentiles (usec): 00:11:16.319 | 1.00th=[12256], 5.00th=[13960], 10.00th=[14353], 20.00th=[15270], 00:11:16.319 | 30.00th=[15926], 40.00th=[16319], 50.00th=[16450], 60.00th=[16712], 00:11:16.319 | 70.00th=[16909], 80.00th=[17433], 90.00th=[18220], 95.00th=[19268], 00:11:16.319 | 99.00th=[20579], 99.50th=[20579], 99.90th=[22152], 99.95th=[22676], 00:11:16.319 | 99.99th=[61604] 00:11:16.319 bw ( KiB/s): min=16368, max=16384, per=30.14%, avg=16376.00, stdev=11.31, samples=2 00:11:16.319 iops : min= 4092, max= 4096, avg=4094.00, stdev= 2.83, samples=2 00:11:16.319 lat (msec) : 20=95.77%, 50=3.15%, 100=1.08% 00:11:16.319 cpu : usr=4.89%, sys=9.59%, ctx=503, majf=0, minf=1 00:11:16.319 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:11:16.319 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:16.319 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:16.319 issued rwts: total=3710,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:16.319 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:16.319 job3: (groupid=0, jobs=1): err= 0: pid=2885061: Mon Nov 18 11:39:42 2024 00:11:16.319 read: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec) 00:11:16.319 slat (usec): min=2, max=7218, avg=118.88, stdev=704.56 00:11:16.319 clat (usec): min=9410, max=22839, avg=15037.87, stdev=1686.23 00:11:16.319 lat (usec): min=9421, max=23051, avg=15156.75, stdev=1787.18 00:11:16.319 clat percentiles (usec): 00:11:16.319 | 1.00th=[10421], 5.00th=[11994], 10.00th=[13173], 20.00th=[14353], 00:11:16.319 | 30.00th=[14484], 40.00th=[14746], 50.00th=[14877], 60.00th=[15008], 00:11:16.319 | 70.00th=[15401], 80.00th=[15926], 90.00th=[17171], 95.00th=[17957], 00:11:16.319 | 99.00th=[20579], 99.50th=[21365], 99.90th=[22414], 99.95th=[22938], 00:11:16.319 | 99.99th=[22938] 00:11:16.319 write: IOPS=4256, BW=16.6MiB/s 
(17.4MB/s)(16.7MiB/1002msec); 0 zone resets 00:11:16.319 slat (usec): min=3, max=7767, avg=113.18, stdev=642.48 00:11:16.319 clat (usec): min=242, max=27373, avg=15257.58, stdev=2185.66 00:11:16.319 lat (usec): min=4716, max=27379, avg=15370.76, stdev=2221.17 00:11:16.319 clat percentiles (usec): 00:11:16.319 | 1.00th=[ 5145], 5.00th=[10945], 10.00th=[13698], 20.00th=[14484], 00:11:16.319 | 30.00th=[14877], 40.00th=[15139], 50.00th=[15401], 60.00th=[15664], 00:11:16.319 | 70.00th=[15926], 80.00th=[16188], 90.00th=[16712], 95.00th=[18482], 00:11:16.319 | 99.00th=[21103], 99.50th=[22152], 99.90th=[25822], 99.95th=[25822], 00:11:16.319 | 99.99th=[27395] 00:11:16.319 bw ( KiB/s): min=16432, max=16664, per=30.46%, avg=16548.00, stdev=164.05, samples=2 00:11:16.319 iops : min= 4108, max= 4166, avg=4137.00, stdev=41.01, samples=2 00:11:16.319 lat (usec) : 250=0.01% 00:11:16.319 lat (msec) : 10=1.93%, 20=96.32%, 50=1.75% 00:11:16.319 cpu : usr=4.40%, sys=5.39%, ctx=404, majf=0, minf=1 00:11:16.319 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:11:16.319 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:16.319 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:16.319 issued rwts: total=4096,4265,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:16.319 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:16.319 00:11:16.319 Run status group 0 (all jobs): 00:11:16.319 READ: bw=48.4MiB/s (50.7MB/s), 8159KiB/s-16.0MiB/s (8355kB/s-16.7MB/s), io=50.5MiB (52.9MB), run=1002-1044msec 00:11:16.319 WRITE: bw=53.1MiB/s (55.6MB/s), 9104KiB/s-16.6MiB/s (9322kB/s-17.4MB/s), io=55.4MiB (58.1MB), run=1002-1044msec 00:11:16.319 00:11:16.319 Disk stats (read/write): 00:11:16.319 nvme0n1: ios=2875/3072, merge=0/0, ticks=29031/28568, in_queue=57599, util=97.39% 00:11:16.319 nvme0n2: ios=1585/1926, merge=0/0, ticks=19544/17316, in_queue=36860, util=90.77% 00:11:16.319 nvme0n3: ios=3129/3498, merge=0/0, 
ticks=16203/17947, in_queue=34150, util=91.46% 00:11:16.319 nvme0n4: ios=3591/3584, merge=0/0, ticks=24623/25212, in_queue=49835, util=95.91% 00:11:16.319 11:39:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:11:16.319 11:39:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2885198 00:11:16.319 11:39:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:11:16.319 11:39:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:11:16.319 [global] 00:11:16.319 thread=1 00:11:16.319 invalidate=1 00:11:16.319 rw=read 00:11:16.319 time_based=1 00:11:16.319 runtime=10 00:11:16.319 ioengine=libaio 00:11:16.319 direct=1 00:11:16.319 bs=4096 00:11:16.319 iodepth=1 00:11:16.319 norandommap=1 00:11:16.319 numjobs=1 00:11:16.319 00:11:16.319 [job0] 00:11:16.319 filename=/dev/nvme0n1 00:11:16.319 [job1] 00:11:16.319 filename=/dev/nvme0n2 00:11:16.319 [job2] 00:11:16.319 filename=/dev/nvme0n3 00:11:16.319 [job3] 00:11:16.319 filename=/dev/nvme0n4 00:11:16.319 Could not set queue depth (nvme0n1) 00:11:16.319 Could not set queue depth (nvme0n2) 00:11:16.319 Could not set queue depth (nvme0n3) 00:11:16.319 Could not set queue depth (nvme0n4) 00:11:16.578 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:16.578 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:16.578 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:16.578 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:16.578 fio-3.35 00:11:16.578 Starting 4 threads 00:11:19.869 11:39:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:19.869 11:39:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:19.869 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=4100096, buflen=4096 00:11:19.869 fio: pid=2885289, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:19.869 11:39:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:19.869 11:39:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:19.869 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=962560, buflen=4096 00:11:19.869 fio: pid=2885288, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:20.437 11:39:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:20.437 11:39:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:20.437 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=3481600, buflen=4096 00:11:20.437 fio: pid=2885286, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:20.437 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=12730368, buflen=4096 00:11:20.437 fio: pid=2885287, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:11:20.695 00:11:20.695 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2885286: Mon Nov 18 11:39:46 2024 00:11:20.695 read: IOPS=238, BW=952KiB/s 
(975kB/s)(3400KiB/3570msec) 00:11:20.695 slat (usec): min=5, max=2834, avg=14.77, stdev=97.00 00:11:20.695 clat (usec): min=208, max=86889, avg=4154.70, stdev=12160.58 00:11:20.695 lat (usec): min=214, max=89724, avg=4169.47, stdev=12186.95 00:11:20.695 clat percentiles (usec): 00:11:20.695 | 1.00th=[ 219], 5.00th=[ 225], 10.00th=[ 231], 20.00th=[ 241], 00:11:20.695 | 30.00th=[ 249], 40.00th=[ 265], 50.00th=[ 281], 60.00th=[ 285], 00:11:20.695 | 70.00th=[ 302], 80.00th=[ 404], 90.00th=[ 603], 95.00th=[41157], 00:11:20.695 | 99.00th=[42206], 99.50th=[42206], 99.90th=[86508], 99.95th=[86508], 00:11:20.695 | 99.99th=[86508] 00:11:20.696 bw ( KiB/s): min= 88, max= 4064, per=20.60%, avg=1114.67, stdev=1672.19, samples=6 00:11:20.696 iops : min= 22, max= 1016, avg=278.67, stdev=418.05, samples=6 00:11:20.696 lat (usec) : 250=30.55%, 500=55.58%, 750=4.35% 00:11:20.696 lat (msec) : 2=0.12%, 50=9.17%, 100=0.12% 00:11:20.696 cpu : usr=0.00%, sys=0.42%, ctx=852, majf=0, minf=1 00:11:20.696 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:20.696 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:20.696 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:20.696 issued rwts: total=851,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:20.696 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:20.696 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=2885287: Mon Nov 18 11:39:46 2024 00:11:20.696 read: IOPS=809, BW=3236KiB/s (3313kB/s)(12.1MiB/3842msec) 00:11:20.696 slat (usec): min=4, max=19850, avg=26.07, stdev=397.84 00:11:20.696 clat (usec): min=207, max=42019, avg=1206.38, stdev=6035.62 00:11:20.696 lat (usec): min=216, max=61013, avg=1230.16, stdev=6104.82 00:11:20.696 clat percentiles (usec): 00:11:20.696 | 1.00th=[ 221], 5.00th=[ 227], 10.00th=[ 231], 20.00th=[ 237], 00:11:20.696 | 30.00th=[ 243], 40.00th=[ 249], 50.00th=[ 258], 
60.00th=[ 285], 00:11:20.696 | 70.00th=[ 334], 80.00th=[ 383], 90.00th=[ 424], 95.00th=[ 490], 00:11:20.696 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:11:20.696 | 99.99th=[42206] 00:11:20.696 bw ( KiB/s): min= 96, max=11984, per=64.96%, avg=3513.57, stdev=4502.63, samples=7 00:11:20.696 iops : min= 24, max= 2996, avg=878.29, stdev=1125.72, samples=7 00:11:20.696 lat (usec) : 250=41.43%, 500=54.00%, 750=2.28%, 1000=0.03% 00:11:20.696 lat (msec) : 50=2.22% 00:11:20.696 cpu : usr=0.70%, sys=1.43%, ctx=3110, majf=0, minf=2 00:11:20.696 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:20.696 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:20.696 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:20.696 issued rwts: total=3109,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:20.696 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:20.696 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2885288: Mon Nov 18 11:39:46 2024 00:11:20.696 read: IOPS=72, BW=290KiB/s (297kB/s)(940KiB/3242msec) 00:11:20.696 slat (usec): min=8, max=8928, avg=61.66, stdev=579.72 00:11:20.696 clat (usec): min=292, max=42534, avg=13628.16, stdev=19139.04 00:11:20.696 lat (usec): min=312, max=49992, avg=13690.00, stdev=19198.17 00:11:20.696 clat percentiles (usec): 00:11:20.696 | 1.00th=[ 293], 5.00th=[ 306], 10.00th=[ 318], 20.00th=[ 355], 00:11:20.696 | 30.00th=[ 404], 40.00th=[ 449], 50.00th=[ 490], 60.00th=[ 545], 00:11:20.696 | 70.00th=[40633], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:11:20.696 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:11:20.696 | 99.99th=[42730] 00:11:20.696 bw ( KiB/s): min= 168, max= 568, per=5.64%, avg=305.33, stdev=138.42, samples=6 00:11:20.696 iops : min= 42, max= 142, avg=76.33, stdev=34.60, samples=6 00:11:20.696 lat (usec) : 500=51.27%, 
750=15.68%, 1000=0.42% 00:11:20.696 lat (msec) : 50=32.20% 00:11:20.696 cpu : usr=0.19%, sys=0.19%, ctx=237, majf=0, minf=1 00:11:20.696 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:20.696 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:20.696 complete : 0=0.4%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:20.696 issued rwts: total=236,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:20.696 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:20.696 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2885289: Mon Nov 18 11:39:46 2024 00:11:20.696 read: IOPS=337, BW=1349KiB/s (1381kB/s)(4004KiB/2968msec) 00:11:20.696 slat (nsec): min=4605, max=61908, avg=18813.06, stdev=12190.52 00:11:20.696 clat (usec): min=207, max=42449, avg=2918.34, stdev=9920.46 00:11:20.696 lat (usec): min=221, max=42466, avg=2937.16, stdev=9920.87 00:11:20.696 clat percentiles (usec): 00:11:20.696 | 1.00th=[ 225], 5.00th=[ 237], 10.00th=[ 241], 20.00th=[ 247], 00:11:20.696 | 30.00th=[ 255], 40.00th=[ 269], 50.00th=[ 355], 60.00th=[ 404], 00:11:20.696 | 70.00th=[ 429], 80.00th=[ 474], 90.00th=[ 529], 95.00th=[40633], 00:11:20.696 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:20.696 | 99.99th=[42206] 00:11:20.696 bw ( KiB/s): min= 264, max= 6584, per=28.87%, avg=1561.60, stdev=2807.89, samples=5 00:11:20.696 iops : min= 66, max= 1646, avg=390.40, stdev=701.97, samples=5 00:11:20.696 lat (usec) : 250=23.45%, 500=62.18%, 750=7.88% 00:11:20.696 lat (msec) : 4=0.10%, 50=6.29% 00:11:20.696 cpu : usr=0.13%, sys=0.94%, ctx=1002, majf=0, minf=2 00:11:20.696 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:20.696 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:20.696 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:20.696 issued rwts: 
total=1002,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:20.696 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:20.696 00:11:20.696 Run status group 0 (all jobs): 00:11:20.696 READ: bw=5408KiB/s (5537kB/s), 290KiB/s-3236KiB/s (297kB/s-3313kB/s), io=20.3MiB (21.3MB), run=2968-3842msec 00:11:20.696 00:11:20.696 Disk stats (read/write): 00:11:20.696 nvme0n1: ios=846/0, merge=0/0, ticks=3324/0, in_queue=3324, util=96.08% 00:11:20.696 nvme0n2: ios=3102/0, merge=0/0, ticks=3472/0, in_queue=3472, util=95.98% 00:11:20.696 nvme0n3: ios=232/0, merge=0/0, ticks=3083/0, in_queue=3083, util=96.57% 00:11:20.696 nvme0n4: ios=996/0, merge=0/0, ticks=2775/0, in_queue=2775, util=96.75% 00:11:20.696 11:39:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:20.696 11:39:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:20.954 11:39:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:20.954 11:39:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:21.212 11:39:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:21.212 11:39:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:21.470 11:39:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:21.470 11:39:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:22.037 11:39:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:22.037 11:39:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:22.298 11:39:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:11:22.298 11:39:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 2885198 00:11:22.298 11:39:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:11:22.298 11:39:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:23.231 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:23.231 11:39:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:23.231 11:39:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:11:23.231 11:39:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:23.231 11:39:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:23.231 11:39:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:23.232 11:39:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:23.232 11:39:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:11:23.232 11:39:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:11:23.232 11:39:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:23.232 nvmf hotplug test: fio failed as expected 00:11:23.232 11:39:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:23.232 11:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:23.232 11:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:23.232 11:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:23.232 11:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:23.232 11:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:11:23.232 11:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:23.232 11:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:11:23.232 11:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:23.232 11:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:11:23.232 11:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:23.232 11:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:23.232 rmmod nvme_tcp 00:11:23.232 rmmod nvme_fabrics 00:11:23.491 rmmod nvme_keyring 00:11:23.491 11:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:23.491 11:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:11:23.491 11:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:11:23.491 11:39:49 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 2883033 ']' 00:11:23.491 11:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 2883033 00:11:23.491 11:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 2883033 ']' 00:11:23.491 11:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 2883033 00:11:23.491 11:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:11:23.491 11:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:23.491 11:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2883033 00:11:23.491 11:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:23.491 11:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:23.491 11:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2883033' 00:11:23.491 killing process with pid 2883033 00:11:23.491 11:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 2883033 00:11:23.491 11:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 2883033 00:11:24.428 11:39:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:24.428 11:39:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:24.428 11:39:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:24.428 11:39:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:11:24.428 11:39:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # 
iptables-save 00:11:24.428 11:39:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:24.428 11:39:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:11:24.428 11:39:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:24.428 11:39:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:24.428 11:39:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:24.428 11:39:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:24.428 11:39:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:26.968 11:39:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:26.968 00:11:26.968 real 0m27.175s 00:11:26.968 user 1m35.556s 00:11:26.968 sys 0m6.673s 00:11:26.968 11:39:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:26.968 11:39:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.968 ************************************ 00:11:26.968 END TEST nvmf_fio_target 00:11:26.969 ************************************ 00:11:26.969 11:39:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:26.969 11:39:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:26.969 11:39:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:26.969 11:39:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:26.969 ************************************ 
00:11:26.969 START TEST nvmf_bdevio 00:11:26.969 ************************************ 00:11:26.969 11:39:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:26.969 * Looking for test storage... 00:11:26.969 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:26.969 11:39:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:26.969 11:39:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:11:26.969 11:39:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:26.969 11:39:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:26.969 11:39:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:26.969 11:39:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:26.969 11:39:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:26.969 11:39:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:11:26.969 11:39:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:11:26.969 11:39:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:11:26.969 11:39:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:11:26.969 11:39:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:11:26.969 11:39:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:11:26.969 11:39:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:11:26.969 11:39:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- 
# local lt=0 gt=0 eq=0 v 00:11:26.969 11:39:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:11:26.969 11:39:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:11:26.969 11:39:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:26.969 11:39:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:26.969 11:39:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:11:26.969 11:39:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:11:26.969 11:39:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:26.969 11:39:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:11:26.969 11:39:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:11:26.969 11:39:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:11:26.969 11:39:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:11:26.969 11:39:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:26.969 11:39:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:11:26.969 11:39:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:11:26.969 11:39:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:26.969 11:39:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:26.969 11:39:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:11:26.969 11:39:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:26.969 11:39:52 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:26.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:26.969 --rc genhtml_branch_coverage=1 00:11:26.969 --rc genhtml_function_coverage=1 00:11:26.969 --rc genhtml_legend=1 00:11:26.969 --rc geninfo_all_blocks=1 00:11:26.969 --rc geninfo_unexecuted_blocks=1 00:11:26.969 00:11:26.969 ' 00:11:26.969 11:39:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:26.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:26.969 --rc genhtml_branch_coverage=1 00:11:26.969 --rc genhtml_function_coverage=1 00:11:26.969 --rc genhtml_legend=1 00:11:26.969 --rc geninfo_all_blocks=1 00:11:26.969 --rc geninfo_unexecuted_blocks=1 00:11:26.969 00:11:26.969 ' 00:11:26.969 11:39:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:26.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:26.969 --rc genhtml_branch_coverage=1 00:11:26.969 --rc genhtml_function_coverage=1 00:11:26.969 --rc genhtml_legend=1 00:11:26.969 --rc geninfo_all_blocks=1 00:11:26.969 --rc geninfo_unexecuted_blocks=1 00:11:26.969 00:11:26.969 ' 00:11:26.969 11:39:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:26.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:26.969 --rc genhtml_branch_coverage=1 00:11:26.969 --rc genhtml_function_coverage=1 00:11:26.969 --rc genhtml_legend=1 00:11:26.969 --rc geninfo_all_blocks=1 00:11:26.969 --rc geninfo_unexecuted_blocks=1 00:11:26.969 00:11:26.969 ' 00:11:26.969 11:39:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:26.969 11:39:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:11:26.969 11:39:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:26.969 11:39:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:26.969 11:39:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:26.969 11:39:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:26.969 11:39:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:26.969 11:39:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:26.969 11:39:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:26.969 11:39:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:26.969 11:39:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:26.969 11:39:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:26.969 11:39:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:26.969 11:39:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:26.969 11:39:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:26.969 11:39:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:26.969 11:39:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:26.969 11:39:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:26.969 11:39:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:26.969 11:39:52 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:11:26.969 11:39:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:26.969 11:39:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:26.969 11:39:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:26.969 11:39:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.969 11:39:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.969 11:39:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.969 11:39:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:11:26.970 11:39:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.970 11:39:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:11:26.970 11:39:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:26.970 11:39:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:26.970 11:39:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:26.970 11:39:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:26.970 11:39:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:26.970 11:39:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:26.970 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:26.970 11:39:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:26.970 11:39:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:26.970 11:39:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:26.970 11:39:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:26.970 11:39:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:26.970 11:39:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:11:26.970 11:39:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:26.970 11:39:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:26.970 11:39:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:26.970 11:39:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:26.970 11:39:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:26.970 11:39:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:26.970 11:39:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:26.970 11:39:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:26.970 11:39:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:26.970 11:39:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:26.970 11:39:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:11:26.970 11:39:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:28.921 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:28.921 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:11:28.921 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:28.921 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:28.921 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:28.921 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:28.921 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:28.921 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:11:28.921 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:28.921 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:11:28.921 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:11:28.921 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:11:28.921 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:11:28.921 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:11:28.921 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:11:28.921 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:28.921 11:39:54 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:28.921 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:28.921 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:28.921 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:28.921 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:28.921 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:28.921 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:28.921 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:28.921 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:28.921 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:28.921 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:28.921 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:28.921 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:28.921 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:28.921 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:28.921 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:28.921 11:39:54 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:28.921 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:28.921 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:28.921 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:28.921 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:28.921 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:28.921 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:28.921 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:28.921 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:28.921 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:28.921 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:28.921 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:28.921 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:28.921 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:28.921 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:28.921 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:28.921 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:28.921 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:28.921 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:28.921 
11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:28.921 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:28.921 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:28.921 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:28.921 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:28.921 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:28.921 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:28.921 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:28.921 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:28.921 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:28.921 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:28.921 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:28.921 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:28.921 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:28.921 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:28.921 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:28.921 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:28.921 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:28.921 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:28.921 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:28.921 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:28.921 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:28.921 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:11:28.921 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:28.921 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:28.921 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:28.921 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:28.921 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:28.921 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:28.921 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:28.921 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:28.921 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:28.921 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:28.921 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:28.921 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:28.921 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:28.921 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:28.921 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:28.921 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:28.921 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:28.921 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:28.921 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:28.921 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:28.921 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:28.921 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:28.921 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:28.921 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:28.921 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:28.921 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:28.921 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:28.921 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.358 ms 00:11:28.921 00:11:28.921 --- 10.0.0.2 ping statistics --- 00:11:28.921 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:28.921 rtt min/avg/max/mdev = 0.358/0.358/0.358/0.000 ms 00:11:28.921 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:28.921 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:28.921 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:11:28.921 00:11:28.921 --- 10.0.0.1 ping statistics --- 00:11:28.921 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:28.921 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:11:28.921 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:28.921 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:11:28.922 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:28.922 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:28.922 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:28.922 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:28.922 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:28.922 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:28.922 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:28.922 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:28.922 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:28.922 11:39:54 
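The `nvmf_tcp_init` sequence just traced moves one port of the NIC into a private network namespace so the target (10.0.0.2 on `cvl_0_0`) and the initiator (10.0.0.1 on `cvl_0_1`) get isolated TCP stacks on a single host, then verifies reachability with the two pings. A sketch of that plumbing, emitted as a command list rather than executed directly since the real commands need root and this machine's interfaces (the `netns_setup_cmds` helper name is ours; the `cvl_0_0`/`cvl_0_1` names and 10.0.0.x addresses are the ones from this run):

```shell
# Emit the namespace-setup commands traced above. Pipe the output to
# "sudo sh" to actually apply them on a machine with matching interfaces.
netns_setup_cmds() {
    local ns=$1 tgt_if=$2 ini_if=$3
    cat <<EOF
ip netns add $ns
ip link set $tgt_if netns $ns
ip addr add 10.0.0.1/24 dev $ini_if
ip netns exec $ns ip addr add 10.0.0.2/24 dev $tgt_if
ip link set $ini_if up
ip netns exec $ns ip link set $tgt_if up
ip netns exec $ns ip link set lo up
ping -c 1 10.0.0.2
ip netns exec $ns ping -c 1 10.0.0.1
EOF
}

netns_setup_cmds cvl_0_0_ns_spdk cvl_0_0 cvl_0_1
```

Because the target interface lives inside the namespace, every target-side command in the rest of the log is wrapped in `ip netns exec cvl_0_0_ns_spdk ...` (that is what `NVMF_TARGET_NS_CMD` holds).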
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:28.922 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:28.922 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=2888188 00:11:28.922 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:28.922 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 2888188 00:11:28.922 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 2888188 ']' 00:11:28.922 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:28.922 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:28.922 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:28.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:28.922 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:28.922 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:28.922 [2024-11-18 11:39:54.720976] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:11:28.922 [2024-11-18 11:39:54.721133] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:29.180 [2024-11-18 11:39:54.874840] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:29.180 [2024-11-18 11:39:55.015067] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:29.180 [2024-11-18 11:39:55.015156] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:29.180 [2024-11-18 11:39:55.015182] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:29.180 [2024-11-18 11:39:55.015207] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:29.180 [2024-11-18 11:39:55.015226] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:29.180 [2024-11-18 11:39:55.018264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:29.180 [2024-11-18 11:39:55.018323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:11:29.180 [2024-11-18 11:39:55.018376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:29.180 [2024-11-18 11:39:55.018383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:11:30.115 11:39:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:30.115 11:39:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:11:30.115 11:39:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:30.115 11:39:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:30.115 11:39:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:30.115 11:39:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:30.115 11:39:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:30.115 11:39:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.115 11:39:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:30.115 [2024-11-18 11:39:55.725429] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:30.115 11:39:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.115 11:39:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:30.115 11:39:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.115 11:39:55 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:30.115 Malloc0 00:11:30.115 11:39:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.115 11:39:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:30.115 11:39:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.115 11:39:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:30.115 11:39:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.115 11:39:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:30.115 11:39:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.115 11:39:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:30.115 11:39:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.115 11:39:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:30.115 11:39:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.115 11:39:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:30.115 [2024-11-18 11:39:55.852170] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:30.115 11:39:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.115 11:39:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio 
--json /dev/fd/62 00:11:30.115 11:39:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:30.115 11:39:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:11:30.115 11:39:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:11:30.115 11:39:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:30.115 11:39:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:30.115 { 00:11:30.115 "params": { 00:11:30.115 "name": "Nvme$subsystem", 00:11:30.115 "trtype": "$TEST_TRANSPORT", 00:11:30.115 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:30.115 "adrfam": "ipv4", 00:11:30.115 "trsvcid": "$NVMF_PORT", 00:11:30.115 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:30.115 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:30.115 "hdgst": ${hdgst:-false}, 00:11:30.115 "ddgst": ${ddgst:-false} 00:11:30.115 }, 00:11:30.115 "method": "bdev_nvme_attach_controller" 00:11:30.115 } 00:11:30.115 EOF 00:11:30.115 )") 00:11:30.115 11:39:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:11:30.116 11:39:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
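The `gen_nvmf_target_json` expansion traced here renders a one-controller JSON config (heredoc template, then `jq`-validated) that the `bdevio` binary reads over a file descriptor via `--json /dev/fd/62`. A sketch reproducing the document this run generated — the values (`Nvme1`, `10.0.0.2`, port 4420, the cnode1/host1 NQNs) are the ones printed in the trace below, and the function here is a simplified stand-in for the real helper:

```shell
# Reproduce the single bdev_nvme_attach_controller entry this run fed to
# bdevio. The real helper templates these fields per subsystem; values
# here are hard-coded from this run's trace.
gen_nvmf_target_json() {
    cat <<'EOF'
{
  "params": {
    "name": "Nvme1",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host1",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}

# bdevio consumes this via process substitution, roughly:
#   bdevio --json <(gen_nvmf_target_json)
gen_nvmf_target_json
```

With this config, `bdevio` attaches an NVMe-oF controller over TCP to the listener the target set up at 10.0.0.2:4420 and exposes its namespace as `Nvme1n1`, the bdev the CUnit suite below exercises.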
00:11:30.116 11:39:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:11:30.116 11:39:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:30.116 "params": { 00:11:30.116 "name": "Nvme1", 00:11:30.116 "trtype": "tcp", 00:11:30.116 "traddr": "10.0.0.2", 00:11:30.116 "adrfam": "ipv4", 00:11:30.116 "trsvcid": "4420", 00:11:30.116 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:30.116 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:30.116 "hdgst": false, 00:11:30.116 "ddgst": false 00:11:30.116 }, 00:11:30.116 "method": "bdev_nvme_attach_controller" 00:11:30.116 }' 00:11:30.116 [2024-11-18 11:39:55.938095] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:11:30.116 [2024-11-18 11:39:55.938229] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2888353 ] 00:11:30.374 [2024-11-18 11:39:56.075528] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:30.374 [2024-11-18 11:39:56.210613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:30.374 [2024-11-18 11:39:56.210664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:30.374 [2024-11-18 11:39:56.210659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:30.940 I/O targets: 00:11:30.940 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:30.940 00:11:30.940 00:11:30.940 CUnit - A unit testing framework for C - Version 2.1-3 00:11:30.940 http://cunit.sourceforge.net/ 00:11:30.940 00:11:30.940 00:11:30.940 Suite: bdevio tests on: Nvme1n1 00:11:30.940 Test: blockdev write read block ...passed 00:11:31.199 Test: blockdev write zeroes read block ...passed 00:11:31.199 Test: blockdev write zeroes read no split ...passed 00:11:31.199 Test: blockdev write zeroes read split 
...passed 00:11:31.199 Test: blockdev write zeroes read split partial ...passed 00:11:31.199 Test: blockdev reset ...[2024-11-18 11:39:56.961239] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:11:31.199 [2024-11-18 11:39:56.961424] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2f00 (9): Bad file descriptor 00:11:31.199 [2024-11-18 11:39:56.976277] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:11:31.199 passed 00:11:31.199 Test: blockdev write read 8 blocks ...passed 00:11:31.199 Test: blockdev write read size > 128k ...passed 00:11:31.199 Test: blockdev write read invalid size ...passed 00:11:31.199 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:31.199 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:31.199 Test: blockdev write read max offset ...passed 00:11:31.458 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:31.458 Test: blockdev writev readv 8 blocks ...passed 00:11:31.458 Test: blockdev writev readv 30 x 1block ...passed 00:11:31.458 Test: blockdev writev readv block ...passed 00:11:31.458 Test: blockdev writev readv size > 128k ...passed 00:11:31.458 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:31.458 Test: blockdev comparev and writev ...[2024-11-18 11:39:57.196129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:31.458 [2024-11-18 11:39:57.196210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:31.458 [2024-11-18 11:39:57.196250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:31.458 [2024-11-18 
11:39:57.196277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:31.458 [2024-11-18 11:39:57.196743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:31.458 [2024-11-18 11:39:57.196784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:31.458 [2024-11-18 11:39:57.196821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:31.458 [2024-11-18 11:39:57.196845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:31.458 [2024-11-18 11:39:57.197296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:31.458 [2024-11-18 11:39:57.197330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:31.458 [2024-11-18 11:39:57.197369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:31.458 [2024-11-18 11:39:57.197395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:31.458 [2024-11-18 11:39:57.197852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:31.458 [2024-11-18 11:39:57.197885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:31.458 [2024-11-18 11:39:57.197919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:11:31.458 [2024-11-18 11:39:57.197953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:31.458 passed 00:11:31.458 Test: blockdev nvme passthru rw ...passed 00:11:31.458 Test: blockdev nvme passthru vendor specific ...[2024-11-18 11:39:57.280896] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:31.458 [2024-11-18 11:39:57.280956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:31.458 [2024-11-18 11:39:57.281211] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:31.458 [2024-11-18 11:39:57.281245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:31.458 [2024-11-18 11:39:57.281451] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:31.458 [2024-11-18 11:39:57.281484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:31.458 [2024-11-18 11:39:57.281706] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:31.458 [2024-11-18 11:39:57.281738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:31.458 passed 00:11:31.458 Test: blockdev nvme admin passthru ...passed 00:11:31.458 Test: blockdev copy ...passed 00:11:31.458 00:11:31.458 Run Summary: Type Total Ran Passed Failed Inactive 00:11:31.458 suites 1 1 n/a 0 0 00:11:31.458 tests 23 23 23 0 0 00:11:31.458 asserts 152 152 152 0 n/a 00:11:31.458 00:11:31.458 Elapsed time = 1.193 seconds 
00:11:32.392 11:39:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:32.392 11:39:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.392 11:39:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:32.392 11:39:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.392 11:39:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:32.392 11:39:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:32.392 11:39:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:32.392 11:39:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:11:32.392 11:39:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:32.392 11:39:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:11:32.392 11:39:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:32.392 11:39:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:32.392 rmmod nvme_tcp 00:11:32.392 rmmod nvme_fabrics 00:11:32.392 rmmod nvme_keyring 00:11:32.392 11:39:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:32.392 11:39:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:11:32.392 11:39:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:11:32.392 11:39:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 2888188 ']' 00:11:32.392 11:39:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 2888188 00:11:32.392 11:39:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 
-- # '[' -z 2888188 ']' 00:11:32.392 11:39:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 2888188 00:11:32.392 11:39:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:11:32.392 11:39:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:32.392 11:39:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2888188 00:11:32.392 11:39:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:11:32.392 11:39:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:11:32.392 11:39:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2888188' 00:11:32.392 killing process with pid 2888188 00:11:32.392 11:39:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 2888188 00:11:32.392 11:39:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 2888188 00:11:33.767 11:39:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:33.767 11:39:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:33.767 11:39:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:33.767 11:39:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:11:33.767 11:39:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:11:33.767 11:39:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:33.767 11:39:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:11:33.767 11:39:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k 
]] 00:11:33.767 11:39:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:33.767 11:39:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:33.767 11:39:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:33.767 11:39:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:35.672 11:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:35.672 00:11:35.672 real 0m9.150s 00:11:35.672 user 0m22.241s 00:11:35.672 sys 0m2.396s 00:11:35.672 11:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:35.672 11:40:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:35.672 ************************************ 00:11:35.672 END TEST nvmf_bdevio 00:11:35.672 ************************************ 00:11:35.931 11:40:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:35.931 00:11:35.931 real 4m30.984s 00:11:35.931 user 11m53.264s 00:11:35.931 sys 1m9.997s 00:11:35.931 11:40:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:35.931 11:40:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:35.931 ************************************ 00:11:35.931 END TEST nvmf_target_core 00:11:35.931 ************************************ 00:11:35.931 11:40:01 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:35.931 11:40:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:35.931 11:40:01 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:35.931 11:40:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set 
+x 00:11:35.931 ************************************ 00:11:35.931 START TEST nvmf_target_extra 00:11:35.931 ************************************ 00:11:35.931 11:40:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:35.931 * Looking for test storage... 00:11:35.931 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:11:35.931 11:40:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:35.931 11:40:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:11:35.931 11:40:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:35.931 11:40:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:35.931 11:40:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:35.931 11:40:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:35.931 11:40:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:35.931 11:40:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:11:35.931 11:40:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:11:35.931 11:40:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:11:35.931 11:40:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:11:35.931 11:40:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:11:35.931 11:40:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:11:35.931 11:40:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:11:35.931 11:40:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:35.932 11:40:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 
00:11:35.932 11:40:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:11:35.932 11:40:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:35.932 11:40:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:35.932 11:40:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:11:35.932 11:40:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:11:35.932 11:40:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:35.932 11:40:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:11:35.932 11:40:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:11:35.932 11:40:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:11:35.932 11:40:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:11:35.932 11:40:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:35.932 11:40:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:11:35.932 11:40:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:11:35.932 11:40:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:35.932 11:40:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:35.932 11:40:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:11:35.932 11:40:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:35.932 11:40:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:35.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.932 --rc genhtml_branch_coverage=1 00:11:35.932 --rc genhtml_function_coverage=1 00:11:35.932 --rc genhtml_legend=1 00:11:35.932 --rc geninfo_all_blocks=1 
00:11:35.932 --rc geninfo_unexecuted_blocks=1 00:11:35.932 00:11:35.932 ' 00:11:35.932 11:40:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:35.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.932 --rc genhtml_branch_coverage=1 00:11:35.932 --rc genhtml_function_coverage=1 00:11:35.932 --rc genhtml_legend=1 00:11:35.932 --rc geninfo_all_blocks=1 00:11:35.932 --rc geninfo_unexecuted_blocks=1 00:11:35.932 00:11:35.932 ' 00:11:35.932 11:40:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:35.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.932 --rc genhtml_branch_coverage=1 00:11:35.932 --rc genhtml_function_coverage=1 00:11:35.932 --rc genhtml_legend=1 00:11:35.932 --rc geninfo_all_blocks=1 00:11:35.932 --rc geninfo_unexecuted_blocks=1 00:11:35.932 00:11:35.932 ' 00:11:35.932 11:40:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:35.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.932 --rc genhtml_branch_coverage=1 00:11:35.932 --rc genhtml_function_coverage=1 00:11:35.932 --rc genhtml_legend=1 00:11:35.932 --rc geninfo_all_blocks=1 00:11:35.932 --rc geninfo_unexecuted_blocks=1 00:11:35.932 00:11:35.932 ' 00:11:35.932 11:40:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:35.932 11:40:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:35.932 11:40:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:35.932 11:40:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:35.932 11:40:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:35.932 11:40:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:35.932 11:40:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 
-- # NVMF_IP_PREFIX=192.168.100 00:11:35.932 11:40:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:35.932 11:40:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:35.932 11:40:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:35.932 11:40:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:35.932 11:40:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:35.932 11:40:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:35.932 11:40:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:35.932 11:40:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:35.932 11:40:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:35.932 11:40:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:35.932 11:40:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:35.932 11:40:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:35.932 11:40:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:11:35.932 11:40:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:35.932 11:40:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:35.932 11:40:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:35.932 11:40:01 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.932 11:40:01 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.932 11:40:01 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.932 11:40:01 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:35.932 11:40:01 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.932 11:40:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:11:35.932 11:40:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:35.932 11:40:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:35.932 11:40:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:35.932 11:40:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:35.932 11:40:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:35.932 11:40:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:35.932 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:35.932 11:40:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:35.932 11:40:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:35.932 11:40:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:35.932 11:40:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:35.932 11:40:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:35.932 11:40:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:11:35.932 11:40:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:35.932 11:40:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:35.932 11:40:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:35.932 11:40:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:35.932 ************************************ 00:11:35.932 START TEST nvmf_example 00:11:35.932 ************************************ 00:11:35.932 11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:36.190 * Looking for test storage... 00:11:36.190 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:36.190 11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:36.190 11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lcov --version 00:11:36.190 11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:36.190 11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:36.190 11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:36.190 11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:36.190 11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:36.190 11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:11:36.190 11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:11:36.190 11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:11:36.190 
11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:11:36.190 11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:11:36.190 11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:11:36.190 11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:11:36.190 11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:36.190 11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:11:36.190 11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:11:36.190 11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:36.190 11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:36.190 11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:11:36.190 11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:11:36.190 11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:36.190 11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:11:36.190 11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:11:36.190 11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:11:36.190 11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:11:36.190 11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:36.190 11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:11:36.190 11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 
00:11:36.190 11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:36.190 11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:36.190 11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:11:36.190 11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:36.190 11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:36.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:36.190 --rc genhtml_branch_coverage=1 00:11:36.190 --rc genhtml_function_coverage=1 00:11:36.190 --rc genhtml_legend=1 00:11:36.190 --rc geninfo_all_blocks=1 00:11:36.190 --rc geninfo_unexecuted_blocks=1 00:11:36.190 00:11:36.190 ' 00:11:36.190 11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:36.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:36.190 --rc genhtml_branch_coverage=1 00:11:36.190 --rc genhtml_function_coverage=1 00:11:36.190 --rc genhtml_legend=1 00:11:36.190 --rc geninfo_all_blocks=1 00:11:36.190 --rc geninfo_unexecuted_blocks=1 00:11:36.190 00:11:36.190 ' 00:11:36.190 11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:36.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:36.190 --rc genhtml_branch_coverage=1 00:11:36.190 --rc genhtml_function_coverage=1 00:11:36.190 --rc genhtml_legend=1 00:11:36.190 --rc geninfo_all_blocks=1 00:11:36.190 --rc geninfo_unexecuted_blocks=1 00:11:36.190 00:11:36.190 ' 00:11:36.190 11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:36.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:36.190 --rc 
genhtml_branch_coverage=1 00:11:36.190 --rc genhtml_function_coverage=1 00:11:36.190 --rc genhtml_legend=1 00:11:36.190 --rc geninfo_all_blocks=1 00:11:36.190 --rc geninfo_unexecuted_blocks=1 00:11:36.190 00:11:36.190 ' 00:11:36.191 11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:36.191 11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:11:36.191 11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:36.191 11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:36.191 11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:36.191 11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:36.191 11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:36.191 11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:36.191 11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:36.191 11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:36.191 11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:36.191 11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:36.191 11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:36.191 11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:36.191 11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:36.191 11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:36.191 11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:36.191 11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:36.191 11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:36.191 11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:11:36.191 11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:36.191 11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:36.191 11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:36.191 11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.191 11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.191 11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.191 11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:11:36.191 11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.191 11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:11:36.191 11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:36.191 11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:36.191 11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:36.191 11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:36.191 11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:36.191 11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:36.191 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:36.191 11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:36.191 11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:36.191 11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:36.191 11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:11:36.191 11:40:01 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:11:36.191 11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:11:36.191 11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:11:36.191 11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:11:36.191 11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:11:36.191 11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:11:36.191 11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:11:36.191 11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:36.191 11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:36.191 11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:11:36.191 11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:36.191 11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:36.191 11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:36.191 11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:36.191 11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:36.191 11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:36.191 11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:36.191 
11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:36.191 11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:36.191 11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:36.191 11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:11:36.191 11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:38.723 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:38.723 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:11:38.723 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:38.723 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:38.723 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:38.723 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:38.723 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:38.723 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:11:38.723 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:38.723 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:11:38.723 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:11:38.723 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:11:38.723 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:11:38.723 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@322 -- # mlx=() 00:11:38.723 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:11:38.723 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:38.723 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:38.723 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:38.723 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:38.723 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:38.723 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:38.723 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:38.723 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:38.723 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:38.723 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:38.723 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:38.723 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:38.723 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:38.723 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:38.723 11:40:04 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:38.723 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:38.723 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:38.723 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:38.723 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:38.723 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:38.723 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:38.723 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:38.723 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:38.723 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:38.723 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:38.723 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:38.723 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:38.723 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:38.723 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:38.723 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:38.723 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:38.723 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:38.723 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # 
[[ 0x159b == \0\x\1\0\1\9 ]] 00:11:38.723 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:38.723 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:38.723 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:38.723 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:38.723 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:38.723 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:38.723 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:38.723 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:38.723 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:38.723 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:38.723 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:38.723 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:38.723 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:38.723 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:38.723 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:38.723 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:38.723 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:38.723 11:40:04 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:38.723 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:38.723 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:38.723 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:38.723 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:38.723 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:38.723 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:38.723 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:38.723 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:11:38.723 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:38.723 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:38.723 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:38.723 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:38.723 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:38.724 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:38.724 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:38.724 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:38.724 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:38.724 
11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:38.724 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:38.724 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:38.724 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:38.724 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:38.724 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:38.724 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:38.724 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:38.724 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:38.724 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:38.724 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:38.724 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:38.724 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:38.724 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:38.724 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:38.724 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:38.724 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:38.724 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:38.724 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.214 ms 00:11:38.724 00:11:38.724 --- 10.0.0.2 ping statistics --- 00:11:38.724 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:38.724 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:11:38.724 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:38.724 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:38.724 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:11:38.724 00:11:38.724 --- 10.0.0.1 ping statistics --- 00:11:38.724 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:38.724 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:11:38.724 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:38.724 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:11:38.724 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:38.724 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:38.724 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:38.724 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:38.724 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:38.724 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:38.724 11:40:04 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:38.724 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:11:38.724 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:11:38.724 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:38.724 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:38.724 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:11:38.724 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:11:38.724 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2890753 00:11:38.724 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:11:38.724 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:38.724 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2890753 00:11:38.724 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 2890753 ']' 00:11:38.724 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:38.724 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:38.724 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:11:38.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:38.724 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:38.724 11:40:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:39.657 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:39.657 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:11:39.657 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:11:39.657 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:39.657 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:39.657 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:39.657 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.657 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:39.657 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.657 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:11:39.657 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.657 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:39.657 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.657 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:11:39.657 
11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:39.657 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.657 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:39.657 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.657 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:11:39.657 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:39.657 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.657 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:39.657 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.657 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:39.657 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.657 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:39.657 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.657 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:11:39.657 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 
4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:51.852 Initializing NVMe Controllers 00:11:51.852 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:51.852 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:51.852 Initialization complete. Launching workers. 00:11:51.852 ======================================================== 00:11:51.852 Latency(us) 00:11:51.852 Device Information : IOPS MiB/s Average min max 00:11:51.852 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11806.77 46.12 5419.93 1305.95 15569.12 00:11:51.852 ======================================================== 00:11:51.852 Total : 11806.77 46.12 5419.93 1305.95 15569.12 00:11:51.852 00:11:51.852 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:11:51.852 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:11:51.852 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:51.853 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:11:51.853 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:51.853 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:11:51.853 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:51.853 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:51.853 rmmod nvme_tcp 00:11:51.853 rmmod nvme_fabrics 00:11:51.853 rmmod nvme_keyring 00:11:51.853 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:51.853 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 
00:11:51.853 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:11:51.853 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 2890753 ']' 00:11:51.853 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 2890753 00:11:51.853 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 2890753 ']' 00:11:51.853 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 2890753 00:11:51.853 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:11:51.853 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:51.853 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2890753 00:11:51.853 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:11:51.853 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:11:51.853 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2890753' 00:11:51.853 killing process with pid 2890753 00:11:51.853 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 2890753 00:11:51.853 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 2890753 00:11:51.853 nvmf threads initialize successfully 00:11:51.853 bdev subsystem init successfully 00:11:51.853 created a nvmf target service 00:11:51.853 create targets's poll groups done 00:11:51.853 all subsystems of target started 00:11:51.853 nvmf target is running 00:11:51.853 all subsystems of target stopped 00:11:51.853 destroy targets's poll groups done 00:11:51.853 destroyed the nvmf target service 00:11:51.853 bdev subsystem 
finish successfully 00:11:51.853 nvmf threads destroy successfully 00:11:51.853 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:51.853 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:51.853 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:51.853 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:11:51.853 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:11:51.853 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:51.853 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:11:51.853 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:51.853 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:51.853 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:51.853 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:51.853 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:53.758 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:53.758 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:11:53.758 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:53.758 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:53.758 00:11:53.758 real 0m17.364s 00:11:53.758 user 0m48.050s 00:11:53.758 sys 0m3.607s 00:11:53.758 
11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:53.758 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:53.758 ************************************ 00:11:53.758 END TEST nvmf_example 00:11:53.758 ************************************ 00:11:53.758 11:40:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:53.758 11:40:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:53.758 11:40:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:53.758 11:40:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:53.758 ************************************ 00:11:53.758 START TEST nvmf_filesystem 00:11:53.758 ************************************ 00:11:53.758 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:53.758 * Looking for test storage... 
00:11:53.758 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:53.759 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:53.759 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:11:53.759 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:53.759 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:53.759 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:53.759 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:53.759 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:53.759 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:53.759 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:53.759 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:53.759 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:53.759 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:53.759 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:53.759 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:53.759 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:53.759 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:53.759 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:53.759 
11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:53.759 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:53.759 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:53.759 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:53.759 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:53.759 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:53.759 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:53.759 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:53.759 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:53.759 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:53.759 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:53.759 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:53.759 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:53.759 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:53.759 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:53.759 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:53.759 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:53.759 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:11:53.759 --rc genhtml_branch_coverage=1 00:11:53.759 --rc genhtml_function_coverage=1 00:11:53.759 --rc genhtml_legend=1 00:11:53.759 --rc geninfo_all_blocks=1 00:11:53.759 --rc geninfo_unexecuted_blocks=1 00:11:53.759 00:11:53.759 ' 00:11:53.759 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:53.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:53.759 --rc genhtml_branch_coverage=1 00:11:53.759 --rc genhtml_function_coverage=1 00:11:53.759 --rc genhtml_legend=1 00:11:53.759 --rc geninfo_all_blocks=1 00:11:53.759 --rc geninfo_unexecuted_blocks=1 00:11:53.759 00:11:53.759 ' 00:11:53.759 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:53.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:53.759 --rc genhtml_branch_coverage=1 00:11:53.759 --rc genhtml_function_coverage=1 00:11:53.759 --rc genhtml_legend=1 00:11:53.759 --rc geninfo_all_blocks=1 00:11:53.759 --rc geninfo_unexecuted_blocks=1 00:11:53.759 00:11:53.759 ' 00:11:53.759 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:53.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:53.759 --rc genhtml_branch_coverage=1 00:11:53.759 --rc genhtml_function_coverage=1 00:11:53.759 --rc genhtml_legend=1 00:11:53.759 --rc geninfo_all_blocks=1 00:11:53.759 --rc geninfo_unexecuted_blocks=1 00:11:53.759 00:11:53.759 ' 00:11:53.759 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:11:53.759 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:53.759 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:53.759 11:40:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:53.759 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:53.759 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:53.759 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:11:53.759 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:11:53.759 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:11:53.759 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:53.759 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:11:53.759 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:53.759 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:53.759 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:53.759 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:53.759 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:53.759 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:53.759 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:53.759 11:40:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:53.759 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:53.759 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:53.759 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:53.759 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:53.759 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:53.759 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:53.759 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:11:53.759 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:11:53.759 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:53.759 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:53.759 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:11:53.759 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:11:53.759 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:11:53.759 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:53.759 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:11:53.759 11:40:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:11:53.759 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:11:53.759 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:53.759 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:53.759 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:11:53.759 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:11:53.759 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:11:53.759 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:11:53.759 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:11:53.759 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:11:53.759 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:11:53.759 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:11:53.759 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:11:53.759 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:53.759 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:11:53.759 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:11:53.759 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@42 -- # CONFIG_VHOST=y 00:11:53.760 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:11:53.760 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:11:53.760 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:11:53.760 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:11:53.760 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:53.760 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:11:53.760 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:11:53.760 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:11:53.760 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:11:53.760 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:11:53.760 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:11:53.760 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:53.760 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:11:53.760 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:11:53.760 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:11:53.760 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:11:53.760 11:40:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:11:53.760 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:11:53.760 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:11:53.760 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:11:53.760 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:11:53.760 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:11:53.760 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:11:53.760 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:11:53.760 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:11:53.760 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:11:53.760 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:11:53.760 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:11:53.760 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:11:53.760 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:11:53.760 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:11:53.760 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:11:53.760 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # 
CONFIG_DPDK_PKG_CONFIG=n 00:11:53.760 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:11:53.760 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:11:53.760 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:11:53.760 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:11:53.760 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:11:53.760 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:11:53.760 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:11:53.760 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:11:53.760 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:11:53.760 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:11:53.760 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:11:53.760 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:53.760 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:11:53.760 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:11:53.760 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:11:53.760 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:53.760 11:40:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:53.760 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:53.760 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:53.760 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:53.760 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:53.760 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:11:53.760 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:53.760 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:53.760 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:53.760 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:53.760 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:53.760 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:53.760 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:53.760 
11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:11:53.760 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:53.760 #define SPDK_CONFIG_H 00:11:53.760 #define SPDK_CONFIG_AIO_FSDEV 1 00:11:53.760 #define SPDK_CONFIG_APPS 1 00:11:53.760 #define SPDK_CONFIG_ARCH native 00:11:53.760 #define SPDK_CONFIG_ASAN 1 00:11:53.760 #undef SPDK_CONFIG_AVAHI 00:11:53.760 #undef SPDK_CONFIG_CET 00:11:53.760 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:11:53.760 #define SPDK_CONFIG_COVERAGE 1 00:11:53.760 #define SPDK_CONFIG_CROSS_PREFIX 00:11:53.760 #undef SPDK_CONFIG_CRYPTO 00:11:53.760 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:53.760 #undef SPDK_CONFIG_CUSTOMOCF 00:11:53.760 #undef SPDK_CONFIG_DAOS 00:11:53.760 #define SPDK_CONFIG_DAOS_DIR 00:11:53.760 #define SPDK_CONFIG_DEBUG 1 00:11:53.760 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:53.760 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:53.760 #define SPDK_CONFIG_DPDK_INC_DIR 00:11:53.760 #define SPDK_CONFIG_DPDK_LIB_DIR 00:11:53.760 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:53.760 #undef SPDK_CONFIG_DPDK_UADK 00:11:53.760 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:53.760 #define SPDK_CONFIG_EXAMPLES 1 00:11:53.760 #undef SPDK_CONFIG_FC 00:11:53.760 #define SPDK_CONFIG_FC_PATH 00:11:53.760 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:53.760 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:53.760 #define SPDK_CONFIG_FSDEV 1 00:11:53.760 #undef SPDK_CONFIG_FUSE 00:11:53.760 #undef SPDK_CONFIG_FUZZER 00:11:53.760 #define SPDK_CONFIG_FUZZER_LIB 00:11:53.760 #undef SPDK_CONFIG_GOLANG 00:11:53.760 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:53.760 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:53.760 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:53.760 #define 
SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:53.760 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:53.760 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:53.760 #undef SPDK_CONFIG_HAVE_LZ4 00:11:53.760 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:11:53.760 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:11:53.760 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:53.760 #define SPDK_CONFIG_IDXD 1 00:11:53.760 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:53.760 #undef SPDK_CONFIG_IPSEC_MB 00:11:53.760 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:53.760 #define SPDK_CONFIG_ISAL 1 00:11:53.760 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:53.760 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:53.760 #define SPDK_CONFIG_LIBDIR 00:11:53.760 #undef SPDK_CONFIG_LTO 00:11:53.760 #define SPDK_CONFIG_MAX_LCORES 128 00:11:53.760 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:11:53.760 #define SPDK_CONFIG_NVME_CUSE 1 00:11:53.760 #undef SPDK_CONFIG_OCF 00:11:53.760 #define SPDK_CONFIG_OCF_PATH 00:11:53.760 #define SPDK_CONFIG_OPENSSL_PATH 00:11:53.760 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:53.760 #define SPDK_CONFIG_PGO_DIR 00:11:53.760 #undef SPDK_CONFIG_PGO_USE 00:11:53.760 #define SPDK_CONFIG_PREFIX /usr/local 00:11:53.760 #undef SPDK_CONFIG_RAID5F 00:11:53.760 #undef SPDK_CONFIG_RBD 00:11:53.760 #define SPDK_CONFIG_RDMA 1 00:11:53.760 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:53.760 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:53.760 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:53.760 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:53.760 #define SPDK_CONFIG_SHARED 1 00:11:53.760 #undef SPDK_CONFIG_SMA 00:11:53.760 #define SPDK_CONFIG_TESTS 1 00:11:53.760 #undef SPDK_CONFIG_TSAN 00:11:53.761 #define SPDK_CONFIG_UBLK 1 00:11:53.761 #define SPDK_CONFIG_UBSAN 1 00:11:53.761 #undef SPDK_CONFIG_UNIT_TESTS 00:11:53.761 #undef SPDK_CONFIG_URING 00:11:53.761 #define SPDK_CONFIG_URING_PATH 00:11:53.761 #undef SPDK_CONFIG_URING_ZNS 00:11:53.761 #undef SPDK_CONFIG_USDT 00:11:53.761 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:53.761 
#undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:53.761 #undef SPDK_CONFIG_VFIO_USER 00:11:53.761 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:53.761 #define SPDK_CONFIG_VHOST 1 00:11:53.761 #define SPDK_CONFIG_VIRTIO 1 00:11:53.761 #undef SPDK_CONFIG_VTUNE 00:11:53.761 #define SPDK_CONFIG_VTUNE_DIR 00:11:53.761 #define SPDK_CONFIG_WERROR 1 00:11:53.761 #define SPDK_CONFIG_WPDK_DIR 00:11:53.761 #undef SPDK_CONFIG_XNVME 00:11:53.761 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:53.761 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:53.761 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:53.761 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:53.761 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:53.761 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:53.761 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:53.761 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:11:53.761 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.761 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.761 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:53.761 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.761 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:53.761 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:53.761 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:53.761 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:53.761 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:53.761 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:53.761 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:53.761 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:11:53.761 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # 
PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:11:53.761 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:53.761 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:53.761 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:53.761 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:53.761 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:53.761 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:53.761 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:53.761 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:53.761 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:53.761 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:53.761 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:53.761 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:53.761 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:53.761 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:53.761 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:11:53.761 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:53.761 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:53.761 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:11:53.761 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:11:53.761 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:53.761 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:53.761 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:53.761 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:53.761 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:53.761 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:53.761 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:53.761 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:53.761 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:53.761 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:53.761 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:53.761 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:53.761 11:40:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:53.761 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:53.761 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:53.761 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:53.761 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:11:53.761 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:53.761 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:53.761 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:53.761 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:53.761 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:53.761 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:53.761 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:11:53.761 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:53.761 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:11:53.761 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:53.761 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:53.761 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:53.761 
11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:53.761 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:53.761 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:53.761 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:53.761 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:11:53.761 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:53.761 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:53.761 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:53.761 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:53.761 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:53.762 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:53.762 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:53.762 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:11:53.762 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:53.762 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:53.762 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:53.762 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:53.762 11:40:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:53.762 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:53.762 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:53.762 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:53.762 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:11:53.762 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:53.762 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:11:53.762 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:53.762 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:11:53.762 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:53.762 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:11:53.762 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:53.762 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:11:53.762 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:53.762 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:53.762 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:11:53.762 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:11:53.762 
11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:11:53.762 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:11:53.762 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:11:53.762 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:53.762 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:53.762 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:11:53.762 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:53.762 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:11:53.762 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:53.762 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:11:53.762 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:53.762 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:11:53.762 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:53.762 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:11:53.762 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:11:53.762 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:11:53.762 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:11:53.762 11:40:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:11:53.762 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:11:53.762 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:11:53.762 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:53.762 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:53.762 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:53.762 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:53.762 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:53.762 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:53.762 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:53.762 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:53.762 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:53.762 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:53.762 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:11:53.762 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:53.762 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:53.762 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 
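The long run of paired `-- # : <value>` / `-- # export <NAME>` entries in this stretch of the trace is consistent with bash's default-assignment idiom, in which `:` is the no-op builtin and `"${VAR:=default}"` assigns only when the variable is unset or empty. A minimal sketch of that idiom (flag names and values taken from the log above; the exact defaults in `common/autotest_common.sh` may differ):

```shell
#!/usr/bin/env bash
# `: "${VAR:=x}"` assigns x only if VAR is unset/empty; `:` itself does
# nothing. Under `set -x` this traces as `: x` followed by `export VAR`,
# matching the `-- # : 1` / `-- # export RUN_NIGHTLY` pairs in the log.
: "${RUN_NIGHTLY:=1}"
export RUN_NIGHTLY
: "${SPDK_TEST_NVMF:=1}"
export SPDK_TEST_NVMF
: "${SPDK_TEST_NVMF_TRANSPORT:=tcp}"
export SPDK_TEST_NVMF_TRANSPORT

echo "$RUN_NIGHTLY $SPDK_TEST_NVMF $SPDK_TEST_NVMF_TRANSPORT"
```

Because `:=` only fills in a missing value, a job can still override any flag from the environment before the script runs, which is why the traced values vary per CI configuration.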
00:11:53.762 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:53.762 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:53.762 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:53.762 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:53.762 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:53.762 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:53.762 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:53.762 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:53.762 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:11:53.762 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:53.762 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:11:53.762 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:53.762 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:53.762 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:53.762 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:11:53.762 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:53.762 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 
00:11:53.762 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:11:53.762 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:11:53.762 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:11:53.762 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:53.762 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:53.762 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:53.762 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:53.762 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:53.762 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:53.762 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:53.762 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:53.762 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:53.762 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:53.763 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:53.763 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:53.763 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:53.763 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:11:53.763 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:53.763 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:53.763 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:53.763 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:53.763 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:53.763 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:11:53.763 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:11:53.763 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:11:53.763 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:53.763 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:53.763 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:53.763 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:53.763 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:11:53.763 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:11:53.763 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:53.763 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:53.763 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:53.763 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:53.763 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:53.763 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:53.763 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:53.763 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:53.763 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:53.763 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:53.763 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:53.763 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:53.763 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:11:53.763 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:11:53.763 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:11:53.763 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:11:53.763 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:11:53.763 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:53.763 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:11:53.763 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:11:53.763 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:11:53.763 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:11:53.763 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:11:53.763 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:11:53.763 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:11:53.763 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:11:53.763 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:11:53.763 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:11:53.763 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:11:53.763 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@290 -- # MAKEFLAGS=-j48 00:11:53.763 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:11:53.763 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:11:53.763 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:11:53.763 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:11:53.763 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:11:53.763 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:11:53.763 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:11:53.763 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 2892587 ]] 00:11:53.763 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 2892587 00:11:53.763 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:11:53.763 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:11:53.763 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:11:53.763 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:11:53.763 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:11:53.763 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:11:53.763 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:11:53.763 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:11:53.763 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.XewPP0 00:11:53.763 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:53.763 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:11:53.763 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:11:53.763 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.XewPP0/tests/target /tmp/spdk.XewPP0 00:11:53.763 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:11:53.763 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:53.764 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:11:53.764 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:11:53.764 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:11:53.764 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:11:53.764 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:11:53.764 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # 
sizes["$mount"]=67108864 00:11:53.764 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:11:53.764 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:53.764 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:11:53.764 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:11:53.764 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:11:53.764 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:11:53.764 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:11:53.764 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:53.764 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:11:53.764 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:11:53.764 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=55049646080 00:11:53.764 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=61988532224 00:11:53.764 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=6938886144 00:11:53.764 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:53.764 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:53.764 
11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:53.764 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=30982897664 00:11:53.764 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=30994264064 00:11:53.764 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=11366400 00:11:53.764 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:53.764 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:53.764 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:53.764 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=12375269376 00:11:53.764 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=12397707264 00:11:53.764 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=22437888 00:11:53.764 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:53.764 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:53.764 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:53.764 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=30993858560 00:11:53.764 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=30994268160 00:11:53.764 11:40:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=409600 00:11:53.764 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:53.764 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:53.764 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:53.764 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=6198837248 00:11:53.764 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=6198849536 00:11:53.764 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:11:53.764 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:53.764 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:11:53.764 * Looking for test storage... 
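The trace above shows autotest_common.sh scanning the mount table into bash associative arrays (`mounts`, `fss`, `sizes`, `avails`, `uses`) before hunting for test storage. A minimal standalone sketch of that scan, assuming `df -T -B1` output (seven fields per record, matching the `read -r source fs size use avail _ mount` loop in the trace; the real helper lives in SPDK's autotest_common.sh and may invoke `df` differently):

```shell
#!/usr/bin/env bash
# Sketch: build per-mountpoint lookup tables the way the trace does.
declare -A mounts fss sizes avails uses

# `df -T -B1` prints: source fstype size used avail use% mountpoint,
# which lines up with the seven fields read per record below.
while read -r source fs size use avail _ mount; do
  mounts["$mount"]=$source
  fss["$mount"]=$fs
  sizes["$mount"]=$size
  uses["$mount"]=$use
  avails["$mount"]=$avail
done < <(df -T -B1 | tail -n +2)   # tail skips the header line

# Example query: which filesystem backs / and how much space is free.
echo "/ is ${fss[/]} on ${mounts[/]} with ${avails[/]} bytes free"
```

With the tables built, the storage search in the trace reduces to `df <candidate-dir>` to find the owning mountpoint, then comparing `avails[$mount]` against the requested size.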
00:11:53.764 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:11:53.764 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:11:53.764 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:53.764 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:53.764 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:11:53.764 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=55049646080 00:11:53.764 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:11:53.764 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:11:53.764 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:11:53.764 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:11:53.764 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:11:53.764 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=9153478656 00:11:53.764 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:53.764 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:53.764 11:40:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:53.764 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:53.764 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:53.764 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:11:53.764 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:11:53.764 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:11:53.764 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:53.764 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:53.764 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:11:53.764 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:11:53.764 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:53.764 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:11:53.764 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:53.764 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:53.764 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:53.764 11:40:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:11:53.764 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:53.764 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:53.764 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:53.764 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:11:53.764 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:53.764 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:53.764 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:53.764 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:53.764 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:53.764 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:53.764 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:53.764 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:53.764 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:53.764 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:53.764 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:53.764 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:53.764 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:11:53.764 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:53.764 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:53.764 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:53.764 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:53.764 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:53.764 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:53.764 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:53.765 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:53.765 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:53.765 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:53.765 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:53.765 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:53.765 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:53.765 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:53.765 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:53.765 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:53.765 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:53.765 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:53.765 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:53.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:53.765 --rc genhtml_branch_coverage=1 00:11:53.765 --rc genhtml_function_coverage=1 00:11:53.765 --rc genhtml_legend=1 00:11:53.765 --rc geninfo_all_blocks=1 00:11:53.765 --rc geninfo_unexecuted_blocks=1 00:11:53.765 00:11:53.765 ' 00:11:53.765 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:53.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:53.765 --rc genhtml_branch_coverage=1 00:11:53.765 --rc genhtml_function_coverage=1 00:11:53.765 --rc genhtml_legend=1 00:11:53.765 --rc geninfo_all_blocks=1 00:11:53.765 --rc geninfo_unexecuted_blocks=1 00:11:53.765 00:11:53.765 ' 00:11:53.765 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:53.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:53.765 --rc genhtml_branch_coverage=1 00:11:53.765 --rc genhtml_function_coverage=1 00:11:53.765 --rc genhtml_legend=1 00:11:53.765 --rc geninfo_all_blocks=1 00:11:53.765 --rc geninfo_unexecuted_blocks=1 00:11:53.765 00:11:53.765 ' 00:11:53.765 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:53.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:53.765 --rc genhtml_branch_coverage=1 00:11:53.765 --rc genhtml_function_coverage=1 00:11:53.765 --rc genhtml_legend=1 00:11:53.765 --rc geninfo_all_blocks=1 00:11:53.765 --rc geninfo_unexecuted_blocks=1 00:11:53.765 00:11:53.765 ' 00:11:53.765 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:53.765 11:40:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:11:53.765 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:53.765 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:53.765 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:53.765 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:53.765 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:53.765 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:53.765 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:53.765 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:53.765 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:53.765 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:53.765 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:53.765 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:53.765 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:53.765 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:53.765 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:53.765 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:53.765 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:53.765 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:53.765 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:53.765 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:53.765 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:53.765 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.765 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.765 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.765 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:53.765 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.765 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:11:53.765 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:53.765 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:53.765 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:53.765 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:53.765 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:53.765 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:53.765 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:53.765 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:53.765 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:53.765 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:53.765 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # 
MALLOC_BDEV_SIZE=512 00:11:53.765 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:53.765 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:53.765 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:53.765 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:53.765 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:53.765 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:53.765 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:53.765 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:53.765 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:53.765 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:53.765 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:53.765 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:53.765 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:11:53.765 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:56.296 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:56.296 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:11:56.296 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a 
pci_devs 00:11:56.296 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:56.296 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:56.296 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:56.296 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:56.296 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:11:56.296 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:56.296 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:11:56.296 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:11:56.296 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:11:56.296 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:11:56.296 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:11:56.296 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:11:56.296 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:56.296 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:56.296 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:56.296 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:56.296 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:56.296 11:40:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:56.296 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:56.296 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:56.296 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:56.296 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:56.296 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:56.296 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:56.296 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:56.296 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:56.296 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:56.296 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:56.296 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:56.296 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:56.296 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:56.296 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:56.296 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:56.296 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:11:56.296 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:56.296 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:56.296 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:56.296 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:56.296 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:56.296 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:56.296 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:56.296 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:56.296 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:56.297 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:56.297 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:56.297 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:56.297 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:56.297 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:56.297 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:56.297 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:56.297 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:56.297 11:40:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:56.297 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:56.297 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:56.297 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:56.297 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:56.297 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:56.297 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:56.297 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:56.297 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:56.297 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:56.297 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:56.297 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:56.297 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:56.297 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:56.297 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:56.297 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:56.297 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:56.297 11:40:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:56.297 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:56.297 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:11:56.297 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:56.297 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:56.297 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:56.297 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:56.297 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:56.297 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:56.297 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:56.297 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:56.297 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:56.297 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:56.297 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:56.297 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:56.297 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:56.297 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
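At this point nvmf_tcp_init has picked `cvl_0_0` as the target interface and `cvl_0_1` as the initiator, and is about to move the target NIC into the `cvl_0_0_ns_spdk` namespace. Those `ip` commands need root, so the sketch below (a hypothetical helper, not part of the SPDK tree) only prints the sequence the trace executes next, with the interface names and the 10.0.0.0/24 addressing taken from the trace:

```shell
#!/usr/bin/env bash
# Sketch: emit the netns plumbing nvmf_tcp_init performs, as a dry run.
print_netns_setup() {
  local ns=$1 tgt_if=$2 ini_if=$3
  printf '%s\n' \
    "ip netns add $ns" \
    "ip link set $tgt_if netns $ns" \
    "ip addr add 10.0.0.1/24 dev $ini_if" \
    "ip netns exec $ns ip addr add 10.0.0.2/24 dev $tgt_if" \
    "ip link set $ini_if up" \
    "ip netns exec $ns ip link set $tgt_if up" \
    "ip netns exec $ns ip link set lo up"
}

print_netns_setup cvl_0_0_ns_spdk cvl_0_0 cvl_0_1
```

Splitting target and initiator across a namespace boundary is what lets a single host exercise a real TCP path (10.0.0.1 ↔ 10.0.0.2), which the trace then verifies with the two `ping -c 1` probes.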
00:11:56.297 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:56.297 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:56.297 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:56.297 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:56.297 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:56.297 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:56.297 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:56.297 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:56.297 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:56.297 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:56.297 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:56.297 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:56.297 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:56.297 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.277 ms 00:11:56.297 00:11:56.297 --- 10.0.0.2 ping statistics --- 00:11:56.297 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:56.297 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:11:56.297 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:56.297 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:56.297 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:11:56.297 00:11:56.297 --- 10.0.0.1 ping statistics --- 00:11:56.297 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:56.297 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:11:56.297 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:56.297 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:11:56.297 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:56.297 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:56.297 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:56.297 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:56.297 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:56.297 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:56.297 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:56.297 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:56.297 11:40:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:56.297 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:56.297 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:56.297 ************************************ 00:11:56.297 START TEST nvmf_filesystem_no_in_capsule 00:11:56.297 ************************************ 00:11:56.297 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:11:56.297 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:56.297 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:56.297 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:56.297 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:56.297 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:56.297 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2894342 00:11:56.297 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:56.297 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2894342 00:11:56.297 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@835 -- # '[' -z 2894342 ']' 00:11:56.297 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:56.297 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:56.297 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:56.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:56.297 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:56.297 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:56.297 [2024-11-18 11:40:21.972768] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:11:56.297 [2024-11-18 11:40:21.972916] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:56.297 [2024-11-18 11:40:22.123988] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:56.556 [2024-11-18 11:40:22.266410] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:56.556 [2024-11-18 11:40:22.266520] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:56.556 [2024-11-18 11:40:22.266549] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:56.556 [2024-11-18 11:40:22.266574] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:56.556 [2024-11-18 11:40:22.266595] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:56.556 [2024-11-18 11:40:22.269482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:56.556 [2024-11-18 11:40:22.269558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:56.556 [2024-11-18 11:40:22.269582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:56.556 [2024-11-18 11:40:22.269589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:57.122 11:40:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:57.122 11:40:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:57.122 11:40:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:57.122 11:40:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:57.122 11:40:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:57.122 11:40:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:57.122 11:40:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:57.122 11:40:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:57.122 11:40:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.122 11:40:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:57.122 [2024-11-18 11:40:22.966226] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:57.122 11:40:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.122 11:40:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:57.122 11:40:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.122 11:40:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:57.688 Malloc1 00:11:57.688 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.688 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:57.688 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.688 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:57.688 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.688 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:57.688 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.688 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:57.688 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.688 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:57.688 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.688 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:57.688 [2024-11-18 11:40:23.563092] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:57.688 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.688 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:57.688 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:57.688 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:57.688 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:11:57.688 11:40:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:57.688 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:57.688 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.688 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:57.946 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.946 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:57.946 { 00:11:57.946 "name": "Malloc1", 00:11:57.946 "aliases": [ 00:11:57.946 "017c5ecb-8b55-4230-b9d0-2c0821632248" 00:11:57.946 ], 00:11:57.946 "product_name": "Malloc disk", 00:11:57.946 "block_size": 512, 00:11:57.946 "num_blocks": 1048576, 00:11:57.946 "uuid": "017c5ecb-8b55-4230-b9d0-2c0821632248", 00:11:57.946 "assigned_rate_limits": { 00:11:57.946 "rw_ios_per_sec": 0, 00:11:57.946 "rw_mbytes_per_sec": 0, 00:11:57.946 "r_mbytes_per_sec": 0, 00:11:57.946 "w_mbytes_per_sec": 0 00:11:57.946 }, 00:11:57.946 "claimed": true, 00:11:57.946 "claim_type": "exclusive_write", 00:11:57.946 "zoned": false, 00:11:57.946 "supported_io_types": { 00:11:57.946 "read": true, 00:11:57.946 "write": true, 00:11:57.946 "unmap": true, 00:11:57.946 "flush": true, 00:11:57.946 "reset": true, 00:11:57.946 "nvme_admin": false, 00:11:57.946 "nvme_io": false, 00:11:57.946 "nvme_io_md": false, 00:11:57.946 "write_zeroes": true, 00:11:57.946 "zcopy": true, 00:11:57.946 "get_zone_info": false, 00:11:57.946 "zone_management": false, 00:11:57.946 "zone_append": false, 00:11:57.946 "compare": false, 00:11:57.946 "compare_and_write": 
false, 00:11:57.946 "abort": true, 00:11:57.946 "seek_hole": false, 00:11:57.946 "seek_data": false, 00:11:57.946 "copy": true, 00:11:57.946 "nvme_iov_md": false 00:11:57.946 }, 00:11:57.946 "memory_domains": [ 00:11:57.946 { 00:11:57.946 "dma_device_id": "system", 00:11:57.946 "dma_device_type": 1 00:11:57.946 }, 00:11:57.946 { 00:11:57.946 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:57.946 "dma_device_type": 2 00:11:57.946 } 00:11:57.946 ], 00:11:57.946 "driver_specific": {} 00:11:57.946 } 00:11:57.946 ]' 00:11:57.946 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:57.946 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:57.946 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:57.946 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:57.946 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:57.946 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:57.946 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:57.946 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:58.512 11:40:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:11:58.512 11:40:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:58.512 11:40:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:58.512 11:40:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:58.512 11:40:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:12:00.408 11:40:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:00.408 11:40:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:00.408 11:40:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:00.666 11:40:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:00.666 11:40:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:00.666 11:40:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:12:00.666 11:40:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:00.666 11:40:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:00.666 11:40:26 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:00.666 11:40:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:00.666 11:40:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:00.666 11:40:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:00.666 11:40:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:00.666 11:40:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:00.666 11:40:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:00.666 11:40:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:00.666 11:40:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:00.924 11:40:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:01.181 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:02.553 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:12:02.553 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:02.553 11:40:28 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:02.553 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:02.553 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:02.553 ************************************ 00:12:02.553 START TEST filesystem_ext4 00:12:02.553 ************************************ 00:12:02.553 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:02.553 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:02.553 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:02.553 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:02.553 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:12:02.553 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:02.553 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:12:02.553 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:12:02.553 11:40:28 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:12:02.553 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:12:02.553 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:02.553 mke2fs 1.47.0 (5-Feb-2023) 00:12:02.553 Discarding device blocks: 0/522240 done 00:12:02.553 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:02.553 Filesystem UUID: cb794c86-25ea-4878-862a-9f1963d553bd 00:12:02.553 Superblock backups stored on blocks: 00:12:02.553 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:02.553 00:12:02.553 Allocating group tables: 0/64 done 00:12:02.553 Writing inode tables: 0/64 done 00:12:05.169 Creating journal (8192 blocks): done 00:12:07.475 Writing superblocks and filesystem accounting information: 0/6428/64 done 00:12:07.475 00:12:07.475 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:12:07.475 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:14.029 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:14.029 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:12:14.029 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:14.029 11:40:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:12:14.029 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:14.029 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:14.029 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2894342 00:12:14.029 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:14.029 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:14.029 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:14.030 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:14.030 00:12:14.030 real 0m10.679s 00:12:14.030 user 0m0.020s 00:12:14.030 sys 0m0.067s 00:12:14.030 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:14.030 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:14.030 ************************************ 00:12:14.030 END TEST filesystem_ext4 00:12:14.030 ************************************ 00:12:14.030 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:14.030 
11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:14.030 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:14.030 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:14.030 ************************************ 00:12:14.030 START TEST filesystem_btrfs 00:12:14.030 ************************************ 00:12:14.030 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:14.030 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:14.030 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:14.030 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:14.030 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:12:14.030 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:14.030 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:12:14.030 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:12:14.030 11:40:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:12:14.030 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:12:14.030 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:14.030 btrfs-progs v6.8.1 00:12:14.030 See https://btrfs.readthedocs.io for more information. 00:12:14.030 00:12:14.030 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:12:14.030 NOTE: several default settings have changed in version 5.15, please make sure 00:12:14.030 this does not affect your deployments: 00:12:14.030 - DUP for metadata (-m dup) 00:12:14.030 - enabled no-holes (-O no-holes) 00:12:14.030 - enabled free-space-tree (-R free-space-tree) 00:12:14.030 00:12:14.030 Label: (null) 00:12:14.030 UUID: 7e2225d1-b179-409e-9a18-dfc4337339f5 00:12:14.030 Node size: 16384 00:12:14.030 Sector size: 4096 (CPU page size: 4096) 00:12:14.030 Filesystem size: 510.00MiB 00:12:14.030 Block group profiles: 00:12:14.030 Data: single 8.00MiB 00:12:14.030 Metadata: DUP 32.00MiB 00:12:14.030 System: DUP 8.00MiB 00:12:14.030 SSD detected: yes 00:12:14.030 Zoned device: no 00:12:14.030 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:14.030 Checksum: crc32c 00:12:14.030 Number of devices: 1 00:12:14.030 Devices: 00:12:14.030 ID SIZE PATH 00:12:14.030 1 510.00MiB /dev/nvme0n1p1 00:12:14.030 00:12:14.030 11:40:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:12:14.030 11:40:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:14.289 11:40:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:14.289 11:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:12:14.289 11:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:14.289 11:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:12:14.289 11:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:14.289 11:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:14.289 11:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2894342 00:12:14.289 11:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:14.289 11:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:14.289 11:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:14.289 11:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:14.289 00:12:14.289 real 0m1.375s 00:12:14.289 user 0m0.025s 00:12:14.289 sys 0m0.103s 00:12:14.289 11:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:14.289 
11:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:14.289 ************************************ 00:12:14.289 END TEST filesystem_btrfs 00:12:14.289 ************************************ 00:12:14.545 11:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:12:14.545 11:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:14.545 11:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:14.545 11:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:14.545 ************************************ 00:12:14.545 START TEST filesystem_xfs 00:12:14.545 ************************************ 00:12:14.545 11:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:12:14.545 11:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:14.545 11:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:14.545 11:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:14.545 11:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:12:14.545 11:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:14.545 11:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:12:14.545 11:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:12:14.545 11:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:12:14.545 11:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:12:14.545 11:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:14.803 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:14.803 = sectsz=512 attr=2, projid32bit=1 00:12:14.803 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:14.803 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:14.803 data = bsize=4096 blocks=130560, imaxpct=25 00:12:14.803 = sunit=0 swidth=0 blks 00:12:14.803 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:14.803 log =internal log bsize=4096 blocks=16384, version=2 00:12:14.803 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:14.803 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:15.367 Discarding blocks...Done. 
00:12:15.367 11:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:12:15.367 11:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:17.266 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:17.266 11:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:12:17.266 11:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:17.266 11:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:12:17.266 11:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:12:17.266 11:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:17.266 11:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2894342 00:12:17.266 11:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:17.266 11:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:17.266 11:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:17.266 11:40:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:17.266 00:12:17.266 real 0m2.871s 00:12:17.266 user 0m0.015s 00:12:17.266 sys 0m0.065s 00:12:17.266 11:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:17.266 11:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:17.266 ************************************ 00:12:17.266 END TEST filesystem_xfs 00:12:17.266 ************************************ 00:12:17.266 11:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:17.524 11:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:17.524 11:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:17.524 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:17.524 11:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:17.524 11:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:12:17.524 11:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:17.524 11:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:17.524 11:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:17.524 11:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:17.524 11:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:12:17.524 11:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:17.524 11:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.524 11:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:17.524 11:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.524 11:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:17.524 11:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2894342 00:12:17.524 11:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 2894342 ']' 00:12:17.524 11:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 2894342 00:12:17.524 11:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:12:17.524 11:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:17.524 11:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2894342 00:12:17.524 11:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:17.524 11:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:17.525 11:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2894342' 00:12:17.525 killing process with pid 2894342 00:12:17.525 11:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 2894342 00:12:17.525 11:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 2894342 00:12:20.050 11:40:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:20.050 00:12:20.050 real 0m23.849s 00:12:20.050 user 1m30.567s 00:12:20.050 sys 0m2.959s 00:12:20.050 11:40:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:20.050 11:40:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:20.050 ************************************ 00:12:20.050 END TEST nvmf_filesystem_no_in_capsule 00:12:20.050 ************************************ 00:12:20.050 11:40:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:12:20.050 11:40:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:20.050 11:40:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:20.050 11:40:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:20.050 ************************************ 00:12:20.050 START TEST nvmf_filesystem_in_capsule 00:12:20.050 ************************************ 00:12:20.050 11:40:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:12:20.050 11:40:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:12:20.050 11:40:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:20.050 11:40:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:20.050 11:40:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:20.050 11:40:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:20.050 11:40:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2897376 00:12:20.050 11:40:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:20.050 11:40:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2897376 00:12:20.050 11:40:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 2897376 ']' 00:12:20.050 11:40:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:20.050 11:40:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:20.050 11:40:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:20.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:20.050 11:40:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:20.050 11:40:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:20.050 [2024-11-18 11:40:45.878187] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:12:20.050 [2024-11-18 11:40:45.878349] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:20.308 [2024-11-18 11:40:46.026969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:20.309 [2024-11-18 11:40:46.165529] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:20.309 [2024-11-18 11:40:46.165606] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:20.309 [2024-11-18 11:40:46.165632] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:20.309 [2024-11-18 11:40:46.165655] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:20.309 [2024-11-18 11:40:46.165680] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
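The `waitforlisten 2897376` call traced above (with its `max_retries=100` and the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message) follows a simple poll pattern: confirm the target PID is still alive, then check whether its RPC socket has appeared. The sketch below is an assumption-laden re-creation, not the real helper; the actual `common/autotest_common.sh` version also probes the RPC server, while this one only checks for the UNIX socket file.

```shell
# Hypothetical minimal waitforlisten: poll until the target process is
# alive AND its RPC UNIX socket exists, or give up after N retries.
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} retries=${3:-100}
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    while [ "$retries" -gt 0 ]; do
        kill -0 "$pid" 2>/dev/null || return 1   # target process died
        [ -S "$rpc_addr" ] && return 0           # socket is up; done
        sleep 0.1
        retries=$((retries - 1))
    done
    return 1                                     # timed out
}
```

Used as in the trace: start `nvmf_tgt`, capture its PID into `nvmfpid`, then `waitforlisten "$nvmfpid"` before issuing any `rpc_cmd` calls.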
00:12:20.309 [2024-11-18 11:40:46.168459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:20.309 [2024-11-18 11:40:46.168538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:20.309 [2024-11-18 11:40:46.168627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:20.309 [2024-11-18 11:40:46.168633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:21.242 11:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:21.242 11:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:12:21.242 11:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:21.242 11:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:21.242 11:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:21.242 11:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:21.242 11:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:21.242 11:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:12:21.242 11:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.242 11:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:21.242 [2024-11-18 11:40:46.905499] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:21.242 11:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.242 11:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:21.242 11:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.242 11:40:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:21.808 Malloc1 00:12:21.808 11:40:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.808 11:40:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:21.808 11:40:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.808 11:40:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:21.808 11:40:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.808 11:40:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:21.808 11:40:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.808 11:40:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:21.808 11:40:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.808 11:40:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:21.808 11:40:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.808 11:40:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:21.808 [2024-11-18 11:40:47.521171] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:21.808 11:40:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.808 11:40:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:21.808 11:40:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:12:21.808 11:40:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:12:21.808 11:40:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:12:21.808 11:40:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:12:21.808 11:40:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:21.808 11:40:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.808 11:40:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:21.808 11:40:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.808 11:40:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:12:21.808 { 00:12:21.808 "name": "Malloc1", 00:12:21.808 "aliases": [ 00:12:21.808 "68f046e6-b53f-4e67-b9a8-b9e55dabd4b9" 00:12:21.808 ], 00:12:21.808 "product_name": "Malloc disk", 00:12:21.808 "block_size": 512, 00:12:21.808 "num_blocks": 1048576, 00:12:21.808 "uuid": "68f046e6-b53f-4e67-b9a8-b9e55dabd4b9", 00:12:21.808 "assigned_rate_limits": { 00:12:21.808 "rw_ios_per_sec": 0, 00:12:21.808 "rw_mbytes_per_sec": 0, 00:12:21.808 "r_mbytes_per_sec": 0, 00:12:21.808 "w_mbytes_per_sec": 0 00:12:21.808 }, 00:12:21.808 "claimed": true, 00:12:21.808 "claim_type": "exclusive_write", 00:12:21.808 "zoned": false, 00:12:21.808 "supported_io_types": { 00:12:21.808 "read": true, 00:12:21.808 "write": true, 00:12:21.808 "unmap": true, 00:12:21.808 "flush": true, 00:12:21.808 "reset": true, 00:12:21.808 "nvme_admin": false, 00:12:21.808 "nvme_io": false, 00:12:21.808 "nvme_io_md": false, 00:12:21.808 "write_zeroes": true, 00:12:21.808 "zcopy": true, 00:12:21.808 "get_zone_info": false, 00:12:21.808 "zone_management": false, 00:12:21.808 "zone_append": false, 00:12:21.808 "compare": false, 00:12:21.808 "compare_and_write": false, 00:12:21.808 "abort": true, 00:12:21.808 "seek_hole": false, 00:12:21.808 "seek_data": false, 00:12:21.808 "copy": true, 00:12:21.808 "nvme_iov_md": false 00:12:21.808 }, 00:12:21.808 "memory_domains": [ 00:12:21.808 { 00:12:21.808 "dma_device_id": "system", 00:12:21.808 "dma_device_type": 1 00:12:21.808 }, 00:12:21.808 { 00:12:21.808 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:21.808 "dma_device_type": 2 00:12:21.808 } 00:12:21.808 ], 00:12:21.808 
"driver_specific": {} 00:12:21.808 } 00:12:21.808 ]' 00:12:21.808 11:40:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:12:21.808 11:40:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:12:21.808 11:40:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:12:21.808 11:40:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:12:21.808 11:40:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:12:21.808 11:40:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:12:21.808 11:40:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:21.808 11:40:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:22.742 11:40:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:12:22.742 11:40:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:12:22.742 11:40:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:22.742 11:40:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n 
'' ]] 00:12:22.742 11:40:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:12:24.638 11:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:24.638 11:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:24.638 11:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:24.638 11:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:24.638 11:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:24.638 11:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:12:24.638 11:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:24.638 11:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:24.638 11:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:24.638 11:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:24.638 11:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:24.638 11:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:24.638 11:40:50 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:24.638 11:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:24.638 11:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:24.638 11:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:24.638 11:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:24.895 11:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:25.460 11:40:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:26.393 11:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:12:26.393 11:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:26.393 11:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:26.393 11:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:26.393 11:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:26.393 ************************************ 00:12:26.393 START TEST filesystem_in_capsule_ext4 00:12:26.393 ************************************ 00:12:26.393 11:40:52 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:26.393 11:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:26.393 11:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:26.393 11:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:26.393 11:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:12:26.393 11:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:26.393 11:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:12:26.393 11:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:12:26.393 11:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:12:26.393 11:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:12:26.393 11:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:26.393 mke2fs 1.47.0 (5-Feb-2023) 00:12:26.650 Discarding device blocks: 
0/522240 done 00:12:26.650 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:26.650 Filesystem UUID: 9f3eec01-8ced-423c-8042-be5d592dd476 00:12:26.650 Superblock backups stored on blocks: 00:12:26.650 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:26.650 00:12:26.650 Allocating group tables: 0/64 done 00:12:26.650 Writing inode tables: 0/64 done 00:12:26.650 Creating journal (8192 blocks): done 00:12:26.907 Writing superblocks and filesystem accounting information: 0/64 done 00:12:26.907 00:12:26.907 11:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:12:26.907 11:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:33.473 11:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:33.473 11:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:12:33.473 11:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:33.473 11:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:12:33.473 11:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:33.473 11:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:33.473 11:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 2897376 00:12:33.473 11:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:33.473 11:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:33.473 11:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:33.473 11:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:33.473 00:12:33.473 real 0m6.093s 00:12:33.473 user 0m0.025s 00:12:33.473 sys 0m0.059s 00:12:33.473 11:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:33.474 11:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:33.474 ************************************ 00:12:33.474 END TEST filesystem_in_capsule_ext4 00:12:33.474 ************************************ 00:12:33.474 11:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:33.474 11:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:33.474 11:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:33.474 11:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:33.474 ************************************ 00:12:33.474 START 
TEST filesystem_in_capsule_btrfs 00:12:33.474 ************************************ 00:12:33.474 11:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:33.474 11:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:33.474 11:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:33.474 11:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:33.474 11:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:12:33.474 11:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:33.474 11:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:12:33.474 11:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:12:33.474 11:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:12:33.475 11:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:12:33.475 11:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:33.475 btrfs-progs v6.8.1 00:12:33.475 See https://btrfs.readthedocs.io for more information. 00:12:33.475 00:12:33.475 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:12:33.475 NOTE: several default settings have changed in version 5.15, please make sure 00:12:33.475 this does not affect your deployments: 00:12:33.475 - DUP for metadata (-m dup) 00:12:33.475 - enabled no-holes (-O no-holes) 00:12:33.475 - enabled free-space-tree (-R free-space-tree) 00:12:33.475 00:12:33.475 Label: (null) 00:12:33.475 UUID: 737ccbff-ad3a-4ab7-a2f1-ee3ccb8999b2 00:12:33.475 Node size: 16384 00:12:33.475 Sector size: 4096 (CPU page size: 4096) 00:12:33.475 Filesystem size: 510.00MiB 00:12:33.475 Block group profiles: 00:12:33.475 Data: single 8.00MiB 00:12:33.475 Metadata: DUP 32.00MiB 00:12:33.475 System: DUP 8.00MiB 00:12:33.475 SSD detected: yes 00:12:33.475 Zoned device: no 00:12:33.475 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:33.475 Checksum: crc32c 00:12:33.475 Number of devices: 1 00:12:33.475 Devices: 00:12:33.475 ID SIZE PATH 00:12:33.475 1 510.00MiB /dev/nvme0n1p1 00:12:33.475 00:12:33.475 11:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:12:33.475 11:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:33.475 11:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:33.475 11:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:12:33.475 11:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:33.475 11:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:12:33.475 11:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:33.476 11:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:33.476 11:40:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2897376 00:12:33.476 11:40:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:33.476 11:40:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:33.476 11:40:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:33.476 11:40:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:33.476 00:12:33.476 real 0m0.722s 00:12:33.476 user 0m0.015s 00:12:33.476 sys 0m0.104s 00:12:33.476 11:40:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:33.476 11:40:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:33.476 ************************************ 00:12:33.476 END TEST filesystem_in_capsule_btrfs 00:12:33.476 ************************************ 00:12:33.476 11:40:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:12:33.476 11:40:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:33.476 11:40:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:33.476 11:40:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:33.476 ************************************ 00:12:33.476 START TEST filesystem_in_capsule_xfs 00:12:33.476 ************************************ 00:12:33.476 11:40:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:12:33.476 11:40:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:33.476 11:40:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:33.476 11:40:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:33.477 11:40:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:12:33.477 11:40:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:33.477 11:40:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:12:33.477 
11:40:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:12:33.477 11:40:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:12:33.477 11:40:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:12:33.477 11:40:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:33.477 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:33.477 = sectsz=512 attr=2, projid32bit=1 00:12:33.477 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:33.477 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:33.477 data = bsize=4096 blocks=130560, imaxpct=25 00:12:33.477 = sunit=0 swidth=0 blks 00:12:33.477 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:33.477 log =internal log bsize=4096 blocks=16384, version=2 00:12:33.478 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:33.478 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:34.411 Discarding blocks...Done. 
00:12:34.411 11:41:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:12:34.411 11:41:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:36.937 11:41:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:36.937 11:41:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:12:36.937 11:41:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:36.937 11:41:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:12:36.937 11:41:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:12:36.937 11:41:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:36.937 11:41:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2897376 00:12:36.937 11:41:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:36.937 11:41:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:36.937 11:41:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:12:36.937 11:41:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:36.937 00:12:36.937 real 0m3.592s 00:12:36.937 user 0m0.026s 00:12:36.937 sys 0m0.055s 00:12:36.937 11:41:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:36.937 11:41:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:36.937 ************************************ 00:12:36.937 END TEST filesystem_in_capsule_xfs 00:12:36.937 ************************************ 00:12:36.937 11:41:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:36.937 11:41:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:36.937 11:41:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:37.195 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:37.195 11:41:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:37.195 11:41:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:12:37.195 11:41:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:37.195 11:41:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:37.195 11:41:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:37.195 11:41:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:37.195 11:41:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:12:37.195 11:41:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:37.195 11:41:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.195 11:41:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:37.195 11:41:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.195 11:41:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:37.195 11:41:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2897376 00:12:37.195 11:41:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 2897376 ']' 00:12:37.195 11:41:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 2897376 00:12:37.195 11:41:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:12:37.195 11:41:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:37.195 11:41:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2897376 00:12:37.195 11:41:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:37.195 11:41:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:37.195 11:41:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2897376' 00:12:37.195 killing process with pid 2897376 00:12:37.195 11:41:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 2897376 00:12:37.195 11:41:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 2897376 00:12:39.719 11:41:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:39.719 00:12:39.719 real 0m19.610s 00:12:39.719 user 1m14.016s 00:12:39.719 sys 0m2.628s 00:12:39.719 11:41:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:39.719 11:41:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:39.719 ************************************ 00:12:39.719 END TEST nvmf_filesystem_in_capsule 00:12:39.719 ************************************ 00:12:39.719 11:41:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:12:39.719 11:41:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:39.719 11:41:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:12:39.719 11:41:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:39.719 11:41:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:12:39.719 11:41:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:39.719 11:41:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:39.719 rmmod nvme_tcp 00:12:39.719 rmmod nvme_fabrics 00:12:39.719 rmmod nvme_keyring 00:12:39.719 11:41:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:39.719 11:41:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:12:39.719 11:41:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:12:39.719 11:41:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:12:39.719 11:41:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:39.719 11:41:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:39.719 11:41:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:39.719 11:41:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:12:39.719 11:41:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:12:39.719 11:41:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:39.719 11:41:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:12:39.719 11:41:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:39.719 11:41:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:39.719 11:41:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:12:39.719 11:41:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:39.719 11:41:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:41.623 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:41.623 00:12:41.623 real 0m48.290s 00:12:41.623 user 2m45.684s 00:12:41.623 sys 0m7.349s 00:12:41.923 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:41.923 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:41.923 ************************************ 00:12:41.923 END TEST nvmf_filesystem 00:12:41.923 ************************************ 00:12:41.923 11:41:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:41.923 11:41:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:41.923 11:41:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:41.923 11:41:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:41.923 ************************************ 00:12:41.923 START TEST nvmf_target_discovery 00:12:41.923 ************************************ 00:12:41.923 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:41.923 * Looking for test storage... 
00:12:41.923 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:41.923 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:41.923 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:12:41.923 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:41.923 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:41.923 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:41.923 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:41.923 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:41.923 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:12:41.923 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:12:41.923 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:12:41.923 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:12:41.923 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:12:41.923 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:12:41.923 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:12:41.923 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:41.923 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:12:41.923 
11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:12:41.923 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:41.923 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:41.923 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:12:41.923 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:12:41.923 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:41.923 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:12:41.923 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:12:41.923 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:12:41.923 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:12:41.923 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:41.923 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:12:41.923 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:12:41.923 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:41.923 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:41.923 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:12:41.923 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:12:41.923 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:41.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:41.923 --rc genhtml_branch_coverage=1 00:12:41.923 --rc genhtml_function_coverage=1 00:12:41.923 --rc genhtml_legend=1 00:12:41.923 --rc geninfo_all_blocks=1 00:12:41.923 --rc geninfo_unexecuted_blocks=1 00:12:41.923 00:12:41.923 ' 00:12:41.923 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:41.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:41.923 --rc genhtml_branch_coverage=1 00:12:41.923 --rc genhtml_function_coverage=1 00:12:41.923 --rc genhtml_legend=1 00:12:41.923 --rc geninfo_all_blocks=1 00:12:41.923 --rc geninfo_unexecuted_blocks=1 00:12:41.923 00:12:41.923 ' 00:12:41.924 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:41.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:41.924 --rc genhtml_branch_coverage=1 00:12:41.924 --rc genhtml_function_coverage=1 00:12:41.924 --rc genhtml_legend=1 00:12:41.924 --rc geninfo_all_blocks=1 00:12:41.924 --rc geninfo_unexecuted_blocks=1 00:12:41.924 00:12:41.924 ' 00:12:41.924 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:41.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:41.924 --rc genhtml_branch_coverage=1 00:12:41.924 --rc genhtml_function_coverage=1 00:12:41.924 --rc genhtml_legend=1 00:12:41.924 --rc geninfo_all_blocks=1 00:12:41.924 --rc geninfo_unexecuted_blocks=1 00:12:41.924 00:12:41.924 ' 00:12:41.924 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:41.924 11:41:07 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:12:41.924 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:41.924 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:41.924 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:41.924 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:41.924 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:41.924 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:41.924 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:41.924 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:41.924 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:41.924 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:41.924 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:41.924 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:41.924 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:41.924 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:41.924 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:12:41.924 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:41.924 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:41.924 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:12:41.924 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:41.924 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:41.924 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:41.924 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.924 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.924 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.924 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:12:41.924 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.924 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:12:41.924 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:41.924 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:41.924 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:41.924 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:41.924 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:41.924 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:41.924 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:41.924 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:41.924 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:41.924 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:41.924 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # 
NULL_BDEV_SIZE=102400 00:12:41.924 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:12:41.924 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:12:41.924 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:12:41.924 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:12:41.924 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:41.924 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:41.924 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:41.924 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:41.924 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:41.924 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:41.924 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:41.924 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:41.924 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:41.924 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:41.924 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:12:41.924 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:43.848 11:41:09 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:43.848 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:12:43.848 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:43.848 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:43.848 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:43.848 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:43.848 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:43.848 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:12:43.848 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:43.848 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:12:43.848 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:12:43.848 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:12:43.848 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:12:43.848 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:12:43.848 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:12:43.848 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:43.848 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:43.848 11:41:09 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:43.848 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:43.848 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:43.848 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:43.848 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:43.848 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:43.848 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:43.848 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:43.848 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:43.848 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:43.848 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:43.848 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:43.848 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:43.848 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:43.848 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 
00:12:43.848 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:43.848 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:43.848 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:43.848 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:43.848 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:43.848 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:43.848 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:43.848 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:43.848 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:43.848 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:43.848 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:43.848 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:43.848 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:43.848 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:43.848 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:43.848 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:43.848 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:43.848 11:41:09 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:43.848 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:43.849 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:43.849 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:43.849 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:43.849 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:43.849 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:43.849 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:43.849 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:43.849 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:43.849 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:43.849 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:43.849 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:43.849 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:43.849 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:43.849 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:43.849 11:41:09 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:43.849 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:43.849 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:43.849 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:43.849 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:43.849 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:43.849 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:43.849 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:43.849 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:12:43.849 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:43.849 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:43.849 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:43.849 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:43.849 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:43.849 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:43.849 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:43.849 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:12:43.849 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:43.849 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:43.849 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:43.849 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:43.849 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:43.849 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:43.849 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:43.849 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:43.849 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:43.849 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:44.107 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:44.107 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:44.107 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:44.107 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:44.107 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:12:44.107 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:44.107 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:44.107 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:44.107 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:44.107 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.239 ms 00:12:44.107 00:12:44.107 --- 10.0.0.2 ping statistics --- 00:12:44.107 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:44.107 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:12:44.107 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:44.107 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:44.107 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.090 ms 00:12:44.107 00:12:44.107 --- 10.0.0.1 ping statistics --- 00:12:44.107 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:44.107 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:12:44.107 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:44.107 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:12:44.107 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:44.107 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:44.107 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:44.107 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:44.107 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:44.107 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:44.107 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:44.107 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:12:44.107 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:44.107 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:44.107 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:44.107 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=2901810 00:12:44.107 11:41:09 
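The setup traced here moves one port of the two-port E810 NIC into a network namespace (`cvl_0_0` into `cvl_0_0_ns_spdk`), addresses the pair as 10.0.0.1/10.0.0.2, opens TCP port 4420 in iptables, and verifies reachability with a ping in each direction. A dry-run sketch of the same sequence is below; the `run` wrapper is a hypothetical addition (the real `common.sh` executes the commands directly) so the sketch can be read and exercised without root:

```shell
# run executes its arguments only when RUN=1; otherwise it echoes them,
# so the topology below can be reviewed without privileges.
run() {
    if [ "${RUN:-0}" -eq 1 ]; then
        "$@"
    else
        echo "+ $*"
    fi
}

# Target side lives in a namespace; initiator side stays in the root ns.
run ip netns add cvl_0_0_ns_spdk
run ip link set cvl_0_0 netns cvl_0_0_ns_spdk
run ip addr add 10.0.0.1/24 dev cvl_0_1
run ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
run ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Allow NVMe/TCP traffic to the target port, then verify both directions.
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2
run ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
```

The two-port loopback lets a single machine act as both target and initiator: the `nvmf_tgt` process is later launched under `ip netns exec cvl_0_0_ns_spdk`, exactly as the trace shows.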
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 2901810 00:12:44.107 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:44.107 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 2901810 ']' 00:12:44.107 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:44.107 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:44.107 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:44.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:44.107 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:44.107 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:44.107 [2024-11-18 11:41:09.967439] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:12:44.107 [2024-11-18 11:41:09.967599] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:44.365 [2024-11-18 11:41:10.120423] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:44.622 [2024-11-18 11:41:10.267643] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:44.622 [2024-11-18 11:41:10.267735] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:44.622 [2024-11-18 11:41:10.267772] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:44.622 [2024-11-18 11:41:10.267801] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:44.622 [2024-11-18 11:41:10.267823] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:44.622 [2024-11-18 11:41:10.270693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:44.622 [2024-11-18 11:41:10.270773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:44.622 [2024-11-18 11:41:10.270867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:44.622 [2024-11-18 11:41:10.270870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:45.188 11:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:45.188 11:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:12:45.188 11:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:45.188 11:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:45.188 11:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:45.188 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:45.188 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:45.188 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.188 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:45.188 [2024-11-18 11:41:11.011320] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:45.188 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.188 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:12:45.188 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:45.188 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:12:45.188 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.188 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:45.188 Null1 00:12:45.188 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.188 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:45.188 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.188 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:45.188 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.188 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:12:45.188 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.188 
11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:45.188 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.188 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:45.188 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.188 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:45.188 [2024-11-18 11:41:11.061856] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:45.188 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.188 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:45.188 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:12:45.188 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.188 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:45.188 Null2 00:12:45.188 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.188 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:12:45.188 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.188 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:45.446 
11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.446 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:12:45.446 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.446 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:45.446 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.446 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:45.446 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.446 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:45.446 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.446 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:45.446 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:12:45.446 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.446 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:45.446 Null3 00:12:45.446 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.446 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s 
SPDK00000000000003 00:12:45.446 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.446 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:45.446 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.446 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:12:45.446 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.446 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:45.446 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.446 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:12:45.446 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.446 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:45.446 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.446 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:45.446 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:12:45.446 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.446 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:45.446 Null4 00:12:45.446 
11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.446 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:12:45.446 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.446 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:45.446 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.446 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:12:45.446 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.446 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:45.446 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.446 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:12:45.446 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.446 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:45.447 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.447 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:45.447 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.447 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:45.447 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.447 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:12:45.447 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.447 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:45.447 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.447 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:12:45.705 00:12:45.705 Discovery Log Number of Records 6, Generation counter 6 00:12:45.705 =====Discovery Log Entry 0====== 00:12:45.705 trtype: tcp 00:12:45.705 adrfam: ipv4 00:12:45.705 subtype: current discovery subsystem 00:12:45.705 treq: not required 00:12:45.705 portid: 0 00:12:45.705 trsvcid: 4420 00:12:45.705 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:45.705 traddr: 10.0.0.2 00:12:45.705 eflags: explicit discovery connections, duplicate discovery information 00:12:45.705 sectype: none 00:12:45.705 =====Discovery Log Entry 1====== 00:12:45.705 trtype: tcp 00:12:45.705 adrfam: ipv4 00:12:45.705 subtype: nvme subsystem 00:12:45.705 treq: not required 00:12:45.705 portid: 0 00:12:45.705 trsvcid: 4420 00:12:45.705 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:45.705 traddr: 10.0.0.2 00:12:45.705 eflags: none 00:12:45.705 sectype: none 00:12:45.705 =====Discovery Log Entry 2====== 00:12:45.705 
trtype: tcp 00:12:45.705 adrfam: ipv4 00:12:45.705 subtype: nvme subsystem 00:12:45.705 treq: not required 00:12:45.705 portid: 0 00:12:45.705 trsvcid: 4420 00:12:45.705 subnqn: nqn.2016-06.io.spdk:cnode2 00:12:45.705 traddr: 10.0.0.2 00:12:45.705 eflags: none 00:12:45.705 sectype: none 00:12:45.705 =====Discovery Log Entry 3====== 00:12:45.705 trtype: tcp 00:12:45.705 adrfam: ipv4 00:12:45.705 subtype: nvme subsystem 00:12:45.705 treq: not required 00:12:45.705 portid: 0 00:12:45.705 trsvcid: 4420 00:12:45.705 subnqn: nqn.2016-06.io.spdk:cnode3 00:12:45.705 traddr: 10.0.0.2 00:12:45.705 eflags: none 00:12:45.705 sectype: none 00:12:45.705 =====Discovery Log Entry 4====== 00:12:45.705 trtype: tcp 00:12:45.705 adrfam: ipv4 00:12:45.705 subtype: nvme subsystem 00:12:45.705 treq: not required 00:12:45.705 portid: 0 00:12:45.705 trsvcid: 4420 00:12:45.705 subnqn: nqn.2016-06.io.spdk:cnode4 00:12:45.705 traddr: 10.0.0.2 00:12:45.705 eflags: none 00:12:45.705 sectype: none 00:12:45.705 =====Discovery Log Entry 5====== 00:12:45.705 trtype: tcp 00:12:45.705 adrfam: ipv4 00:12:45.705 subtype: discovery subsystem referral 00:12:45.705 treq: not required 00:12:45.705 portid: 0 00:12:45.705 trsvcid: 4430 00:12:45.705 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:45.705 traddr: 10.0.0.2 00:12:45.705 eflags: none 00:12:45.705 sectype: none 00:12:45.705 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:12:45.705 Perform nvmf subsystem discovery via RPC 00:12:45.705 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:12:45.705 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.705 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:45.705 [ 00:12:45.705 { 00:12:45.705 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:12:45.705 "subtype": "Discovery", 00:12:45.705 "listen_addresses": [ 00:12:45.705 { 00:12:45.705 "trtype": "TCP", 00:12:45.705 "adrfam": "IPv4", 00:12:45.705 "traddr": "10.0.0.2", 00:12:45.705 "trsvcid": "4420" 00:12:45.705 } 00:12:45.705 ], 00:12:45.705 "allow_any_host": true, 00:12:45.706 "hosts": [] 00:12:45.706 }, 00:12:45.706 { 00:12:45.706 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:45.706 "subtype": "NVMe", 00:12:45.706 "listen_addresses": [ 00:12:45.706 { 00:12:45.706 "trtype": "TCP", 00:12:45.706 "adrfam": "IPv4", 00:12:45.706 "traddr": "10.0.0.2", 00:12:45.706 "trsvcid": "4420" 00:12:45.706 } 00:12:45.706 ], 00:12:45.706 "allow_any_host": true, 00:12:45.706 "hosts": [], 00:12:45.706 "serial_number": "SPDK00000000000001", 00:12:45.706 "model_number": "SPDK bdev Controller", 00:12:45.706 "max_namespaces": 32, 00:12:45.706 "min_cntlid": 1, 00:12:45.706 "max_cntlid": 65519, 00:12:45.706 "namespaces": [ 00:12:45.706 { 00:12:45.706 "nsid": 1, 00:12:45.706 "bdev_name": "Null1", 00:12:45.706 "name": "Null1", 00:12:45.706 "nguid": "CD4BE259BCEB4B61913DDB2DC2004021", 00:12:45.706 "uuid": "cd4be259-bceb-4b61-913d-db2dc2004021" 00:12:45.706 } 00:12:45.706 ] 00:12:45.706 }, 00:12:45.706 { 00:12:45.706 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:45.706 "subtype": "NVMe", 00:12:45.706 "listen_addresses": [ 00:12:45.706 { 00:12:45.706 "trtype": "TCP", 00:12:45.706 "adrfam": "IPv4", 00:12:45.706 "traddr": "10.0.0.2", 00:12:45.706 "trsvcid": "4420" 00:12:45.706 } 00:12:45.706 ], 00:12:45.706 "allow_any_host": true, 00:12:45.706 "hosts": [], 00:12:45.706 "serial_number": "SPDK00000000000002", 00:12:45.706 "model_number": "SPDK bdev Controller", 00:12:45.706 "max_namespaces": 32, 00:12:45.706 "min_cntlid": 1, 00:12:45.706 "max_cntlid": 65519, 00:12:45.706 "namespaces": [ 00:12:45.706 { 00:12:45.706 "nsid": 1, 00:12:45.706 "bdev_name": "Null2", 00:12:45.706 "name": "Null2", 00:12:45.706 "nguid": "BCD2B8FED658411995447728B312CD16", 
00:12:45.706 "uuid": "bcd2b8fe-d658-4119-9544-7728b312cd16" 00:12:45.706 } 00:12:45.706 ] 00:12:45.706 }, 00:12:45.706 { 00:12:45.706 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:12:45.706 "subtype": "NVMe", 00:12:45.706 "listen_addresses": [ 00:12:45.706 { 00:12:45.706 "trtype": "TCP", 00:12:45.706 "adrfam": "IPv4", 00:12:45.706 "traddr": "10.0.0.2", 00:12:45.706 "trsvcid": "4420" 00:12:45.706 } 00:12:45.706 ], 00:12:45.706 "allow_any_host": true, 00:12:45.706 "hosts": [], 00:12:45.706 "serial_number": "SPDK00000000000003", 00:12:45.706 "model_number": "SPDK bdev Controller", 00:12:45.706 "max_namespaces": 32, 00:12:45.706 "min_cntlid": 1, 00:12:45.706 "max_cntlid": 65519, 00:12:45.706 "namespaces": [ 00:12:45.706 { 00:12:45.706 "nsid": 1, 00:12:45.706 "bdev_name": "Null3", 00:12:45.706 "name": "Null3", 00:12:45.706 "nguid": "C1418DC4F2E94A99AE553BBB5E22FFBD", 00:12:45.706 "uuid": "c1418dc4-f2e9-4a99-ae55-3bbb5e22ffbd" 00:12:45.706 } 00:12:45.706 ] 00:12:45.706 }, 00:12:45.706 { 00:12:45.706 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:12:45.706 "subtype": "NVMe", 00:12:45.706 "listen_addresses": [ 00:12:45.706 { 00:12:45.706 "trtype": "TCP", 00:12:45.706 "adrfam": "IPv4", 00:12:45.706 "traddr": "10.0.0.2", 00:12:45.706 "trsvcid": "4420" 00:12:45.706 } 00:12:45.706 ], 00:12:45.706 "allow_any_host": true, 00:12:45.706 "hosts": [], 00:12:45.706 "serial_number": "SPDK00000000000004", 00:12:45.706 "model_number": "SPDK bdev Controller", 00:12:45.706 "max_namespaces": 32, 00:12:45.706 "min_cntlid": 1, 00:12:45.706 "max_cntlid": 65519, 00:12:45.706 "namespaces": [ 00:12:45.706 { 00:12:45.706 "nsid": 1, 00:12:45.706 "bdev_name": "Null4", 00:12:45.706 "name": "Null4", 00:12:45.706 "nguid": "06B52A883C4742958878F61C672C58E8", 00:12:45.706 "uuid": "06b52a88-3c47-4295-8878-f61c672c58e8" 00:12:45.706 } 00:12:45.706 ] 00:12:45.706 } 00:12:45.706 ] 00:12:45.706 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.706 
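The JSON above is a standard `nvmf_get_subsystems` payload: one discovery subsystem plus four NVMe subsystems, each carrying its listener, namespace, and controller-ID metadata. A minimal standalone sketch of pulling the NVMe subsystem NQNs out of such a payload with `jq` (mirroring the `jq -r '.[].name'` style used later in this run; the sample payload here is abridged from the log and does not require a running target):

```shell
# Abridged nvmf_get_subsystems output, as captured in the log above.
subsystems='[
  {"nqn": "nqn.2014-08.org.nvmexpress.discovery", "subtype": "Discovery"},
  {"nqn": "nqn.2016-06.io.spdk:cnode1", "subtype": "NVMe"},
  {"nqn": "nqn.2016-06.io.spdk:cnode2", "subtype": "NVMe"}
]'

# Keep only the NVMe (non-discovery) subsystems and print their NQNs,
# one per line -> cnode1, cnode2.
echo "$subsystems" | jq -r '.[] | select(.subtype == "NVMe") | .nqn'
```

In the live test the payload would come from `rpc_cmd nvmf_get_subsystems` instead of a literal string; the filtering step is the same.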
11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:12:45.706 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:45.706 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:45.706 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.706 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:45.706 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.706 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:12:45.706 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.706 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:45.706 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.706 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:45.706 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:45.706 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.706 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:45.706 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.706 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd 
bdev_null_delete Null2 00:12:45.706 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.706 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:45.706 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.706 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:45.706 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:45.706 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.706 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:45.706 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.706 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:12:45.706 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.706 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:45.706 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.706 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:45.706 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:45.706 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.706 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:12:45.706 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.706 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:12:45.706 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.706 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:45.706 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.706 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:12:45.706 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.706 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:45.706 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.706 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:12:45.706 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:12:45.706 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.706 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:45.706 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.706 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:12:45.706 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
target/discovery.sh@50 -- # '[' -n '' ']' 00:12:45.706 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:12:45.706 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:12:45.706 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:45.706 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:12:45.706 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:45.706 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:12:45.706 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:45.706 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:45.706 rmmod nvme_tcp 00:12:45.706 rmmod nvme_fabrics 00:12:45.706 rmmod nvme_keyring 00:12:45.706 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:45.706 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:12:45.706 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:12:45.706 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 2901810 ']' 00:12:45.707 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 2901810 00:12:45.707 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 2901810 ']' 00:12:45.707 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 2901810 00:12:45.707 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 
00:12:45.707 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:45.707 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2901810 00:12:45.963 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:45.963 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:45.963 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2901810' 00:12:45.963 killing process with pid 2901810 00:12:45.963 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 2901810 00:12:45.963 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 2901810 00:12:46.896 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:46.896 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:46.896 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:46.896 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:12:46.896 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:12:46.896 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:46.896 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:12:46.896 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:46.896 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:12:46.896 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:46.896 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:46.896 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:49.429 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:49.429 00:12:49.429 real 0m7.224s 00:12:49.429 user 0m9.770s 00:12:49.429 sys 0m2.132s 00:12:49.429 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:49.429 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:49.429 ************************************ 00:12:49.429 END TEST nvmf_target_discovery 00:12:49.429 ************************************ 00:12:49.429 11:41:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:49.429 11:41:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:49.429 11:41:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:49.429 11:41:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:49.429 ************************************ 00:12:49.429 START TEST nvmf_referrals 00:12:49.429 ************************************ 00:12:49.429 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:49.429 * Looking for test storage... 
00:12:49.429 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:49.429 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:49.429 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lcov --version 00:12:49.429 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:49.429 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:49.429 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:49.429 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:49.429 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:49.429 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:12:49.429 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:12:49.430 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:12:49.430 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:12:49.430 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:12:49.430 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:12:49.430 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:12:49.430 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:49.430 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:12:49.430 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:12:49.430 11:41:14 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:49.430 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:49.430 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:12:49.430 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:12:49.430 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:49.430 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:12:49.430 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:12:49.430 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:12:49.430 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:12:49.430 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:49.430 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:12:49.430 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:12:49.430 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:49.430 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:49.430 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:12:49.430 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:49.430 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:49.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:49.430 
--rc genhtml_branch_coverage=1 00:12:49.430 --rc genhtml_function_coverage=1 00:12:49.430 --rc genhtml_legend=1 00:12:49.430 --rc geninfo_all_blocks=1 00:12:49.430 --rc geninfo_unexecuted_blocks=1 00:12:49.430 00:12:49.430 ' 00:12:49.430 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:49.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:49.430 --rc genhtml_branch_coverage=1 00:12:49.430 --rc genhtml_function_coverage=1 00:12:49.430 --rc genhtml_legend=1 00:12:49.430 --rc geninfo_all_blocks=1 00:12:49.430 --rc geninfo_unexecuted_blocks=1 00:12:49.430 00:12:49.430 ' 00:12:49.430 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:49.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:49.430 --rc genhtml_branch_coverage=1 00:12:49.430 --rc genhtml_function_coverage=1 00:12:49.430 --rc genhtml_legend=1 00:12:49.430 --rc geninfo_all_blocks=1 00:12:49.430 --rc geninfo_unexecuted_blocks=1 00:12:49.430 00:12:49.430 ' 00:12:49.430 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:49.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:49.430 --rc genhtml_branch_coverage=1 00:12:49.430 --rc genhtml_function_coverage=1 00:12:49.430 --rc genhtml_legend=1 00:12:49.430 --rc geninfo_all_blocks=1 00:12:49.430 --rc geninfo_unexecuted_blocks=1 00:12:49.430 00:12:49.430 ' 00:12:49.430 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:49.430 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:12:49.430 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:49.430 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:49.430 
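The `lt 1.15 2` / `cmp_versions` trace above splits each version string on `.` and `-` and compares the components numerically, left to right. A simplified standalone sketch of that comparison (this is an illustrative reimplementation, not the exact `scripts/common.sh` code, which also handles the `>`/`=` operators):

```shell
# version_lt A B: succeed (return 0) iff version A sorts strictly before B.
# Components are split on '.' and '-' and compared numerically, with
# missing components treated as 0 (so "2" compares like "2.0").
version_lt() {
    local IFS=.- i
    local -a a=($1) b=($2)   # unquoted on purpose: IFS does the splitting
    local n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for ((i = 0; i < n; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1   # equal is not "less than"
}

version_lt 1.15 2 && echo "1.15 < 2"
```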
11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:49.430 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:49.430 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:49.430 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:49.430 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:49.430 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:49.430 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:49.430 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:49.430 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:49.430 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:49.430 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:49.430 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:49.430 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:49.430 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:49.430 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:49.430 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 
00:12:49.430 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:49.430 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:49.430 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:49.430 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.430 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.430 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.430 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:49.430 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.430 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:12:49.430 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:49.430 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:49.430 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:49.430 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:49.430 11:41:14 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:49.430 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:49.430 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:49.430 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:49.430 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:49.430 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:49.430 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:12:49.430 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:12:49.430 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:49.430 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:49.430 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:49.430 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:49.430 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:12:49.430 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:49.430 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:49.430 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:49.430 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:49.430 11:41:14 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:49.431 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:49.431 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:49.431 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:49.431 11:41:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:49.431 11:41:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:49.431 11:41:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:12:49.431 11:41:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:51.329 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:51.329 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:12:51.329 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:51.329 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:51.329 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:51.329 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:51.329 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:51.329 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:12:51.329 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:51.330 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@320 -- # e810=() 00:12:51.330 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:12:51.330 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:12:51.330 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:12:51.330 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:12:51.330 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:12:51.330 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:51.330 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:51.330 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:51.330 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:51.330 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:51.330 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:51.330 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:51.330 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:51.330 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:51.330 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:51.330 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:51.330 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:51.330 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:51.330 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:51.330 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:51.330 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:51.330 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:51.330 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:51.330 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:51.330 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:51.330 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:51.330 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:51.330 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:51.330 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:51.330 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:51.330 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:51.330 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:51.330 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:51.330 Found 
0000:0a:00.1 (0x8086 - 0x159b) 00:12:51.330 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:51.330 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:51.330 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:51.330 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:51.330 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:51.330 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:51.330 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:51.330 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:51.330 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:51.330 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:51.330 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:51.330 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:51.330 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:51.330 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:51.330 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:51.330 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:51.330 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:51.330 11:41:17 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:51.330 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:51.330 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:51.330 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:51.330 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:51.330 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:51.330 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:51.330 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:51.330 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:51.330 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:51.330 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:51.330 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:51.330 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:12:51.330 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:51.330 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:51.330 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:51.330 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:51.330 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:51.330 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:51.330 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:51.330 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:51.330 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:51.330 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:51.330 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:51.330 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:51.330 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:51.330 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:51.330 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:51.330 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:51.330 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:51.330 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:51.330 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:51.330 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:51.588 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:51.588 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:51.588 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:51.588 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:51.588 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:51.588 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:51.588 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:51.588 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.234 ms 00:12:51.588 00:12:51.588 --- 10.0.0.2 ping statistics --- 00:12:51.588 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:51.588 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:12:51.588 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:51.588 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:51.588 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.064 ms 00:12:51.588 00:12:51.588 --- 10.0.0.1 ping statistics --- 00:12:51.588 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:51.588 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:12:51.588 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:51.588 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:12:51.588 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:51.588 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:51.588 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:51.588 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:51.588 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:51.588 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:51.588 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:51.588 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:51.588 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:51.588 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:51.588 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:51.588 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=2904055 00:12:51.588 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:51.588 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 2904055 00:12:51.588 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 2904055 ']' 00:12:51.588 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:51.588 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:51.588 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:51.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:51.588 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:51.588 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:51.589 [2024-11-18 11:41:17.393837] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:12:51.589 [2024-11-18 11:41:17.393977] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:51.846 [2024-11-18 11:41:17.549669] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:51.846 [2024-11-18 11:41:17.696681] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:51.846 [2024-11-18 11:41:17.696776] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
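The trace above (common.sh@250-291) builds the test topology before `nvmf_tgt` is launched: one port of the NIC (`cvl_0_0`) is moved into a fresh network namespace to act as the target, while its peer (`cvl_0_1`) stays in the root namespace as the initiator, and connectivity is verified with a ping in each direction. A condensed sketch of that setup, reconstructed from the trace, is below; it is a privileged setup fragment, not captured output — interface names, the namespace name, and addresses are the ones this particular host reported, and it must run as root on a machine with two connected ports:

```shell
# Sketch of the split-namespace topology from common.sh@250-291 (assumes root
# and the interface names reported in this run: cvl_0_0 / cvl_0_1).
NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                 # target side lives in the netns
ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator side, root netns
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
# Let NVMe/TCP (port 4420) in through the initiator-side interface:
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                              # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1          # target -> initiator
```

With this in place, the target is started inside the namespace (`ip netns exec $NS .../nvmf_tgt ...`), which is why every subsequent target-side command in the trace is wrapped in `$NVMF_TARGET_NS_CMD`.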
00:12:51.846 [2024-11-18 11:41:17.696803] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:51.846 [2024-11-18 11:41:17.696829] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:51.846 [2024-11-18 11:41:17.696850] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:51.846 [2024-11-18 11:41:17.699731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:51.846 [2024-11-18 11:41:17.699794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:51.846 [2024-11-18 11:41:17.699844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:51.846 [2024-11-18 11:41:17.699851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:52.780 11:41:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:52.780 11:41:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:12:52.780 11:41:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:52.780 11:41:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:52.780 11:41:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:52.781 11:41:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:52.781 11:41:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:52.781 11:41:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.781 11:41:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:52.781 [2024-11-18 11:41:18.415125] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:52.781 11:41:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.781 11:41:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:12:52.781 11:41:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.781 11:41:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:52.781 [2024-11-18 11:41:18.443778] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:12:52.781 11:41:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.781 11:41:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:12:52.781 11:41:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.781 11:41:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:52.781 11:41:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.781 11:41:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:12:52.781 11:41:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.781 11:41:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:52.781 11:41:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.781 11:41:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:12:52.781 11:41:18 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.781 11:41:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:52.781 11:41:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.781 11:41:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:52.781 11:41:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:52.781 11:41:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.781 11:41:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:52.781 11:41:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.781 11:41:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:52.781 11:41:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:52.781 11:41:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:52.781 11:41:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:52.781 11:41:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:52.781 11:41:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.781 11:41:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:52.781 11:41:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:52.781 11:41:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.781 11:41:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:52.781 11:41:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:52.781 11:41:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:52.781 11:41:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:52.781 11:41:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:52.781 11:41:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:52.781 11:41:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:52.781 11:41:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:53.039 11:41:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:53.039 11:41:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:53.039 11:41:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:53.039 11:41:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.039 11:41:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:53.039 11:41:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.039 11:41:18 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:53.039 11:41:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.039 11:41:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:53.039 11:41:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.039 11:41:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:53.039 11:41:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.039 11:41:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:53.039 11:41:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.039 11:41:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:53.039 11:41:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:53.039 11:41:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.039 11:41:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:53.039 11:41:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.039 11:41:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:53.039 11:41:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:53.039 11:41:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:53.039 11:41:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:12:53.039 11:41:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:53.039 11:41:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:53.039 11:41:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:53.297 11:41:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:53.297 11:41:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:53.297 11:41:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:53.297 11:41:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.297 11:41:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:53.297 11:41:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.297 11:41:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:53.297 11:41:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.297 11:41:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:53.297 11:41:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.297 11:41:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:53.297 11:41:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:53.297 11:41:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:53.297 11:41:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:53.297 11:41:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.297 11:41:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:53.297 11:41:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:53.297 11:41:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.297 11:41:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:53.297 11:41:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:53.297 11:41:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:12:53.297 11:41:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:53.297 11:41:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:53.297 11:41:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:53.297 11:41:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:53.297 11:41:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:53.554 11:41:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:53.554 11:41:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:53.554 11:41:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:53.554 11:41:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:53.554 11:41:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:53.554 11:41:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:53.554 11:41:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:53.554 11:41:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:53.554 11:41:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:53.554 11:41:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:53.554 11:41:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:53.554 11:41:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:53.554 11:41:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery 
subsystem referral")' 00:12:53.811 11:41:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:53.811 11:41:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:53.811 11:41:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.811 11:41:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:53.811 11:41:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.811 11:41:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:53.811 11:41:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:53.811 11:41:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:53.811 11:41:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:53.811 11:41:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.811 11:41:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:53.811 11:41:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:53.811 11:41:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.811 11:41:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:53.811 11:41:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:53.811 11:41:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:53.811 11:41:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:53.811 11:41:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:53.811 11:41:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:53.811 11:41:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:53.811 11:41:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:54.068 11:41:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:54.068 11:41:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:54.068 11:41:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:54.068 11:41:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:54.068 11:41:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:54.068 11:41:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:54.068 11:41:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:54.326 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:54.326 11:41:20 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:54.326 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:54.326 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:54.326 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:54.326 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:54.326 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:54.326 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:54.326 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.326 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:54.326 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.326 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:54.326 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:54.326 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.326 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:12:54.326 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.583 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:54.583 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:54.583 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:54.583 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:54.583 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:54.583 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:54.583 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:54.583 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:54.583 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:54.583 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:54.583 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:54.583 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:54.583 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:12:54.583 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:54.583 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # 
set +e 00:12:54.583 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:54.583 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:54.583 rmmod nvme_tcp 00:12:54.840 rmmod nvme_fabrics 00:12:54.840 rmmod nvme_keyring 00:12:54.840 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:54.840 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:12:54.840 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:12:54.840 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 2904055 ']' 00:12:54.840 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 2904055 00:12:54.840 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 2904055 ']' 00:12:54.840 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 2904055 00:12:54.840 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:12:54.840 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:54.840 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2904055 00:12:54.840 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:54.840 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:54.841 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2904055' 00:12:54.841 killing process with pid 2904055 00:12:54.841 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@973 -- # kill 2904055 00:12:54.841 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 2904055 00:12:55.774 11:41:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:55.774 11:41:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:55.774 11:41:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:55.774 11:41:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:12:55.774 11:41:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:12:55.774 11:41:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:55.774 11:41:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:12:55.774 11:41:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:55.774 11:41:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:55.774 11:41:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:55.774 11:41:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:55.774 11:41:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:58.308 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:58.308 00:12:58.308 real 0m8.865s 00:12:58.308 user 0m16.426s 00:12:58.308 sys 0m2.510s 00:12:58.308 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:58.308 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:58.308 
************************************ 00:12:58.308 END TEST nvmf_referrals 00:12:58.308 ************************************ 00:12:58.308 11:41:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:58.308 11:41:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:58.308 11:41:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:58.308 11:41:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:58.308 ************************************ 00:12:58.308 START TEST nvmf_connect_disconnect 00:12:58.308 ************************************ 00:12:58.308 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:58.308 * Looking for test storage... 
00:12:58.308 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:58.308 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:58.308 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:12:58.308 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:58.308 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:58.308 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:58.308 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:58.308 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:58.308 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:12:58.308 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:12:58.308 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:12:58.308 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:12:58.308 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:12:58.308 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:12:58.308 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:12:58.308 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:58.308 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:12:58.308 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:12:58.308 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:58.308 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:58.308 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:12:58.308 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:12:58.308 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:58.308 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:12:58.308 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:12:58.308 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:12:58.308 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:12:58.308 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:58.308 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:12:58.308 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:12:58.308 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:58.308 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:58.308 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:12:58.308 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:58.308 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:58.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:58.308 --rc genhtml_branch_coverage=1 00:12:58.308 --rc genhtml_function_coverage=1 00:12:58.308 --rc genhtml_legend=1 00:12:58.308 --rc geninfo_all_blocks=1 00:12:58.308 --rc geninfo_unexecuted_blocks=1 00:12:58.308 00:12:58.308 ' 00:12:58.308 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:58.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:58.308 --rc genhtml_branch_coverage=1 00:12:58.308 --rc genhtml_function_coverage=1 00:12:58.308 --rc genhtml_legend=1 00:12:58.308 --rc geninfo_all_blocks=1 00:12:58.308 --rc geninfo_unexecuted_blocks=1 00:12:58.308 00:12:58.308 ' 00:12:58.308 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:58.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:58.308 --rc genhtml_branch_coverage=1 00:12:58.308 --rc genhtml_function_coverage=1 00:12:58.308 --rc genhtml_legend=1 00:12:58.308 --rc geninfo_all_blocks=1 00:12:58.308 --rc geninfo_unexecuted_blocks=1 00:12:58.308 00:12:58.308 ' 00:12:58.308 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:58.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:58.308 --rc genhtml_branch_coverage=1 00:12:58.308 --rc genhtml_function_coverage=1 00:12:58.308 --rc genhtml_legend=1 00:12:58.308 --rc geninfo_all_blocks=1 00:12:58.308 --rc geninfo_unexecuted_blocks=1 00:12:58.308 00:12:58.308 ' 00:12:58.308 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:58.308 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:58.308 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:58.308 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:58.308 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:58.308 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:58.308 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:58.308 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:58.308 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:58.308 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:58.308 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:58.308 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:58.308 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:58.308 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:58.308 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:58.308 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:12:58.308 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:58.308 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:58.308 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:58.308 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:12:58.308 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:58.309 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:58.309 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:58.309 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.309 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.309 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.309 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:58.309 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.309 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:12:58.309 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:58.309 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:58.309 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:58.309 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:58.309 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:58.309 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:58.309 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:58.309 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:58.309 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:58.309 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:58.309 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:58.309 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:58.309 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:58.309 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:58.309 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:58.309 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:58.309 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:58.309 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:58.309 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:58.309 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:58.309 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:58.309 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:58.309 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:58.309 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:12:58.309 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:00.208 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:00.208 11:41:25 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:13:00.208 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:00.209 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:00.209 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:00.209 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:00.209 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:00.209 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:13:00.209 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:00.209 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:13:00.209 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:13:00.209 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:13:00.209 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:13:00.209 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:13:00.209 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:13:00.209 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:00.209 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:00.209 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:00.209 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:00.209 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:00.209 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:00.209 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:00.209 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:00.209 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:00.209 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:00.209 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:00.209 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:00.209 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:00.209 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:00.209 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:00.209 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:00.209 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:00.209 11:41:25 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:00.209 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:00.209 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:00.209 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:00.209 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:00.209 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:00.209 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:00.209 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:00.209 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:00.209 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:00.209 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:00.209 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:00.209 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:00.209 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:00.209 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:00.209 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:00.209 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:00.209 11:41:25 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:00.209 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:00.209 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:00.209 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:00.209 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:00.209 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:00.209 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:00.209 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:00.209 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:00.209 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:00.209 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:00.209 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:00.209 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:00.209 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:00.209 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:00.209 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:00.209 11:41:25 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:00.209 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:00.209 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:00.209 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:00.209 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:00.209 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:00.209 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:00.209 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:00.209 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:13:00.209 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:00.209 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:00.209 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:00.209 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:00.209 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:00.209 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:00.209 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:00.209 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:00.209 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:00.209 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:00.209 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:00.209 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:00.209 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:00.209 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:00.209 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:00.209 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:00.209 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:00.209 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:00.209 11:41:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:00.209 11:41:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:00.209 11:41:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:00.209 11:41:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:00.468 11:41:26 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:00.468 11:41:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:00.468 11:41:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:00.468 11:41:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:00.468 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:00.468 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.365 ms 00:13:00.468 00:13:00.468 --- 10.0.0.2 ping statistics --- 00:13:00.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:00.468 rtt min/avg/max/mdev = 0.365/0.365/0.365/0.000 ms 00:13:00.468 11:41:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:00.468 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:00.468 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:13:00.468 00:13:00.468 --- 10.0.0.1 ping statistics --- 00:13:00.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:00.468 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:13:00.468 11:41:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:00.468 11:41:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:13:00.468 11:41:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:00.468 11:41:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:00.468 11:41:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:00.468 11:41:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:00.468 11:41:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:00.468 11:41:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:00.468 11:41:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:00.468 11:41:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:13:00.468 11:41:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:00.468 11:41:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:00.468 11:41:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:00.468 11:41:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # 
nvmfpid=2906613 00:13:00.468 11:41:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:00.468 11:41:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 2906613 00:13:00.468 11:41:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 2906613 ']' 00:13:00.468 11:41:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:00.468 11:41:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:00.468 11:41:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:00.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:00.468 11:41:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:00.468 11:41:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:00.468 [2024-11-18 11:41:26.334716] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:13:00.468 [2024-11-18 11:41:26.334887] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:00.726 [2024-11-18 11:41:26.509682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:00.983 [2024-11-18 11:41:26.658288] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:13:00.983 [2024-11-18 11:41:26.658366] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:00.984 [2024-11-18 11:41:26.658392] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:00.984 [2024-11-18 11:41:26.658417] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:00.984 [2024-11-18 11:41:26.658437] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:00.984 [2024-11-18 11:41:26.661251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:00.984 [2024-11-18 11:41:26.661329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:00.984 [2024-11-18 11:41:26.661390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:00.984 [2024-11-18 11:41:26.661396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:01.549 11:41:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:01.549 11:41:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:13:01.549 11:41:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:01.549 11:41:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:01.549 11:41:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:01.549 11:41:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:01.549 11:41:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:13:01.549 11:41:27 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.549 11:41:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:01.549 [2024-11-18 11:41:27.358203] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:01.549 11:41:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.549 11:41:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:13:01.549 11:41:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.549 11:41:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:01.807 11:41:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.807 11:41:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:13:01.807 11:41:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:01.807 11:41:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.807 11:41:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:01.807 11:41:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.807 11:41:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:01.807 11:41:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.807 11:41:27 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:01.807 11:41:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.807 11:41:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:01.807 11:41:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.807 11:41:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:01.807 [2024-11-18 11:41:27.485026] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:01.807 11:41:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.807 11:41:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:13:01.807 11:41:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:13:01.807 11:41:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:13:01.807 11:41:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:13:04.434 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:06.959 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:08.857 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:11.382 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:13.906 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:15.805 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:18.332 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:20.860 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:22.758 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:25.286 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:27.813 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:29.710 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:32.237 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:34.762 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:37.287 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:39.816 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:41.713 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:44.239 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:46.256 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:48.795 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:51.330 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:53.236 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:55.771 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:58.313 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:00.220 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:02.757 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:05.298 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:07.297 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:09.832 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:12.367 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:14.274 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:16.809 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:19.349 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:21.889 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:23.796 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:26.330 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:28.863 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:31.395 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:33.302 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:35.835 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:38.373 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:40.282 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:42.880 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:44.785 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:47.320 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:49.860 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:51.768 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:54.304 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:56.842 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:58.754 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:01.291 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:03.825 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:06.365 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:08.904 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:10.808 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:13.346 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:15.879 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:17.784 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:20.316 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:22.861 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:24.768 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:27.304 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:29.841 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:31.829 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:34.361 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:36.893 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:38.798 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:41.331 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:43.865 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:46.403 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:48.307 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:50.840 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:53.379 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:55.284 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:57.822 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:00.359 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:02.896 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:04.808 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:07.341 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:09.876 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:11.777 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:14.397 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:16.932 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:19.470 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:21.375 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:23.911 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:26.448 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:28.352 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:30.890 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:33.427 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:35.335 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:37.869 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:40.404 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:42.308 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:44.845 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:47.381 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:49.285 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:51.817 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:54.354 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:56.257 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:56.257 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:16:56.257 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:16:56.257 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:56.257 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:16:56.257 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:56.257 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:16:56.257 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:56.257 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:56.257 rmmod nvme_tcp 00:16:56.257 rmmod nvme_fabrics 00:16:56.257 rmmod nvme_keyring 00:16:56.257 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 
-- # modprobe -v -r nvme-fabrics 00:16:56.257 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:16:56.257 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:16:56.257 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 2906613 ']' 00:16:56.257 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 2906613 00:16:56.257 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 2906613 ']' 00:16:56.257 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 2906613 00:16:56.257 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:16:56.257 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:56.257 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2906613 00:16:56.257 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:56.257 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:56.257 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2906613' 00:16:56.257 killing process with pid 2906613 00:16:56.257 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 2906613 00:16:56.257 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 2906613 00:16:57.636 11:45:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:57.636 11:45:23 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:57.636 11:45:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:57.636 11:45:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:16:57.636 11:45:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:16:57.636 11:45:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:57.636 11:45:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:16:57.636 11:45:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:57.636 11:45:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:57.636 11:45:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:57.636 11:45:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:57.636 11:45:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:59.595 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:59.595 00:16:59.595 real 4m1.678s 00:16:59.595 user 15m13.684s 00:16:59.595 sys 0m38.916s 00:16:59.595 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:59.595 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:59.595 ************************************ 00:16:59.595 END TEST nvmf_connect_disconnect 00:16:59.595 ************************************ 00:16:59.595 11:45:25 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:59.595 11:45:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:59.595 11:45:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:59.595 11:45:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:59.854 ************************************ 00:16:59.854 START TEST nvmf_multitarget 00:16:59.854 ************************************ 00:16:59.854 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:59.854 * Looking for test storage... 00:16:59.854 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:59.854 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:59.854 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lcov --version 00:16:59.854 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:59.854 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:59.854 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:59.854 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:59.854 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:59.854 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:16:59.854 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 
-- # read -ra ver1 00:16:59.854 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:16:59.854 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:16:59.854 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:16:59.854 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:16:59.854 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:16:59.854 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:59.854 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:16:59.854 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:16:59.854 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:59.854 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:59.854 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:16:59.854 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:16:59.854 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:59.854 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:16:59.854 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:16:59.854 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:16:59.854 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:16:59.854 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:59.854 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:16:59.854 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:16:59.854 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:59.854 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:59.854 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:16:59.854 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:59.854 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:59.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:59.854 --rc genhtml_branch_coverage=1 00:16:59.854 --rc genhtml_function_coverage=1 00:16:59.854 --rc genhtml_legend=1 00:16:59.854 --rc geninfo_all_blocks=1 00:16:59.854 --rc 
geninfo_unexecuted_blocks=1 00:16:59.854 00:16:59.854 ' 00:16:59.854 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:59.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:59.854 --rc genhtml_branch_coverage=1 00:16:59.854 --rc genhtml_function_coverage=1 00:16:59.854 --rc genhtml_legend=1 00:16:59.854 --rc geninfo_all_blocks=1 00:16:59.854 --rc geninfo_unexecuted_blocks=1 00:16:59.854 00:16:59.854 ' 00:16:59.854 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:59.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:59.854 --rc genhtml_branch_coverage=1 00:16:59.854 --rc genhtml_function_coverage=1 00:16:59.854 --rc genhtml_legend=1 00:16:59.854 --rc geninfo_all_blocks=1 00:16:59.854 --rc geninfo_unexecuted_blocks=1 00:16:59.854 00:16:59.854 ' 00:16:59.854 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:59.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:59.854 --rc genhtml_branch_coverage=1 00:16:59.854 --rc genhtml_function_coverage=1 00:16:59.854 --rc genhtml_legend=1 00:16:59.854 --rc geninfo_all_blocks=1 00:16:59.854 --rc geninfo_unexecuted_blocks=1 00:16:59.854 00:16:59.854 ' 00:16:59.854 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:59.854 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:16:59.854 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:59.854 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:59.854 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:59.854 11:45:25 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:59.854 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:59.854 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:59.854 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:59.854 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:59.854 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:59.854 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:59.854 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:59.854 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:59.854 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:59.854 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:59.855 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:59.855 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:59.855 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:59.855 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:16:59.855 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:16:59.855 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:59.855 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:59.855 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.855 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.855 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.855 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:16:59.855 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.855 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:16:59.855 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:59.855 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:59.855 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:59.855 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:59.855 11:45:25 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:59.855 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:59.855 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:59.855 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:59.855 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:59.855 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:59.855 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:16:59.855 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:16:59.855 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:59.855 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:59.855 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:59.855 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:59.855 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:59.855 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:59.855 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:59.855 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:59.855 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:59.855 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:59.855 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:16:59.855 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:01.758 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:01.758 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:17:01.758 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:01.758 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:01.758 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:01.758 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:01.758 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:01.758 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:17:01.758 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:01.758 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:17:01.758 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:17:01.758 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:17:01.758 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:17:01.758 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:17:01.758 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@322 -- # local -ga mlx 00:17:01.758 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:01.758 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:01.758 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:01.758 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:01.758 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:01.758 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:01.758 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:01.758 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:01.758 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:01.758 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:01.758 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:01.758 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:01.758 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:01.758 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:01.758 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # 
[[ e810 == mlx5 ]] 00:17:01.758 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:01.758 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:01.758 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:01.758 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:01.758 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:01.758 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:01.758 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:01.758 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:01.758 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:01.758 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:01.758 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:01.758 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:01.758 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:01.758 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:01.758 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:01.758 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:01.758 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:01.758 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:17:01.758 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:01.758 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:01.758 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:01.758 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:01.758 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:01.758 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:01.758 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:01.758 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:01.758 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:01.758 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:01.758 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:01.758 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:01.758 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:01.758 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:01.758 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:01.758 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:01.758 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- 
# [[ tcp == tcp ]] 00:17:01.758 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:01.758 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:01.758 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:01.758 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:01.758 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:01.758 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:01.758 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:01.758 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:01.758 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:17:01.758 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:01.758 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:01.758 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:01.758 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:01.758 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:01.758 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:01.758 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:01.758 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:01.758 11:45:27 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:01.758 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:01.758 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:01.758 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:01.758 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:01.758 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:01.758 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:01.758 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:01.758 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:01.758 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:02.018 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:02.018 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:02.018 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:02.018 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:02.018 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:02.018 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:02.018 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:02.018 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:02.018 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:02.018 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.357 ms 00:17:02.018 00:17:02.018 --- 10.0.0.2 ping statistics --- 00:17:02.018 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:02.018 rtt min/avg/max/mdev = 0.357/0.357/0.357/0.000 ms 00:17:02.018 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:02.018 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:02.018 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:17:02.018 00:17:02.018 --- 10.0.0.1 ping statistics --- 00:17:02.018 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:02.018 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:17:02.018 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:02.019 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:17:02.019 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:02.019 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:02.019 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:02.019 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:02.019 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:02.019 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:02.019 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:02.019 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:17:02.019 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:02.019 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:02.019 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:02.019 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=2938840 00:17:02.019 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:02.019 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 2938840 00:17:02.019 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 2938840 ']' 00:17:02.019 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:02.019 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:02.019 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:02.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:02.019 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:02.019 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:02.019 [2024-11-18 11:45:27.870079] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:17:02.019 [2024-11-18 11:45:27.870223] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:02.279 [2024-11-18 11:45:28.018850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:02.279 [2024-11-18 11:45:28.157645] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:02.279 [2024-11-18 11:45:28.157737] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:02.279 [2024-11-18 11:45:28.157763] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:02.279 [2024-11-18 11:45:28.157787] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:02.279 [2024-11-18 11:45:28.157819] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:02.279 [2024-11-18 11:45:28.160756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:02.279 [2024-11-18 11:45:28.160829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:02.279 [2024-11-18 11:45:28.160924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:02.279 [2024-11-18 11:45:28.160930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:03.226 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:03.226 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:17:03.226 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:03.226 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:03.226 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:03.226 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:03.226 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:03.226 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:17:03.226 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:17:03.226 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:17:03.226 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n 
nvmf_tgt_1 -s 32 00:17:03.226 "nvmf_tgt_1" 00:17:03.226 11:45:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:17:03.484 "nvmf_tgt_2" 00:17:03.484 11:45:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:17:03.484 11:45:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:17:03.484 11:45:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:17:03.484 11:45:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:17:03.741 true 00:17:03.741 11:45:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:17:03.741 true 00:17:03.741 11:45:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:17:03.741 11:45:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:17:04.000 11:45:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:17:04.000 11:45:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:17:04.000 11:45:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:17:04.000 11:45:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:04.000 11:45:29 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:17:04.000 11:45:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:04.000 11:45:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:17:04.000 11:45:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:04.000 11:45:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:04.000 rmmod nvme_tcp 00:17:04.000 rmmod nvme_fabrics 00:17:04.000 rmmod nvme_keyring 00:17:04.000 11:45:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:04.000 11:45:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:17:04.000 11:45:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:17:04.000 11:45:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 2938840 ']' 00:17:04.000 11:45:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 2938840 00:17:04.000 11:45:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 2938840 ']' 00:17:04.000 11:45:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 2938840 00:17:04.001 11:45:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:17:04.001 11:45:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:04.001 11:45:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2938840 00:17:04.001 11:45:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:04.001 11:45:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- 
# '[' reactor_0 = sudo ']' 00:17:04.001 11:45:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2938840' 00:17:04.001 killing process with pid 2938840 00:17:04.001 11:45:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 2938840 00:17:04.001 11:45:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 2938840 00:17:05.381 11:45:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:05.381 11:45:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:05.381 11:45:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:05.381 11:45:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:17:05.381 11:45:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:17:05.381 11:45:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:05.381 11:45:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:17:05.381 11:45:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:05.381 11:45:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:05.381 11:45:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:05.381 11:45:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:05.381 11:45:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:07.289 11:45:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:07.289 
00:17:07.289 real 0m7.445s 00:17:07.289 user 0m12.009s 00:17:07.289 sys 0m2.133s 00:17:07.289 11:45:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:07.289 11:45:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:07.289 ************************************ 00:17:07.289 END TEST nvmf_multitarget 00:17:07.289 ************************************ 00:17:07.289 11:45:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:17:07.289 11:45:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:07.289 11:45:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:07.289 11:45:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:07.289 ************************************ 00:17:07.289 START TEST nvmf_rpc 00:17:07.289 ************************************ 00:17:07.289 11:45:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:17:07.289 * Looking for test storage... 
00:17:07.289 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:07.289 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:07.289 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:17:07.289 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:07.289 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:07.289 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:07.289 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:07.289 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:07.289 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:17:07.289 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:17:07.289 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:17:07.289 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:17:07.289 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:17:07.289 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:17:07.289 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:17:07.289 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:07.289 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:17:07.289 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:17:07.289 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:07.289 11:45:33 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:07.289 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:17:07.289 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:17:07.289 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:07.289 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:17:07.289 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:17:07.289 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:17:07.289 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:17:07.289 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:07.289 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:17:07.289 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:17:07.289 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:07.289 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:07.289 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:17:07.289 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:07.289 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:07.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:07.289 --rc genhtml_branch_coverage=1 00:17:07.289 --rc genhtml_function_coverage=1 00:17:07.289 --rc genhtml_legend=1 00:17:07.289 --rc geninfo_all_blocks=1 00:17:07.289 --rc geninfo_unexecuted_blocks=1 
00:17:07.289 00:17:07.289 ' 00:17:07.289 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:07.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:07.289 --rc genhtml_branch_coverage=1 00:17:07.289 --rc genhtml_function_coverage=1 00:17:07.289 --rc genhtml_legend=1 00:17:07.289 --rc geninfo_all_blocks=1 00:17:07.289 --rc geninfo_unexecuted_blocks=1 00:17:07.289 00:17:07.289 ' 00:17:07.289 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:07.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:07.289 --rc genhtml_branch_coverage=1 00:17:07.289 --rc genhtml_function_coverage=1 00:17:07.289 --rc genhtml_legend=1 00:17:07.289 --rc geninfo_all_blocks=1 00:17:07.289 --rc geninfo_unexecuted_blocks=1 00:17:07.289 00:17:07.289 ' 00:17:07.289 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:07.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:07.289 --rc genhtml_branch_coverage=1 00:17:07.289 --rc genhtml_function_coverage=1 00:17:07.289 --rc genhtml_legend=1 00:17:07.289 --rc geninfo_all_blocks=1 00:17:07.289 --rc geninfo_unexecuted_blocks=1 00:17:07.289 00:17:07.289 ' 00:17:07.289 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:07.289 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:17:07.289 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:07.289 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:07.289 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:07.289 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:07.289 11:45:33 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:07.289 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:07.289 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:07.289 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:07.289 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:07.289 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:07.289 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:07.290 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:07.290 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:07.290 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:07.290 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:07.290 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:07.290 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:07.290 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:17:07.290 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:07.290 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:07.290 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:07.290 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.290 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.290 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.290 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:17:07.290 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.290 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:17:07.290 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:07.290 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:07.290 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:07.290 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:07.290 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:07.290 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:07.290 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:07.290 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:07.290 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:07.290 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:07.290 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:17:07.290 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:17:07.290 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:07.290 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:07.290 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:07.290 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:07.290 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:07.290 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:07.290 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:07.290 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:07.290 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:07.290 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:07.290 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:17:07.290 11:45:33 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:09.827 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:09.827 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:17:09.827 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:09.827 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:09.827 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:09.827 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:09.827 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:09.827 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:17:09.827 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:09.827 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:17:09.827 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:17:09.827 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:17:09.827 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:17:09.827 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:17:09.827 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:17:09.827 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:09.827 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:09.827 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:09.827 
11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:09.827 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:09.827 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:09.827 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:09.827 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:09.827 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:09.827 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:09.827 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:09.827 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:09.827 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:09.827 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:09.827 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:09.827 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:09.827 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:09.827 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:09.827 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:09.827 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 
(0x8086 - 0x159b)' 00:17:09.827 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:09.827 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:09.827 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:09.827 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:09.827 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:09.827 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:09.827 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:09.827 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:09.827 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:09.827 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:09.827 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:09.827 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:09.827 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:09.827 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:09.827 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:09.827 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:09.827 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:09.827 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:09.827 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:17:09.827 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:09.827 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:09.827 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:09.827 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:09.827 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:09.827 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:09.827 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:09.827 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:09.827 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:09.827 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:09.827 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:09.827 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:09.827 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:09.827 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:09.827 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:09.827 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:09.827 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:09.828 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:09.828 11:45:35 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:09.828 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:17:09.828 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:09.828 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:09.828 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:09.828 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:09.828 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:09.828 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:09.828 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:09.828 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:09.828 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:09.828 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:09.828 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:09.828 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:09.828 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:09.828 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:09.828 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:09.828 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:09.828 
11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:09.828 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:09.828 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:09.828 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:09.828 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:09.828 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:09.828 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:09.828 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:09.828 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:09.828 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:09.828 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:09.828 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.395 ms 00:17:09.828 00:17:09.828 --- 10.0.0.2 ping statistics --- 00:17:09.828 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:09.828 rtt min/avg/max/mdev = 0.395/0.395/0.395/0.000 ms 00:17:09.828 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:09.828 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:09.828 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:17:09.828 00:17:09.828 --- 10.0.0.1 ping statistics --- 00:17:09.828 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:09.828 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:17:09.828 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:09.828 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:17:09.828 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:09.828 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:09.828 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:09.828 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:09.828 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:09.828 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:09.828 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:09.828 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:17:09.828 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:09.828 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:09.828 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:09.828 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=2941197 00:17:09.828 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:09.828 
11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 2941197 00:17:09.828 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 2941197 ']' 00:17:09.828 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:09.828 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:09.828 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:09.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:09.828 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:09.828 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:09.828 [2024-11-18 11:45:35.562032] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:17:09.828 [2024-11-18 11:45:35.562195] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:10.088 [2024-11-18 11:45:35.736704] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:10.088 [2024-11-18 11:45:35.884668] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:10.088 [2024-11-18 11:45:35.884750] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:10.088 [2024-11-18 11:45:35.884776] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:10.088 [2024-11-18 11:45:35.884805] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:17:10.088 [2024-11-18 11:45:35.884825] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:10.088 [2024-11-18 11:45:35.887885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:10.088 [2024-11-18 11:45:35.887977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:10.088 [2024-11-18 11:45:35.888031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:10.088 [2024-11-18 11:45:35.888034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:11.026 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:11.026 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:17:11.026 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:11.026 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:11.026 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:11.026 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:11.026 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:17:11.026 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.026 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:11.026 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.026 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:17:11.026 "tick_rate": 2700000000, 00:17:11.026 "poll_groups": [ 00:17:11.026 { 00:17:11.026 "name": "nvmf_tgt_poll_group_000", 00:17:11.026 "admin_qpairs": 0, 00:17:11.026 "io_qpairs": 0, 00:17:11.026 
"current_admin_qpairs": 0, 00:17:11.026 "current_io_qpairs": 0, 00:17:11.026 "pending_bdev_io": 0, 00:17:11.026 "completed_nvme_io": 0, 00:17:11.026 "transports": [] 00:17:11.026 }, 00:17:11.026 { 00:17:11.026 "name": "nvmf_tgt_poll_group_001", 00:17:11.026 "admin_qpairs": 0, 00:17:11.026 "io_qpairs": 0, 00:17:11.026 "current_admin_qpairs": 0, 00:17:11.026 "current_io_qpairs": 0, 00:17:11.026 "pending_bdev_io": 0, 00:17:11.026 "completed_nvme_io": 0, 00:17:11.026 "transports": [] 00:17:11.026 }, 00:17:11.026 { 00:17:11.026 "name": "nvmf_tgt_poll_group_002", 00:17:11.026 "admin_qpairs": 0, 00:17:11.026 "io_qpairs": 0, 00:17:11.026 "current_admin_qpairs": 0, 00:17:11.026 "current_io_qpairs": 0, 00:17:11.026 "pending_bdev_io": 0, 00:17:11.026 "completed_nvme_io": 0, 00:17:11.026 "transports": [] 00:17:11.026 }, 00:17:11.026 { 00:17:11.026 "name": "nvmf_tgt_poll_group_003", 00:17:11.026 "admin_qpairs": 0, 00:17:11.026 "io_qpairs": 0, 00:17:11.026 "current_admin_qpairs": 0, 00:17:11.026 "current_io_qpairs": 0, 00:17:11.026 "pending_bdev_io": 0, 00:17:11.026 "completed_nvme_io": 0, 00:17:11.026 "transports": [] 00:17:11.026 } 00:17:11.026 ] 00:17:11.026 }' 00:17:11.026 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:17:11.026 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:17:11.026 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:17:11.026 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:17:11.026 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:17:11.026 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:17:11.026 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:17:11.026 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # 
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:11.026 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.026 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:11.026 [2024-11-18 11:45:36.679908] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:11.026 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.026 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:17:11.026 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.026 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:11.026 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.026 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:17:11.026 "tick_rate": 2700000000, 00:17:11.026 "poll_groups": [ 00:17:11.026 { 00:17:11.026 "name": "nvmf_tgt_poll_group_000", 00:17:11.026 "admin_qpairs": 0, 00:17:11.026 "io_qpairs": 0, 00:17:11.026 "current_admin_qpairs": 0, 00:17:11.026 "current_io_qpairs": 0, 00:17:11.026 "pending_bdev_io": 0, 00:17:11.026 "completed_nvme_io": 0, 00:17:11.026 "transports": [ 00:17:11.026 { 00:17:11.026 "trtype": "TCP" 00:17:11.026 } 00:17:11.026 ] 00:17:11.026 }, 00:17:11.026 { 00:17:11.026 "name": "nvmf_tgt_poll_group_001", 00:17:11.026 "admin_qpairs": 0, 00:17:11.026 "io_qpairs": 0, 00:17:11.026 "current_admin_qpairs": 0, 00:17:11.026 "current_io_qpairs": 0, 00:17:11.026 "pending_bdev_io": 0, 00:17:11.026 "completed_nvme_io": 0, 00:17:11.026 "transports": [ 00:17:11.026 { 00:17:11.026 "trtype": "TCP" 00:17:11.026 } 00:17:11.026 ] 00:17:11.026 }, 00:17:11.026 { 00:17:11.026 "name": "nvmf_tgt_poll_group_002", 00:17:11.026 "admin_qpairs": 0, 00:17:11.026 "io_qpairs": 0, 00:17:11.026 
"current_admin_qpairs": 0, 00:17:11.026 "current_io_qpairs": 0, 00:17:11.026 "pending_bdev_io": 0, 00:17:11.026 "completed_nvme_io": 0, 00:17:11.026 "transports": [ 00:17:11.026 { 00:17:11.026 "trtype": "TCP" 00:17:11.026 } 00:17:11.026 ] 00:17:11.026 }, 00:17:11.026 { 00:17:11.026 "name": "nvmf_tgt_poll_group_003", 00:17:11.026 "admin_qpairs": 0, 00:17:11.026 "io_qpairs": 0, 00:17:11.026 "current_admin_qpairs": 0, 00:17:11.026 "current_io_qpairs": 0, 00:17:11.026 "pending_bdev_io": 0, 00:17:11.026 "completed_nvme_io": 0, 00:17:11.026 "transports": [ 00:17:11.026 { 00:17:11.026 "trtype": "TCP" 00:17:11.026 } 00:17:11.026 ] 00:17:11.026 } 00:17:11.026 ] 00:17:11.026 }' 00:17:11.026 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:17:11.026 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:17:11.026 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:17:11.026 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:11.026 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:17:11.026 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:17:11.026 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:17:11.026 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:17:11.026 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:11.026 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:17:11.026 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:17:11.026 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # 
MALLOC_BDEV_SIZE=64 00:17:11.026 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:17:11.026 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:11.026 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.026 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:11.026 Malloc1 00:17:11.026 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.026 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:11.026 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.026 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:11.026 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.026 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:11.026 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.026 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:11.026 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.026 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:17:11.026 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.026 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:11.026 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.026 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:11.026 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.026 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:11.026 [2024-11-18 11:45:36.883964] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:11.026 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.027 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:17:11.027 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:17:11.027 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:17:11.027 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:17:11.027 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:11.027 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:17:11.027 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:11.027 
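Earlier in this run, `target/rpc.sh` validates the `nvmf_get_stats` payload with two helpers: `jcount` (pipe through `jq`, count lines with `wc -l`) and `jsum` (pipe through `jq`, sum with `awk`). A dependency-free sketch of the same checks using only grep/awk on a trimmed example payload (the two-group JSON and the function bodies here are assumptions; the real helpers use `jq` filters like `.poll_groups[].name`):

```shell
# Trimmed example payload in the shape of the nvmf_get_stats output above.
stats='{"poll_groups":[{"name":"pg0","admin_qpairs":0},{"name":"pg1","admin_qpairs":0}]}'

# Hypothetical stand-in for jcount '.poll_groups[].name':
# count "name" keys instead of jq output lines.
jcount_names() { grep -o '"name"' <<<"$1" | wc -l; }

# Hypothetical stand-in for jsum '.poll_groups[].admin_qpairs':
# extract each value and sum with awk, as the traced helper does.
jsum_admin_qpairs() {
    grep -o '"admin_qpairs": *[0-9]*' <<<"$1" | awk -F: '{s+=$2} END{print s+0}'
}
```

Against the real four-group payload these checks reproduce the `(( 4 == 4 ))` and `(( 0 == 0 ))` comparisons seen in the trace.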
11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:17:11.027 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:11.027 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:17:11.027 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:17:11.027 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:17:11.027 [2024-11-18 11:45:36.907221] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:17:11.287 Failed to write to /dev/nvme-fabrics: Input/output error 00:17:11.287 could not add new controller: failed to write to nvme-fabrics device 00:17:11.287 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:17:11.287 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:11.287 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:11.287 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:11.287 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:11.287 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.287 11:45:36 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:11.287 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.287 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:11.857 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:17:11.857 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:11.857 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:11.857 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:11.857 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:13.761 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:13.761 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:13.761 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:13.761 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:13.761 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:13.761 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:13.761 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:14.020 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:14.020 11:45:39 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:14.020 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:14.020 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:14.020 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:14.020 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:14.020 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:14.020 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:14.020 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:14.020 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.020 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:14.020 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.020 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:14.020 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:17:14.020 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp 
-n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:14.020 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:17:14.020 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:14.020 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:17:14.020 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:14.020 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:17:14.020 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:14.020 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:17:14.020 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:17:14.020 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:14.020 [2024-11-18 11:45:39.853030] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:17:14.020 Failed to write to /dev/nvme-fabrics: Input/output error 00:17:14.020 could not add new controller: failed to write to nvme-fabrics device 00:17:14.020 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:17:14.020 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:14.020 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:14.020 11:45:39 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:14.020 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:17:14.020 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.020 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:14.020 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.020 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:14.958 11:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:17:14.958 11:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:14.958 11:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:14.958 11:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:14.958 11:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:16.859 11:45:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:16.859 11:45:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:16.859 11:45:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:16.859 11:45:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:16.859 11:45:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( 
nvme_devices == nvme_device_counter )) 00:17:16.859 11:45:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:16.859 11:45:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:16.859 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:16.859 11:45:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:16.859 11:45:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:16.859 11:45:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:16.859 11:45:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:16.859 11:45:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:16.859 11:45:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:16.859 11:45:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:16.859 11:45:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:16.859 11:45:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.859 11:45:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:16.859 11:45:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.859 11:45:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:17:16.859 11:45:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:16.859 11:45:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s 
SPDKISFASTANDAWESOME 00:17:16.859 11:45:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.859 11:45:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:16.859 11:45:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.859 11:45:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:16.859 11:45:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.859 11:45:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:16.859 [2024-11-18 11:45:42.710405] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:16.859 11:45:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.859 11:45:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:16.859 11:45:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.859 11:45:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:16.859 11:45:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.859 11:45:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:16.859 11:45:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.859 11:45:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:16.859 11:45:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.859 11:45:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- 
# nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:17.794 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:17.794 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:17.794 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:17.794 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:17.794 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:19.693 11:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:19.693 11:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:19.693 11:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:19.693 11:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:19.693 11:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:19.693 11:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:19.693 11:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:19.693 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:19.693 11:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:19.693 11:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:19.693 11:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:19.693 11:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:19.693 11:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:19.693 11:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:19.693 11:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:19.693 11:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:19.693 11:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.693 11:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.952 11:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.952 11:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:19.952 11:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.952 11:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.952 11:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.952 11:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:19.952 11:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:19.952 11:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.952 11:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.952 11:45:45 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.952 11:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:19.952 11:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.952 11:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.952 [2024-11-18 11:45:45.604960] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:19.952 11:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.952 11:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:19.952 11:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.952 11:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.952 11:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.952 11:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:19.952 11:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.952 11:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.952 11:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.952 11:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:20.518 11:45:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:20.518 11:45:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:20.518 11:45:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:20.518 11:45:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:20.518 11:45:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:23.045 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:23.045 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:23.045 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:23.045 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:23.045 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:23.045 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:23.045 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:23.045 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:23.045 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:23.045 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:23.045 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:23.045 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:23.045 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:23.045 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:23.045 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:23.045 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:23.045 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.045 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:23.045 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.045 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:23.045 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.045 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:23.045 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.045 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:23.045 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:23.045 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.045 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:23.045 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.045 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:17:23.045 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.045 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:23.045 [2024-11-18 11:45:48.572039] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:23.045 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.045 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:23.045 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.045 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:23.045 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.045 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:23.045 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.045 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:23.045 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.045 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:23.610 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:23.610 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:23.610 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 
-- # local nvme_device_counter=1 nvme_devices=0 00:17:23.610 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:23.610 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:25.510 11:45:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:25.510 11:45:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:25.510 11:45:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:25.510 11:45:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:25.510 11:45:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:25.510 11:45:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:25.510 11:45:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:25.769 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:25.769 11:45:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:25.769 11:45:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:25.769 11:45:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:25.769 11:45:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:25.769 11:45:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:25.769 11:45:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:25.769 11:45:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1235 -- # return 0 00:17:25.769 11:45:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:25.770 11:45:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.770 11:45:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.770 11:45:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.770 11:45:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:25.770 11:45:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.770 11:45:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.770 11:45:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.770 11:45:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:25.770 11:45:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:25.770 11:45:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.770 11:45:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.770 11:45:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.770 11:45:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:25.770 11:45:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.770 11:45:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.770 [2024-11-18 11:45:51.465356] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:25.770 11:45:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.770 11:45:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:25.770 11:45:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.770 11:45:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.770 11:45:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.770 11:45:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:25.770 11:45:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.770 11:45:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.770 11:45:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.770 11:45:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:26.337 11:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:26.337 11:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:26.337 11:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:26.337 11:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:26.337 11:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # 
sleep 2 00:17:28.237 11:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:28.237 11:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:28.237 11:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:28.237 11:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:28.237 11:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:28.237 11:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:28.237 11:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:28.496 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:28.496 11:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:28.496 11:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:28.496 11:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:28.496 11:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:28.497 11:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:28.497 11:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:28.497 11:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:28.497 11:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:28.497 11:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.497 11:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:28.497 11:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.497 11:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:28.497 11:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.497 11:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:28.497 11:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.497 11:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:28.497 11:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:28.497 11:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.497 11:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:28.497 11:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.497 11:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:28.497 11:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.497 11:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:28.497 [2024-11-18 11:45:54.292383] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:28.497 11:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.497 11:45:54 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:28.497 11:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.497 11:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:28.497 11:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.497 11:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:28.497 11:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.497 11:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:28.497 11:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.497 11:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:29.432 11:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:29.432 11:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:29.432 11:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:29.432 11:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:29.432 11:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:31.333 11:45:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:31.333 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l 
-o NAME,SERIAL 00:17:31.333 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:31.333 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:31.333 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:31.333 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:31.333 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:31.333 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:31.333 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:31.333 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:31.333 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:31.333 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:31.333 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:31.333 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:31.333 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:31.333 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:31.333 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.333 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.333 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:17:31.333 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:31.333 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.333 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.333 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.333 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:17:31.333 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:31.333 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:31.333 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.333 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.333 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.333 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:31.333 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.333 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.333 [2024-11-18 11:45:57.179463] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:31.333 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.333 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:31.333 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.333 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.333 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.333 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:31.333 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.333 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.333 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.333 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:31.333 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.333 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.333 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.333 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:31.333 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.333 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.333 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.333 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:31.333 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:31.333 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.333 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.592 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.592 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:31.592 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.592 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.592 [2024-11-18 11:45:57.227593] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:31.592 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.592 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:31.592 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.592 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.592 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.592 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:31.592 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.592 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.592 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.592 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:31.592 
11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.592 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.592 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.592 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:31.592 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.592 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.592 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.592 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:31.592 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:31.592 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.592 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.592 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.592 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:31.592 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.592 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.592 [2024-11-18 11:45:57.275761] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:31.592 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:17:31.592 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:31.592 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.592 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.592 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.592 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:31.592 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.592 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.592 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.592 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:31.592 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.592 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.592 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.592 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:31.592 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.592 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.592 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.592 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:31.592 
11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:31.592 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.592 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.592 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.592 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:31.592 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.592 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.592 [2024-11-18 11:45:57.323907] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:31.592 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.592 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:31.592 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.592 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.592 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.592 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:31.592 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.592 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.592 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.592 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:31.592 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.592 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.592 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.592 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:31.592 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.592 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.592 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.592 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:31.592 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:31.592 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.592 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.592 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.593 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:31.593 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.593 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.593 [2024-11-18 
11:45:57.372086] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:31.593 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.593 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:31.593 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.593 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.593 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.593 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:31.593 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.593 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.593 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.593 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:31.593 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.593 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.593 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.593 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:31.593 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.593 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.593 
11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.593 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:17:31.593 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.593 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.593 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.593 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:17:31.593 "tick_rate": 2700000000, 00:17:31.593 "poll_groups": [ 00:17:31.593 { 00:17:31.593 "name": "nvmf_tgt_poll_group_000", 00:17:31.593 "admin_qpairs": 2, 00:17:31.593 "io_qpairs": 84, 00:17:31.593 "current_admin_qpairs": 0, 00:17:31.593 "current_io_qpairs": 0, 00:17:31.593 "pending_bdev_io": 0, 00:17:31.593 "completed_nvme_io": 135, 00:17:31.593 "transports": [ 00:17:31.593 { 00:17:31.593 "trtype": "TCP" 00:17:31.593 } 00:17:31.593 ] 00:17:31.593 }, 00:17:31.593 { 00:17:31.593 "name": "nvmf_tgt_poll_group_001", 00:17:31.593 "admin_qpairs": 2, 00:17:31.593 "io_qpairs": 84, 00:17:31.593 "current_admin_qpairs": 0, 00:17:31.593 "current_io_qpairs": 0, 00:17:31.593 "pending_bdev_io": 0, 00:17:31.593 "completed_nvme_io": 183, 00:17:31.593 "transports": [ 00:17:31.593 { 00:17:31.593 "trtype": "TCP" 00:17:31.593 } 00:17:31.593 ] 00:17:31.593 }, 00:17:31.593 { 00:17:31.593 "name": "nvmf_tgt_poll_group_002", 00:17:31.593 "admin_qpairs": 1, 00:17:31.593 "io_qpairs": 84, 00:17:31.593 "current_admin_qpairs": 0, 00:17:31.593 "current_io_qpairs": 0, 00:17:31.593 "pending_bdev_io": 0, 00:17:31.593 "completed_nvme_io": 185, 00:17:31.593 "transports": [ 00:17:31.593 { 00:17:31.593 "trtype": "TCP" 00:17:31.593 } 00:17:31.593 ] 00:17:31.593 }, 00:17:31.593 { 00:17:31.593 "name": "nvmf_tgt_poll_group_003", 00:17:31.593 "admin_qpairs": 2, 00:17:31.593 "io_qpairs": 84, 
00:17:31.593 "current_admin_qpairs": 0, 00:17:31.593 "current_io_qpairs": 0, 00:17:31.593 "pending_bdev_io": 0, 00:17:31.593 "completed_nvme_io": 183, 00:17:31.593 "transports": [ 00:17:31.593 { 00:17:31.593 "trtype": "TCP" 00:17:31.593 } 00:17:31.593 ] 00:17:31.593 } 00:17:31.593 ] 00:17:31.593 }' 00:17:31.593 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:17:31.593 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:17:31.593 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:17:31.593 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:31.593 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:17:31.593 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:17:31.593 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:17:31.593 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:17:31.593 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:31.852 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:17:31.852 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:17:31.852 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:17:31.852 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:17:31.852 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:31.852 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:17:31.852 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:31.852 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:17:31.852 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:31.852 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:31.852 rmmod nvme_tcp 00:17:31.852 rmmod nvme_fabrics 00:17:31.852 rmmod nvme_keyring 00:17:31.852 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:31.852 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:17:31.852 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:17:31.852 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 2941197 ']' 00:17:31.852 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 2941197 00:17:31.852 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 2941197 ']' 00:17:31.852 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 2941197 00:17:31.852 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:17:31.852 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:31.852 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2941197 00:17:31.852 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:31.852 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:31.852 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2941197' 00:17:31.852 killing process with pid 2941197 00:17:31.852 11:45:57 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 2941197 00:17:31.852 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 2941197 00:17:33.228 11:45:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:33.228 11:45:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:33.228 11:45:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:33.228 11:45:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:17:33.228 11:45:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:17:33.228 11:45:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:33.228 11:45:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:17:33.228 11:45:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:33.228 11:45:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:33.228 11:45:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:33.228 11:45:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:33.228 11:45:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:35.184 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:35.184 00:17:35.184 real 0m27.925s 00:17:35.185 user 1m29.870s 00:17:35.185 sys 0m4.869s 00:17:35.185 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:35.185 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:35.185 ************************************ 00:17:35.185 END TEST 
nvmf_rpc 00:17:35.185 ************************************ 00:17:35.185 11:46:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:17:35.185 11:46:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:35.185 11:46:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:35.185 11:46:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:35.185 ************************************ 00:17:35.185 START TEST nvmf_invalid 00:17:35.185 ************************************ 00:17:35.185 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:17:35.185 * Looking for test storage... 00:17:35.185 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:35.185 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:35.185 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lcov --version 00:17:35.185 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:35.444 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:35.444 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:35.444 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:35.444 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:35.444 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:17:35.444 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- scripts/common.sh@336 -- # read -ra ver1 00:17:35.444 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:17:35.444 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:17:35.444 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:17:35.444 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:17:35.444 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:17:35.444 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:35.444 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:17:35.444 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:17:35.444 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:35.444 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:35.444 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:17:35.444 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:17:35.444 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:35.444 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:17:35.444 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:17:35.444 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:17:35.444 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:17:35.444 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:35.444 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:17:35.444 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:17:35.444 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:35.444 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:35.444 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:17:35.444 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:35.444 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:35.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:35.444 --rc genhtml_branch_coverage=1 00:17:35.444 --rc genhtml_function_coverage=1 00:17:35.444 --rc genhtml_legend=1 00:17:35.444 --rc geninfo_all_blocks=1 00:17:35.444 --rc geninfo_unexecuted_blocks=1 00:17:35.444 00:17:35.444 ' 
00:17:35.444 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:35.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:35.444 --rc genhtml_branch_coverage=1 00:17:35.444 --rc genhtml_function_coverage=1 00:17:35.444 --rc genhtml_legend=1 00:17:35.444 --rc geninfo_all_blocks=1 00:17:35.444 --rc geninfo_unexecuted_blocks=1 00:17:35.444 00:17:35.444 ' 00:17:35.444 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:35.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:35.444 --rc genhtml_branch_coverage=1 00:17:35.444 --rc genhtml_function_coverage=1 00:17:35.444 --rc genhtml_legend=1 00:17:35.444 --rc geninfo_all_blocks=1 00:17:35.444 --rc geninfo_unexecuted_blocks=1 00:17:35.444 00:17:35.444 ' 00:17:35.444 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:35.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:35.444 --rc genhtml_branch_coverage=1 00:17:35.444 --rc genhtml_function_coverage=1 00:17:35.444 --rc genhtml_legend=1 00:17:35.444 --rc geninfo_all_blocks=1 00:17:35.444 --rc geninfo_unexecuted_blocks=1 00:17:35.444 00:17:35.444 ' 00:17:35.444 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:35.444 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:17:35.444 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:35.444 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:35.444 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:35.444 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:35.444 11:46:01 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:35.444 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:35.444 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:35.444 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:35.444 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:35.444 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:35.444 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:35.444 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:35.444 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:35.444 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:35.444 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:35.444 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:35.444 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:35.444 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:17:35.444 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:35.444 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:35.444 
11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:35.445 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.445 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.445 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.445 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:17:35.445 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.445 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:17:35.445 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:35.445 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:35.445 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:35.445 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:35.445 11:46:01 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:35.445 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:35.445 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:35.445 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:35.445 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:35.445 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:35.445 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:17:35.445 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:35.445 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:17:35.445 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:17:35.445 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:17:35.445 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:17:35.445 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:35.445 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:35.445 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:35.445 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:35.445 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:35.445 11:46:01 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:35.445 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:35.445 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:35.445 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:35.445 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:35.445 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:17:35.445 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:37.343 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:37.343 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:17:37.343 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:37.343 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:37.343 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:37.343 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:37.343 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:37.343 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:17:37.343 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:37.343 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:17:37.343 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:17:37.343 11:46:03 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:17:37.343 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:17:37.343 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:17:37.343 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:17:37.343 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:37.343 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:37.343 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:37.343 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:37.343 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:37.343 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:37.343 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:37.343 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:37.343 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:37.343 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:37.343 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:37.343 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:37.343 11:46:03 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:37.343 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:37.343 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:37.343 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:37.343 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:37.343 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:37.343 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:37.343 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:37.343 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:37.343 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:37.343 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:37.343 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:37.343 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:37.343 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:37.343 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:37.343 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:37.343 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:37.343 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:37.343 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:17:37.343 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:37.343 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:37.343 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:37.343 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:37.343 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:37.343 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:37.343 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:37.343 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:37.343 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:37.344 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:37.344 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:37.344 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:37.344 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:37.344 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:37.344 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:37.344 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:37.344 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:37.344 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:37.344 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:37.344 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:37.344 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:37.344 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:37.344 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:37.344 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:37.344 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:37.344 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:37.344 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:37.344 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:17:37.344 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:37.344 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:37.344 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:37.344 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:37.344 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:37.344 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:37.344 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:37.344 11:46:03 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:37.344 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:37.344 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:37.344 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:37.344 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:37.344 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:37.344 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:37.344 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:37.344 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:37.344 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:37.344 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:37.344 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:37.344 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:37.344 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:37.344 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:37.602 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:37.602 11:46:03 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:37.602 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:37.602 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:37.602 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:37.602 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:17:37.602 00:17:37.602 --- 10.0.0.2 ping statistics --- 00:17:37.602 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:37.602 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:17:37.602 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:37.602 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:37.602 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:17:37.602 00:17:37.602 --- 10.0.0.1 ping statistics --- 00:17:37.602 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:37.602 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:17:37.602 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:37.602 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:17:37.602 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:37.602 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:37.602 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:37.602 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:37.602 11:46:03 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:37.602 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:37.602 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:37.602 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:17:37.602 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:37.602 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:37.602 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:37.602 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=2945988 00:17:37.602 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:37.602 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 2945988 00:17:37.602 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 2945988 ']' 00:17:37.602 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:37.602 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:37.602 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:37.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:37.602 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:37.602 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:37.602 [2024-11-18 11:46:03.406455] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:17:37.602 [2024-11-18 11:46:03.406620] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:37.859 [2024-11-18 11:46:03.578902] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:37.859 [2024-11-18 11:46:03.722544] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:37.859 [2024-11-18 11:46:03.722634] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:37.859 [2024-11-18 11:46:03.722661] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:37.859 [2024-11-18 11:46:03.722686] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:37.859 [2024-11-18 11:46:03.722706] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:37.859 [2024-11-18 11:46:03.725793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:37.859 [2024-11-18 11:46:03.725854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:37.859 [2024-11-18 11:46:03.729534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:37.859 [2024-11-18 11:46:03.729541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:38.793 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:38.793 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:17:38.793 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:38.793 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:38.793 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:38.793 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:38.793 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:38.793 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode20422 00:17:39.051 [2024-11-18 11:46:04.693385] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:17:39.051 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:17:39.051 { 00:17:39.051 "nqn": "nqn.2016-06.io.spdk:cnode20422", 00:17:39.051 "tgt_name": "foobar", 00:17:39.051 "method": "nvmf_create_subsystem", 00:17:39.051 "req_id": 1 00:17:39.051 } 00:17:39.051 Got JSON-RPC error 
response 00:17:39.051 response: 00:17:39.051 { 00:17:39.051 "code": -32603, 00:17:39.051 "message": "Unable to find target foobar" 00:17:39.051 }' 00:17:39.051 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:17:39.051 { 00:17:39.051 "nqn": "nqn.2016-06.io.spdk:cnode20422", 00:17:39.051 "tgt_name": "foobar", 00:17:39.051 "method": "nvmf_create_subsystem", 00:17:39.051 "req_id": 1 00:17:39.051 } 00:17:39.051 Got JSON-RPC error response 00:17:39.051 response: 00:17:39.051 { 00:17:39.051 "code": -32603, 00:17:39.051 "message": "Unable to find target foobar" 00:17:39.051 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:17:39.051 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:17:39.051 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode7144 00:17:39.309 [2024-11-18 11:46:05.022563] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7144: invalid serial number 'SPDKISFASTANDAWESOME' 00:17:39.309 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:17:39.309 { 00:17:39.309 "nqn": "nqn.2016-06.io.spdk:cnode7144", 00:17:39.309 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:17:39.309 "method": "nvmf_create_subsystem", 00:17:39.309 "req_id": 1 00:17:39.309 } 00:17:39.309 Got JSON-RPC error response 00:17:39.309 response: 00:17:39.309 { 00:17:39.309 "code": -32602, 00:17:39.309 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:17:39.309 }' 00:17:39.309 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:17:39.309 { 00:17:39.309 "nqn": "nqn.2016-06.io.spdk:cnode7144", 00:17:39.309 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:17:39.309 "method": "nvmf_create_subsystem", 00:17:39.309 
"req_id": 1 00:17:39.309 } 00:17:39.309 Got JSON-RPC error response 00:17:39.309 response: 00:17:39.309 { 00:17:39.309 "code": -32602, 00:17:39.309 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:17:39.309 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:17:39.309 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:17:39.309 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode3321 00:17:39.567 [2024-11-18 11:46:05.291488] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3321: invalid model number 'SPDK_Controller' 00:17:39.567 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:17:39.567 { 00:17:39.567 "nqn": "nqn.2016-06.io.spdk:cnode3321", 00:17:39.567 "model_number": "SPDK_Controller\u001f", 00:17:39.567 "method": "nvmf_create_subsystem", 00:17:39.567 "req_id": 1 00:17:39.567 } 00:17:39.567 Got JSON-RPC error response 00:17:39.567 response: 00:17:39.567 { 00:17:39.567 "code": -32602, 00:17:39.567 "message": "Invalid MN SPDK_Controller\u001f" 00:17:39.567 }' 00:17:39.567 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:17:39.567 { 00:17:39.567 "nqn": "nqn.2016-06.io.spdk:cnode3321", 00:17:39.567 "model_number": "SPDK_Controller\u001f", 00:17:39.567 "method": "nvmf_create_subsystem", 00:17:39.567 "req_id": 1 00:17:39.567 } 00:17:39.567 Got JSON-RPC error response 00:17:39.567 response: 00:17:39.567 { 00:17:39.567 "code": -32602, 00:17:39.567 "message": "Invalid MN SPDK_Controller\u001f" 00:17:39.567 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:17:39.567 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:17:39.567 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 
00:17:39.567 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:39.567 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:17:39.567 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:17:39.567 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:39.567 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:39.567 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:17:39.567 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:17:39.567 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:17:39.567 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:39.567 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:39.567 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:17:39.567 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:17:39.567 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:17:39.567 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:39.567 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:39.567 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:17:39.567 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:17:39.567 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:17:39.567 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:39.567 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:39.567 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:17:39.567 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:17:39.567 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:17:39.567 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:39.567 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:39.567 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:17:39.567 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:17:39.567 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:17:39.567 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:39.567 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:39.567 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:17:39.567 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:17:39.567 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:17:39.567 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:39.567 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:39.567 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:17:39.567 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:17:39.567 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:17:39.567 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:39.567 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:39.567 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:17:39.567 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 
00:17:39.567 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:17:39.567 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:39.567 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:39.567 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:17:39.567 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:17:39.567 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:17:39.567 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:39.568 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:39.568 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:17:39.568 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:17:39.568 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:17:39.568 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:39.568 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:39.568 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:17:39.568 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:17:39.568 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:17:39.568 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:39.568 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:39.568 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:17:39.568 
11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:17:39.568 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:17:39.568 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:39.568 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:39.568 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:17:39.568 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:17:39.568 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:17:39.568 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:39.568 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:39.568 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:17:39.568 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:17:39.568 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:17:39.568 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:39.568 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:39.568 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:17:39.568 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:17:39.568 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:17:39.568 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:39.568 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:39.568 11:46:05 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:17:39.568 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:17:39.568 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:17:39.568 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:39.568 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:39.568 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:17:39.568 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:17:39.568 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:17:39.568 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:39.568 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:39.568 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:17:39.568 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:17:39.568 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:17:39.568 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:39.568 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:39.568 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:17:39.568 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:17:39.568 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:17:39.568 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:39.568 11:46:05 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:39.568 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:17:39.568 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:17:39.568 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:17:39.568 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:39.568 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:39.568 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:17:39.568 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:17:39.568 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:17:39.568 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:39.568 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:39.568 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ ! 
== \- ]] 00:17:39.568 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '!i??sf>%O1; \x^X@Pnt#' 00:17:39.568 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '!i??sf>%O1; \x^X@Pnt#' nqn.2016-06.io.spdk:cnode13782 00:17:39.826 [2024-11-18 11:46:05.632649] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13782: invalid serial number '!i??sf>%O1; \x^X@Pnt#' 00:17:39.826 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:17:39.826 { 00:17:39.826 "nqn": "nqn.2016-06.io.spdk:cnode13782", 00:17:39.826 "serial_number": "!i??sf>%O1; \\x^X@Pnt#", 00:17:39.826 "method": "nvmf_create_subsystem", 00:17:39.826 "req_id": 1 00:17:39.826 } 00:17:39.826 Got JSON-RPC error response 00:17:39.826 response: 00:17:39.826 { 00:17:39.826 "code": -32602, 00:17:39.826 "message": "Invalid SN !i??sf>%O1; \\x^X@Pnt#" 00:17:39.826 }' 00:17:39.826 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:17:39.826 { 00:17:39.826 "nqn": "nqn.2016-06.io.spdk:cnode13782", 00:17:39.826 "serial_number": "!i??sf>%O1; \\x^X@Pnt#", 00:17:39.826 "method": "nvmf_create_subsystem", 00:17:39.826 "req_id": 1 00:17:39.826 } 00:17:39.826 Got JSON-RPC error response 00:17:39.826 response: 00:17:39.826 { 00:17:39.826 "code": -32602, 00:17:39.826 "message": "Invalid SN !i??sf>%O1; \\x^X@Pnt#" 00:17:39.826 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:17:39.826 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:17:39.826 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:17:39.826 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' 
'53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:39.826 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:17:39.826 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:17:39.826 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:39.826 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:39.826 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:17:39.826 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:17:39.826 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:17:39.826 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:39.826 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:39.826 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:17:39.826 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:17:39.826 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:17:39.826 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:39.826 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:39.826 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:17:39.826 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:17:39.827 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:17:39.827 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:39.827 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:39.827 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:17:39.827 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:17:39.827 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:17:39.827 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:39.827 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:39.827 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:17:39.827 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:17:39.827 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:17:39.827 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:39.827 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:39.827 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:17:39.827 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 
00:17:39.827 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:17:39.827 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:39.827 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:39.827 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:17:39.827 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:17:39.827 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:17:39.827 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:39.827 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:39.827 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:17:39.827 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:17:39.827 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:17:39.827 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:39.827 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:39.827 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:17:39.827 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:17:39.827 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:17:39.827 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:39.827 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:39.827 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:17:39.827 
11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:17:39.827 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:17:39.827 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:39.827 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:39.827 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:17:39.827 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:17:39.827 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:17:39.827 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:39.827 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:39.827 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:17:39.827 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:17:39.827 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:17:39.827 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:39.827 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:39.827 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:17:39.827 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:17:39.827 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:17:39.827 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:39.827 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:39.827 11:46:05 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:17:39.827 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:17:39.827 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:17:39.827 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:39.827 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:39.827 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:17:40.085 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:17:40.085 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:17:40.085 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:40.085 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:40.085 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:17:40.085 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:17:40.085 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:17:40.085 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:40.085 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:40.085 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:17:40.085 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:17:40.085 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:17:40.085 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:40.085 11:46:05 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:40.085 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:17:40.085 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:17:40.085 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:17:40.085 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:40.085 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:40.085 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:17:40.085 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:17:40.085 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:17:40.085 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:40.085 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:40.085 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:17:40.085 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:17:40.086 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:17:40.086 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:40.086 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:40.086 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:17:40.086 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:17:40.086 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:17:40.086 11:46:05 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:40.086 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:40.086 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:17:40.086 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:17:40.086 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:17:40.086 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:40.086 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:40.086 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:17:40.086 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:17:40.086 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:17:40.086 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:40.086 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:40.086 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:17:40.086 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:17:40.086 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:17:40.086 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:40.086 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:40.086 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:17:40.086 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:17:40.086 11:46:05 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:17:40.086 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:40.086 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:40.086 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:17:40.086 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:17:40.086 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:17:40.086 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:40.086 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:40.086 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:17:40.086 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:17:40.086 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:17:40.086 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:40.086 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:40.086 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:17:40.086 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:17:40.086 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:17:40.086 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:40.086 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:40.086 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:17:40.086 11:46:05 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:17:40.086 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:17:40.086 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:40.086 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:40.086 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:17:40.086 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:17:40.086 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:17:40.086 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:40.086 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:40.086 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:17:40.086 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:17:40.086 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:17:40.086 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:40.086 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:40.086 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:17:40.086 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:17:40.086 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:17:40.086 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:40.086 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:40.086 11:46:05 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:17:40.086 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:17:40.086 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:17:40.086 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:40.086 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:40.086 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:17:40.086 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:17:40.086 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:17:40.086 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:40.086 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:40.086 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:17:40.086 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:17:40.086 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:17:40.086 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:40.086 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:40.086 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:17:40.086 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:17:40.086 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:17:40.086 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:40.086 11:46:05 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:40.086 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:17:40.086 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:17:40.086 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:17:40.086 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:40.086 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:40.086 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:17:40.086 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:17:40.086 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:17:40.086 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:40.086 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:40.086 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:17:40.086 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:17:40.086 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:17:40.086 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:40.086 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:40.086 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:17:40.086 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:17:40.086 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:17:40.086 11:46:05 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:40.086 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:40.086 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:17:40.086 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:17:40.086 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:17:40.086 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:40.086 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:40.086 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ b == \- ]] 00:17:40.086 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'b.*j

/dev/null' 00:17:44.042 11:46:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:45.947 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:45.947 00:17:45.947 real 0m10.671s 00:17:45.947 user 0m26.968s 00:17:45.947 sys 0m2.667s 00:17:45.947 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:45.947 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:45.947 ************************************ 00:17:45.947 END TEST nvmf_invalid 00:17:45.947 ************************************ 00:17:45.947 11:46:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:45.947 11:46:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:45.947 11:46:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:45.947 11:46:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:45.947 ************************************ 00:17:45.947 START TEST nvmf_connect_stress 00:17:45.947 ************************************ 00:17:45.947 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:45.947 * Looking for test storage... 
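The nvmf_invalid loop traced above builds a random string one character at a time: each iteration prints a byte value as hex with `printf %x`, renders it as a character with `echo -e '\xNN'`, and appends it with `string+=`. A minimal standalone sketch of that technique (variable names and the printable-ASCII range are illustrative, not taken from invalid.sh):

```shell
# Sketch of the hex-escape string-building technique shown in the
# xtrace above. Picks random printable-ASCII codes, converts each to
# a character via printf/echo -e, and appends it to the string.
string=""
length=16
for (( ll = 0; ll < length; ll++ )); do
  code=$(( RANDOM % 95 + 32 ))        # printable ASCII: 32..126
  hex=$(printf '%x' "$code")          # e.g. 106 -> 6a
  string+=$(echo -e "\\x${hex}")      # render \x6a -> 'j' and append
done
echo "generated: $string"
```

Quoting the appended character is unnecessary here because word splitting does not occur in an assignment, which is why the traced script only quotes shell-special characters like `*` and `<` for display.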
00:17:45.947 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:45.947 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:45.947 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:17:45.947 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:45.947 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:45.947 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:45.947 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:45.947 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:45.947 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:17:45.947 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:17:45.947 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:17:45.947 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:17:45.947 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:17:45.947 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:17:45.947 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:17:45.947 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:45.947 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:17:45.947 11:46:11 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:17:45.947 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:45.947 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:45.947 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:17:45.947 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:17:45.947 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:45.947 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:17:45.947 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:17:45.947 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:17:45.947 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:17:45.947 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:45.947 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:17:45.947 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:17:45.947 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:45.947 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:45.947 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:17:45.947 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:45.947 11:46:11 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:45.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:45.947 --rc genhtml_branch_coverage=1 00:17:45.947 --rc genhtml_function_coverage=1 00:17:45.947 --rc genhtml_legend=1 00:17:45.947 --rc geninfo_all_blocks=1 00:17:45.947 --rc geninfo_unexecuted_blocks=1 00:17:45.947 00:17:45.947 ' 00:17:45.947 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:45.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:45.947 --rc genhtml_branch_coverage=1 00:17:45.947 --rc genhtml_function_coverage=1 00:17:45.947 --rc genhtml_legend=1 00:17:45.947 --rc geninfo_all_blocks=1 00:17:45.947 --rc geninfo_unexecuted_blocks=1 00:17:45.947 00:17:45.947 ' 00:17:45.947 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:45.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:45.947 --rc genhtml_branch_coverage=1 00:17:45.948 --rc genhtml_function_coverage=1 00:17:45.948 --rc genhtml_legend=1 00:17:45.948 --rc geninfo_all_blocks=1 00:17:45.948 --rc geninfo_unexecuted_blocks=1 00:17:45.948 00:17:45.948 ' 00:17:45.948 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:45.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:45.948 --rc genhtml_branch_coverage=1 00:17:45.948 --rc genhtml_function_coverage=1 00:17:45.948 --rc genhtml_legend=1 00:17:45.948 --rc geninfo_all_blocks=1 00:17:45.948 --rc geninfo_unexecuted_blocks=1 00:17:45.948 00:17:45.948 ' 00:17:45.948 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:45.948 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 
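The `cmp_versions` trace above (scripts/common.sh, comparing lcov 1.15 against 2) splits each version on `.-:` into an array with `read -ra` and compares the fields numerically, treating missing fields as 0. A standalone sketch of that approach (the helper name `version_lt` is hypothetical):

```shell
# Field-wise dotted-version comparison, as in the scripts/common.sh
# xtrace above (function name is illustrative).
version_lt() {
  local -a ver1 ver2
  IFS=.-: read -ra ver1 <<< "$1"
  IFS=.-: read -ra ver2 <<< "$2"
  local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( v = 0; v < max; v++ )); do
    # absent fields compare as 0, so "1.15" behaves like "1.15.0"
    (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
  done
  return 1  # equal versions are not less-than
}

version_lt 1.15 2 && echo "1.15 < 2"
```

Splitting on `.-:` (rather than `.` alone) lets the same comparison handle suffixed versions such as `1.15-rc1` without special-casing.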
00:17:45.948 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:45.948 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:45.948 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:45.948 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:45.948 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:45.948 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:45.948 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:45.948 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:45.948 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:45.948 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:45.948 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:45.948 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:45.948 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:45.948 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:45.948 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:45.948 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:45.948 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:45.948 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:17:45.948 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:45.948 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:45.948 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:45.948 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.948 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.948 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.948 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:17:45.948 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.948 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:17:45.948 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:45.948 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:45.948 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:45.948 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:46.207 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:46.207 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:46.207 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:46.207 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:46.207 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:46.207 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:46.207 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 
00:17:46.207 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:46.207 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:46.207 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:46.207 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:46.207 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:46.207 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:46.207 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:46.207 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:46.207 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:46.207 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:46.207 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:17:46.207 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:48.111 11:46:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:48.111 11:46:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:17:48.111 11:46:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:48.111 11:46:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:48.111 11:46:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:48.111 11:46:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:48.111 11:46:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:48.111 11:46:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:17:48.111 11:46:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:48.111 11:46:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:17:48.111 11:46:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:17:48.111 11:46:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:17:48.111 11:46:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:17:48.111 11:46:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:17:48.111 11:46:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:17:48.111 11:46:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:48.111 11:46:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:48.111 11:46:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:48.111 11:46:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:48.111 11:46:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:48.111 11:46:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:48.111 11:46:13 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:48.111 11:46:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:48.111 11:46:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:48.111 11:46:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:48.111 11:46:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:48.111 11:46:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:48.111 11:46:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:48.111 11:46:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:48.111 11:46:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:48.111 11:46:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:48.111 11:46:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:48.111 11:46:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:48.111 11:46:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:48.111 11:46:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:48.111 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:48.111 11:46:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:48.112 11:46:13 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:48.112 11:46:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:48.112 11:46:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:48.112 11:46:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:48.112 11:46:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:48.112 11:46:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:48.112 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:48.112 11:46:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:48.112 11:46:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:48.112 11:46:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:48.112 11:46:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:48.112 11:46:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:48.112 11:46:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:48.112 11:46:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:48.112 11:46:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:48.112 11:46:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:48.112 11:46:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:48.112 11:46:13 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:48.112 11:46:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:48.112 11:46:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:48.112 11:46:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:48.112 11:46:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:48.112 11:46:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:48.112 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:48.112 11:46:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:48.112 11:46:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:48.112 11:46:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:48.112 11:46:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:48.112 11:46:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:48.112 11:46:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:48.112 11:46:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:48.112 11:46:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:48.112 11:46:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:48.112 Found net devices under 0000:0a:00.1: cvl_0_1 
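The two "Found net devices under ..." records above come from globbing the `net/` directory under each PCI function (the `nvmf/common.sh@411`-`@428` steps in the trace). A minimal sketch of that discovery pattern, run against a fake sysfs tree built in a temp directory so it works without real NICs; the layout mirrors `/sys/bus/pci/devices/<bdf>/net/<ifname>`:

```shell
set -euo pipefail

# Fake sysfs tree standing in for /sys/bus/pci/devices (assumption: two
# ports named as in the log, cvl_0_0 and cvl_0_1).
sysfs=$(mktemp -d)
mkdir -p "$sysfs/0000:0a:00.0/net/cvl_0_0" "$sysfs/0000:0a:00.1/net/cvl_0_1"

net_devs=()
for pci in "$sysfs"/*; do
    pci_net_devs=("$pci/net/"*)               # glob: one entry per interface
    pci_net_devs=("${pci_net_devs[@]##*/}")   # strip path, keep interface name
    echo "Found net devices under ${pci##*/}: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done

rm -rf "$sysfs"
```

The `##*/` parameter expansion is the same trick the test script uses to turn the globbed sysfs paths into bare interface names.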
00:17:48.112 11:46:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:48.112 11:46:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:48.112 11:46:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:17:48.112 11:46:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:48.112 11:46:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:48.112 11:46:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:48.112 11:46:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:48.112 11:46:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:48.112 11:46:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:48.112 11:46:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:48.112 11:46:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:48.112 11:46:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:48.112 11:46:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:48.112 11:46:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:48.112 11:46:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:48.112 11:46:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:48.112 11:46:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:48.112 11:46:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:48.112 11:46:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:48.112 11:46:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:48.112 11:46:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:48.112 11:46:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:48.112 11:46:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:48.112 11:46:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:48.112 11:46:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:48.371 11:46:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:48.371 11:46:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:48.371 11:46:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:48.371 11:46:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:48.371 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:48.371 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.301 ms 00:17:48.371 00:17:48.371 --- 10.0.0.2 ping statistics --- 00:17:48.371 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:48.371 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:17:48.371 11:46:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:48.371 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:48.371 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:17:48.371 00:17:48.371 --- 10.0.0.1 ping statistics --- 00:17:48.371 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:48.371 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:17:48.371 11:46:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:48.371 11:46:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:17:48.371 11:46:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:48.371 11:46:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:48.371 11:46:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:48.371 11:46:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:48.371 11:46:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:48.372 11:46:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:48.372 11:46:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:48.372 11:46:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:17:48.372 11:46:14 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:48.372 11:46:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:48.372 11:46:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:48.372 11:46:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=2948850 00:17:48.372 11:46:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:17:48.372 11:46:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 2948850 00:17:48.372 11:46:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 2948850 ']' 00:17:48.372 11:46:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:48.372 11:46:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:48.372 11:46:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:48.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:48.372 11:46:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:48.372 11:46:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:48.372 [2024-11-18 11:46:14.167531] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
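The namespace plumbing traced above (`nvmf/common.sh@265`-`@291`) moves one port of the NIC into a private netns so initiator and target traffic must cross the physical link, then verifies reachability with `ping` before `nvmf_tgt` is launched inside the namespace. A dry-run sketch of that sequence, with names and addresses taken from the log; `run` only prints here, since the real commands need root and the two interfaces from this host:

```shell
# Dry-run: echo each command instead of executing it. Drop the echo (and
# run as root, with the log's interfaces present) to perform the setup.
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"                            # target port into the netns
run ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator side
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP port
run ping -c 1 10.0.0.2                                         # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1                     # target -> initiator
```

Later commands target the namespace by prefixing them with `ip netns exec cvl_0_0_ns_spdk`, which is exactly how the log shows `nvmf_tgt` being started.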
00:17:48.372 [2024-11-18 11:46:14.167678] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:48.630 [2024-11-18 11:46:14.318466] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:48.630 [2024-11-18 11:46:14.457133] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:48.630 [2024-11-18 11:46:14.457213] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:48.630 [2024-11-18 11:46:14.457238] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:48.630 [2024-11-18 11:46:14.457262] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:48.630 [2024-11-18 11:46:14.457282] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:48.630 [2024-11-18 11:46:14.459977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:48.630 [2024-11-18 11:46:14.460033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:48.630 [2024-11-18 11:46:14.460038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:49.564 11:46:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:49.564 11:46:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:17:49.564 11:46:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:49.564 11:46:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:49.564 11:46:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:49.564 11:46:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:49.564 11:46:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:49.564 11:46:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.564 11:46:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:49.564 [2024-11-18 11:46:15.157068] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:49.564 11:46:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.564 11:46:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:49.564 11:46:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 
-- # xtrace_disable 00:17:49.564 11:46:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:49.564 11:46:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.564 11:46:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:49.564 11:46:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.564 11:46:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:49.564 [2024-11-18 11:46:15.177324] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:49.564 11:46:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.564 11:46:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:49.564 11:46:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.564 11:46:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:49.564 NULL1 00:17:49.564 11:46:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.564 11:46:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2949011 00:17:49.564 11:46:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:49.564 11:46:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:49.564 11:46:15 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:17:49.564 11:46:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:17:49.564 11:46:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:49.564 11:46:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:49.564 11:46:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:49.564 11:46:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:49.564 11:46:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:49.564 11:46:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:49.564 11:46:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:49.564 11:46:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:49.564 11:46:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:49.564 11:46:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:49.564 11:46:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:49.564 11:46:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:49.564 11:46:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:49.564 11:46:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:17:49.565 11:46:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:49.565 11:46:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:49.565 11:46:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:49.565 11:46:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:49.565 11:46:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:49.565 11:46:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:49.565 11:46:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:49.565 11:46:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:49.565 11:46:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:49.565 11:46:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:49.565 11:46:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:49.565 11:46:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:49.565 11:46:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:49.565 11:46:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:49.565 11:46:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:49.565 11:46:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:49.565 11:46:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:49.565 11:46:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:49.565 11:46:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:49.565 11:46:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:49.565 11:46:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:49.565 11:46:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:49.565 11:46:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:49.565 11:46:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:49.565 11:46:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:49.565 11:46:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:49.565 11:46:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2949011 00:17:49.565 11:46:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:49.565 11:46:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.565 11:46:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:49.823 11:46:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.823 11:46:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2949011 00:17:49.823 11:46:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:49.823 11:46:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.823 11:46:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:50.081 11:46:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.081 11:46:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2949011 00:17:50.081 11:46:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:50.081 11:46:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.081 11:46:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:50.339 11:46:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.597 11:46:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2949011 00:17:50.597 11:46:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:50.597 11:46:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.597 11:46:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:50.854 11:46:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.854 11:46:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2949011 00:17:50.854 11:46:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:50.854 11:46:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.854 11:46:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:51.112 11:46:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.112 11:46:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2949011 00:17:51.112 11:46:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:51.112 11:46:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.112 11:46:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:59.668 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:17:59.926 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.926 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2949011 00:17:59.926 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2949011) - No such process 00:17:59.926 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2949011 00:17:59.926 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:59.926 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:17:59.926 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:17:59.926 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:59.926 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:17:59.926 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:59.926 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:17:59.926 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:59.926 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:59.926 rmmod nvme_tcp 00:17:59.926 rmmod nvme_fabrics 00:17:59.926 rmmod nvme_keyring 00:17:59.926 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:59.926 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:17:59.926 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@129 -- # return 0 00:17:59.926 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 2948850 ']' 00:17:59.926 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 2948850 00:17:59.926 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 2948850 ']' 00:17:59.926 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 2948850 00:17:59.927 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:17:59.927 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:59.927 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2948850 00:17:59.927 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:59.927 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:59.927 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2948850' 00:17:59.927 killing process with pid 2948850 00:17:59.927 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 2948850 00:17:59.927 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 2948850 00:18:01.302 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:01.302 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:01.302 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:01.302 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- nvmf/common.sh@297 -- # iptr 00:18:01.302 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:18:01.302 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:01.302 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:18:01.302 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:01.302 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:01.302 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:01.302 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:01.302 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:03.206 11:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:03.206 00:18:03.206 real 0m17.185s 00:18:03.206 user 0m42.754s 00:18:03.206 sys 0m6.057s 00:18:03.206 11:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:03.206 11:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:03.206 ************************************ 00:18:03.206 END TEST nvmf_connect_stress 00:18:03.206 ************************************ 00:18:03.206 11:46:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:18:03.206 11:46:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:03.206 11:46:28 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:18:03.206 11:46:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:03.206 ************************************ 00:18:03.206 START TEST nvmf_fused_ordering 00:18:03.206 ************************************ 00:18:03.206 11:46:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:18:03.206 * Looking for test storage... 00:18:03.206 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:03.206 11:46:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:03.206 11:46:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lcov --version 00:18:03.206 11:46:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:03.206 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:03.206 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:03.206 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:03.206 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:03.206 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:18:03.206 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:18:03.206 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:18:03.206 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:18:03.206 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- scripts/common.sh@338 -- # local 'op=<' 00:18:03.206 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:18:03.206 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:18:03.206 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:03.206 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:18:03.206 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:18:03.206 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:03.206 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:03.206 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:18:03.206 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:18:03.206 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:03.206 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:18:03.206 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:18:03.206 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:18:03.206 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:18:03.206 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:03.206 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:18:03.206 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:18:03.207 11:46:29 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:03.207 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:03.207 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:18:03.207 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:03.207 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:03.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:03.207 --rc genhtml_branch_coverage=1 00:18:03.207 --rc genhtml_function_coverage=1 00:18:03.207 --rc genhtml_legend=1 00:18:03.207 --rc geninfo_all_blocks=1 00:18:03.207 --rc geninfo_unexecuted_blocks=1 00:18:03.207 00:18:03.207 ' 00:18:03.207 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:03.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:03.207 --rc genhtml_branch_coverage=1 00:18:03.207 --rc genhtml_function_coverage=1 00:18:03.207 --rc genhtml_legend=1 00:18:03.207 --rc geninfo_all_blocks=1 00:18:03.207 --rc geninfo_unexecuted_blocks=1 00:18:03.207 00:18:03.207 ' 00:18:03.207 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:03.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:03.207 --rc genhtml_branch_coverage=1 00:18:03.207 --rc genhtml_function_coverage=1 00:18:03.207 --rc genhtml_legend=1 00:18:03.207 --rc geninfo_all_blocks=1 00:18:03.207 --rc geninfo_unexecuted_blocks=1 00:18:03.207 00:18:03.207 ' 00:18:03.207 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:03.207 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:18:03.207 --rc genhtml_branch_coverage=1 00:18:03.207 --rc genhtml_function_coverage=1 00:18:03.207 --rc genhtml_legend=1 00:18:03.207 --rc geninfo_all_blocks=1 00:18:03.207 --rc geninfo_unexecuted_blocks=1 00:18:03.207 00:18:03.207 ' 00:18:03.207 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:03.207 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:18:03.207 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:03.207 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:03.207 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:03.207 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:03.207 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:03.207 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:03.207 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:03.207 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:03.207 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:03.207 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:03.207 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:03.207 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # 
NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:03.207 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:03.207 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:03.207 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:03.207 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:03.207 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:03.207 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:18:03.466 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:03.466 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:03.466 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:03.466 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.466 11:46:29 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.466 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.466 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:18:03.466 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.466 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:18:03.466 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:03.466 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:03.466 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:03.466 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:03.466 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:03.466 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:03.466 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:03.466 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:03.466 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:03.466 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:03.466 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 
00:18:03.466 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:03.466 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:03.466 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:03.466 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:03.466 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:03.466 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:03.466 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:03.466 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:03.466 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:03.466 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:03.466 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:18:03.466 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:05.366 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:05.366 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:18:05.366 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:05.366 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:05.366 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:05.366 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:05.366 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:05.366 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:18:05.366 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:05.366 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:18:05.366 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:18:05.366 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:18:05.366 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:18:05.366 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:18:05.366 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:18:05.366 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:05.366 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:05.366 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:05.366 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:05.366 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:05.366 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:05.366 11:46:31 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:05.366 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:05.366 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:05.366 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:05.366 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:05.366 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:05.366 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:05.366 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:05.366 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:05.366 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:05.366 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:05.366 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:05.366 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:05.366 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:05.366 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:05.366 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:05.366 11:46:31 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:05.366 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:05.366 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:05.366 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:05.366 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:05.366 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:05.366 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:05.366 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:05.366 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:05.366 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:05.366 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:05.366 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:05.366 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:05.366 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:05.366 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:05.366 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:05.366 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:05.366 11:46:31 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:05.366 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:05.366 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:05.366 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:05.366 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:05.366 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:05.366 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:05.366 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:05.366 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:05.366 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:05.366 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:05.366 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:05.366 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:05.366 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:05.366 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:05.366 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:05.366 Found net devices under 0000:0a:00.1: cvl_0_1 
00:18:05.366 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:05.366 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:05.366 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:18:05.366 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:05.366 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:05.366 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:05.366 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:05.366 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:05.366 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:05.367 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:05.367 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:05.367 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:05.367 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:05.367 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:05.367 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:05.367 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:05.367 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:05.367 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:05.367 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:05.367 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:05.367 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:05.367 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:05.367 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:05.367 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:05.367 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:05.625 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:05.625 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:05.625 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:05.625 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:05.625 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:05.625 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:18:05.625 00:18:05.625 --- 10.0.0.2 ping statistics --- 00:18:05.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:05.625 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:18:05.625 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:05.625 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:05.625 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.096 ms 00:18:05.625 00:18:05.625 --- 10.0.0.1 ping statistics --- 00:18:05.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:05.625 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:18:05.625 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:05.625 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:18:05.625 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:05.625 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:05.625 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:05.625 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:05.625 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:05.625 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:05.625 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:05.625 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:18:05.625 11:46:31 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:05.625 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:05.625 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:05.625 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=2952286 00:18:05.625 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:05.625 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 2952286 00:18:05.625 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 2952286 ']' 00:18:05.625 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:05.625 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:05.625 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:05.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:05.625 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:05.625 11:46:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:05.625 [2024-11-18 11:46:31.393724] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:18:05.625 [2024-11-18 11:46:31.393889] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:05.884 [2024-11-18 11:46:31.546192] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:05.884 [2024-11-18 11:46:31.684511] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:05.884 [2024-11-18 11:46:31.684619] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:05.884 [2024-11-18 11:46:31.684645] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:05.884 [2024-11-18 11:46:31.684671] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:05.884 [2024-11-18 11:46:31.684690] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:05.884 [2024-11-18 11:46:31.686339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:06.918 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:06.918 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:18:06.918 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:06.918 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:06.918 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:06.918 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:06.918 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:06.918 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.918 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:06.918 [2024-11-18 11:46:32.433324] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:06.918 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.918 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:06.918 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.919 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:06.919 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.919 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:06.919 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.919 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:06.919 [2024-11-18 11:46:32.449640] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:06.919 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.919 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:18:06.919 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.919 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:06.919 NULL1 00:18:06.919 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.919 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:18:06.919 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.919 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:06.919 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.919 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:18:06.919 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.919 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:06.919 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.919 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:18:06.919 [2024-11-18 11:46:32.519928] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:18:06.919 [2024-11-18 11:46:32.520016] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2952450 ] 00:18:07.488 Attached to nqn.2016-06.io.spdk:cnode1 00:18:07.488 Namespace ID: 1 size: 1GB 00:18:07.488 fused_ordering(0) 00:18:07.488 fused_ordering(1) 00:18:07.488 fused_ordering(2) 00:18:07.488 fused_ordering(3) 00:18:07.488 fused_ordering(4) 00:18:07.488 fused_ordering(5) 00:18:07.488 fused_ordering(6) 00:18:07.488 fused_ordering(7) 00:18:07.488 fused_ordering(8) 00:18:07.488 fused_ordering(9) 00:18:07.488 fused_ordering(10) 00:18:07.488 fused_ordering(11) 00:18:07.488 fused_ordering(12) 00:18:07.488 fused_ordering(13) 00:18:07.488 fused_ordering(14) 00:18:07.488 fused_ordering(15) 00:18:07.488 fused_ordering(16) 00:18:07.488 fused_ordering(17) 00:18:07.488 fused_ordering(18) 00:18:07.488 fused_ordering(19) 00:18:07.488 fused_ordering(20) 00:18:07.488 fused_ordering(21) 00:18:07.488 fused_ordering(22) 00:18:07.488 fused_ordering(23) 00:18:07.488 fused_ordering(24) 00:18:07.488 fused_ordering(25) 00:18:07.488 fused_ordering(26) 00:18:07.488 fused_ordering(27) 00:18:07.488 
fused_ordering(28) 00:18:07.488 fused_ordering(29) 00:18:07.488 fused_ordering(30) 00:18:07.488 fused_ordering(31) 00:18:07.488 fused_ordering(32) 00:18:07.488 fused_ordering(33) 00:18:07.488 fused_ordering(34) 00:18:07.488 fused_ordering(35) 00:18:07.488 fused_ordering(36) 00:18:07.488 fused_ordering(37) 00:18:07.488 fused_ordering(38) 00:18:07.488 fused_ordering(39) 00:18:07.488 fused_ordering(40) 00:18:07.488 fused_ordering(41) 00:18:07.488 fused_ordering(42) 00:18:07.488 fused_ordering(43) 00:18:07.488 fused_ordering(44) 00:18:07.488 fused_ordering(45) 00:18:07.488 fused_ordering(46) 00:18:07.488 fused_ordering(47) 00:18:07.488 fused_ordering(48) 00:18:07.488 fused_ordering(49) 00:18:07.488 fused_ordering(50) 00:18:07.488 fused_ordering(51) 00:18:07.488 fused_ordering(52) 00:18:07.488 fused_ordering(53) 00:18:07.488 fused_ordering(54) 00:18:07.488 fused_ordering(55) 00:18:07.488 fused_ordering(56) 00:18:07.488 fused_ordering(57) 00:18:07.488 fused_ordering(58) 00:18:07.488 fused_ordering(59) 00:18:07.488 fused_ordering(60) 00:18:07.488 fused_ordering(61) 00:18:07.488 fused_ordering(62) 00:18:07.488 fused_ordering(63) 00:18:07.488 fused_ordering(64) 00:18:07.488 fused_ordering(65) 00:18:07.488 fused_ordering(66) 00:18:07.488 fused_ordering(67) 00:18:07.488 fused_ordering(68) 00:18:07.488 fused_ordering(69) 00:18:07.488 fused_ordering(70) 00:18:07.488 fused_ordering(71) 00:18:07.488 fused_ordering(72) 00:18:07.488 fused_ordering(73) 00:18:07.488 fused_ordering(74) 00:18:07.488 fused_ordering(75) 00:18:07.488 fused_ordering(76) 00:18:07.488 fused_ordering(77) 00:18:07.488 fused_ordering(78) 00:18:07.488 fused_ordering(79) 00:18:07.488 fused_ordering(80) 00:18:07.488 fused_ordering(81) 00:18:07.488 fused_ordering(82) 00:18:07.488 fused_ordering(83) 00:18:07.488 fused_ordering(84) 00:18:07.488 fused_ordering(85) 00:18:07.488 fused_ordering(86) 00:18:07.488 fused_ordering(87) 00:18:07.488 fused_ordering(88) 00:18:07.488 fused_ordering(89) 00:18:07.488 
fused_ordering(90) 00:18:07.488 fused_ordering(91) 00:18:07.488 fused_ordering(92) 00:18:07.488 fused_ordering(93) 00:18:07.488 fused_ordering(94) 00:18:07.488 fused_ordering(95) 00:18:07.488 fused_ordering(96) 00:18:07.488 fused_ordering(97) 00:18:07.488 fused_ordering(98) 00:18:07.488 fused_ordering(99) 00:18:07.488 fused_ordering(100) 00:18:07.488 fused_ordering(101) 00:18:07.488 fused_ordering(102) 00:18:07.488 fused_ordering(103) 00:18:07.488 fused_ordering(104) 00:18:07.488 fused_ordering(105) 00:18:07.488 fused_ordering(106) 00:18:07.488 fused_ordering(107) 00:18:07.488 fused_ordering(108) 00:18:07.488 fused_ordering(109) 00:18:07.488 fused_ordering(110) 00:18:07.488 fused_ordering(111) 00:18:07.488 fused_ordering(112) 00:18:07.488 fused_ordering(113) 00:18:07.488 fused_ordering(114) 00:18:07.488 fused_ordering(115) 00:18:07.488 fused_ordering(116) 00:18:07.488 fused_ordering(117) 00:18:07.488 fused_ordering(118) 00:18:07.488 fused_ordering(119) 00:18:07.488 fused_ordering(120) 00:18:07.488 fused_ordering(121) 00:18:07.488 fused_ordering(122) 00:18:07.488 fused_ordering(123) 00:18:07.488 fused_ordering(124) 00:18:07.488 fused_ordering(125) 00:18:07.488 fused_ordering(126) 00:18:07.488 fused_ordering(127) 00:18:07.488 fused_ordering(128) 00:18:07.488 fused_ordering(129) 00:18:07.488 fused_ordering(130) 00:18:07.488 fused_ordering(131) 00:18:07.488 fused_ordering(132) 00:18:07.488 fused_ordering(133) 00:18:07.488 fused_ordering(134) 00:18:07.488 fused_ordering(135) 00:18:07.488 fused_ordering(136) 00:18:07.488 fused_ordering(137) 00:18:07.488 fused_ordering(138) 00:18:07.488 fused_ordering(139) 00:18:07.488 fused_ordering(140) 00:18:07.488 fused_ordering(141) 00:18:07.488 fused_ordering(142) 00:18:07.488 fused_ordering(143) 00:18:07.488 fused_ordering(144) 00:18:07.488 fused_ordering(145) 00:18:07.488 fused_ordering(146) 00:18:07.488 fused_ordering(147) 00:18:07.488 fused_ordering(148) 00:18:07.488 fused_ordering(149) 00:18:07.488 fused_ordering(150) 
00:18:07.488 fused_ordering(151) 00:18:07.488 fused_ordering(152) 00:18:07.488 fused_ordering(153) 00:18:07.488 fused_ordering(154) 00:18:07.488 fused_ordering(155) 00:18:07.488 fused_ordering(156) 00:18:07.488 fused_ordering(157) 00:18:07.488 fused_ordering(158) 00:18:07.488 fused_ordering(159) 00:18:07.488 fused_ordering(160) 00:18:07.488 fused_ordering(161) 00:18:07.488 fused_ordering(162) 00:18:07.488 fused_ordering(163) 00:18:07.488 fused_ordering(164) 00:18:07.488 fused_ordering(165) 00:18:07.488 fused_ordering(166) 00:18:07.488 fused_ordering(167) 00:18:07.488 fused_ordering(168) 00:18:07.488 fused_ordering(169) 00:18:07.488 fused_ordering(170) 00:18:07.488 fused_ordering(171) 00:18:07.488 fused_ordering(172) 00:18:07.488 fused_ordering(173) 00:18:07.488 fused_ordering(174) 00:18:07.488 fused_ordering(175) 00:18:07.488 fused_ordering(176) 00:18:07.488 fused_ordering(177) 00:18:07.488 fused_ordering(178) 00:18:07.488 fused_ordering(179) 00:18:07.488 fused_ordering(180) 00:18:07.488 fused_ordering(181) 00:18:07.488 fused_ordering(182) 00:18:07.488 fused_ordering(183) 00:18:07.488 fused_ordering(184) 00:18:07.488 fused_ordering(185) 00:18:07.488 fused_ordering(186) 00:18:07.488 fused_ordering(187) 00:18:07.488 fused_ordering(188) 00:18:07.488 fused_ordering(189) 00:18:07.488 fused_ordering(190) 00:18:07.488 fused_ordering(191) 00:18:07.488 fused_ordering(192) 00:18:07.488 fused_ordering(193) 00:18:07.488 fused_ordering(194) 00:18:07.488 fused_ordering(195) 00:18:07.488 fused_ordering(196) 00:18:07.488 fused_ordering(197) 00:18:07.488 fused_ordering(198) 00:18:07.488 fused_ordering(199) 00:18:07.488 fused_ordering(200) 00:18:07.488 fused_ordering(201) 00:18:07.488 fused_ordering(202) 00:18:07.488 fused_ordering(203) 00:18:07.488 fused_ordering(204) 00:18:07.488 fused_ordering(205) 00:18:08.057 fused_ordering(206) 00:18:08.057 fused_ordering(207) 00:18:08.057 fused_ordering(208) 00:18:08.057 fused_ordering(209) 00:18:08.057 fused_ordering(210) 00:18:08.057 
00:18:08.057 fused_ordering(211) ... 00:18:10.504 fused_ordering(997) [repetitive per-iteration fused_ordering output condensed: iterations 211-997, timestamps advancing 00:18:08.057 -> 00:18:08.628 -> 00:18:09.566 -> 00:18:10.504]
00:18:10.504 fused_ordering(998) 00:18:10.504 fused_ordering(999) 00:18:10.504 fused_ordering(1000) 00:18:10.504 fused_ordering(1001) 00:18:10.504 fused_ordering(1002) 00:18:10.504 fused_ordering(1003) 00:18:10.504 fused_ordering(1004) 00:18:10.504 fused_ordering(1005) 00:18:10.504 fused_ordering(1006) 00:18:10.504 fused_ordering(1007) 00:18:10.504 fused_ordering(1008) 00:18:10.504 fused_ordering(1009) 00:18:10.504 fused_ordering(1010) 00:18:10.504 fused_ordering(1011) 00:18:10.504 fused_ordering(1012) 00:18:10.504 fused_ordering(1013) 00:18:10.504 fused_ordering(1014) 00:18:10.504 fused_ordering(1015) 00:18:10.504 fused_ordering(1016) 00:18:10.504 fused_ordering(1017) 00:18:10.504 fused_ordering(1018) 00:18:10.504 fused_ordering(1019) 00:18:10.504 fused_ordering(1020) 00:18:10.504 fused_ordering(1021) 00:18:10.504 fused_ordering(1022) 00:18:10.504 fused_ordering(1023) 00:18:10.504 11:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:18:10.504 11:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:18:10.504 11:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:10.504 11:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:18:10.504 11:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:10.504 11:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:18:10.504 11:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:10.504 11:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:10.504 rmmod nvme_tcp 00:18:10.504 rmmod nvme_fabrics 00:18:10.504 rmmod nvme_keyring 00:18:10.504 11:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:18:10.504 11:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:18:10.504 11:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:18:10.504 11:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 2952286 ']' 00:18:10.504 11:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 2952286 00:18:10.504 11:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 2952286 ']' 00:18:10.504 11:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 2952286 00:18:10.504 11:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:18:10.504 11:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:10.504 11:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2952286 00:18:10.504 11:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:10.504 11:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:10.504 11:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2952286' 00:18:10.504 killing process with pid 2952286 00:18:10.504 11:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 2952286 00:18:10.504 11:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 2952286 00:18:11.886 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:11.886 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == 
\t\c\p ]] 00:18:11.886 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:11.886 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:18:11.886 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:18:11.886 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:11.886 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:18:11.886 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:11.886 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:11.886 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:11.886 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:11.886 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:13.797 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:13.797 00:18:13.797 real 0m10.493s 00:18:13.797 user 0m8.989s 00:18:13.797 sys 0m3.716s 00:18:13.797 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:13.797 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:13.797 ************************************ 00:18:13.797 END TEST nvmf_fused_ordering 00:18:13.797 ************************************ 00:18:13.797 11:46:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:18:13.797 11:46:39 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:13.797 11:46:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:13.797 11:46:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:13.797 ************************************ 00:18:13.797 START TEST nvmf_ns_masking 00:18:13.797 ************************************ 00:18:13.797 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:18:13.797 * Looking for test storage... 00:18:13.797 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:13.797 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:13.797 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lcov --version 00:18:13.797 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:13.797 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:13.797 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:13.797 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:13.797 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:13.797 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:18:13.797 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:18:13.797 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:18:13.797 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:18:13.797 11:46:39 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:18:13.797 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:18:13.797 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:18:13.797 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:13.797 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:18:13.797 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:18:13.797 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:13.797 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:13.797 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:18:13.797 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:18:13.797 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:13.797 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:18:13.797 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:18:13.797 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:18:13.797 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:18:13.797 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:13.797 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:18:13.797 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:18:13.797 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:13.797 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:13.797 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:18:13.797 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:13.797 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:13.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:13.797 --rc genhtml_branch_coverage=1 00:18:13.797 --rc genhtml_function_coverage=1 00:18:13.797 --rc genhtml_legend=1 00:18:13.797 --rc geninfo_all_blocks=1 00:18:13.797 --rc geninfo_unexecuted_blocks=1 00:18:13.797 00:18:13.797 ' 00:18:13.797 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:13.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:13.797 --rc genhtml_branch_coverage=1 00:18:13.797 --rc genhtml_function_coverage=1 00:18:13.797 --rc genhtml_legend=1 00:18:13.797 --rc geninfo_all_blocks=1 00:18:13.797 --rc geninfo_unexecuted_blocks=1 00:18:13.797 00:18:13.797 ' 00:18:13.797 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:13.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:13.797 --rc genhtml_branch_coverage=1 00:18:13.797 --rc genhtml_function_coverage=1 00:18:13.797 --rc genhtml_legend=1 00:18:13.797 --rc geninfo_all_blocks=1 00:18:13.797 --rc geninfo_unexecuted_blocks=1 00:18:13.797 00:18:13.797 ' 00:18:13.797 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:13.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:13.797 --rc genhtml_branch_coverage=1 00:18:13.797 --rc 
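The `lt 1.15 2` / `cmp_versions` trace above splits each version string on `.`, `-`, and `:` into an array (`IFS=.-:` with `read -ra`), then compares components numerically left to right. A simplified sketch of that strictly-less-than comparison (the real `scripts/common.sh` supports more operators and validates each component with `decimal`):

```shell
# Return success if version $1 is strictly less than version $2,
# comparing dot/dash/colon-separated components numerically, left to right.
version_lt() {
    local -a ver1 ver2
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    local v len=${#ver1[@]}
    (( ${#ver2[@]} > len )) && len=${#ver2[@]}
    for (( v = 0; v < len; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing components count as 0
        (( a > b )) && return 1
        (( a < b )) && return 0
    done
    return 1    # versions are equal: not strictly less
}
```

Note the numeric (not lexicographic) semantics this implies: `1.2` is less than `1.15`, because 2 < 15 as integers.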
genhtml_function_coverage=1 00:18:13.797 --rc genhtml_legend=1 00:18:13.797 --rc geninfo_all_blocks=1 00:18:13.797 --rc geninfo_unexecuted_blocks=1 00:18:13.797 00:18:13.797 ' 00:18:13.797 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:13.797 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:18:13.797 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:13.797 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:13.797 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:13.797 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:13.797 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:13.797 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:13.797 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:13.797 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:13.797 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:13.797 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:13.797 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:13.797 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:13.797 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:13.797 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:13.797 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:13.797 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:13.797 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:13.797 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:18:13.797 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:13.797 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:13.797 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:13.797 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.797 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.798 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.798 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:18:13.798 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.798 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:18:13.798 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:13.798 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:13.798 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:13.798 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:13.798 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:13.798 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:13.798 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:13.798 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:13.798 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:13.798 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:13.798 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:13.798 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:18:13.798 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:18:13.798 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:18:13.798 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=82a66fbf-4d18-4f32-96fa-31f2bce4de2e 00:18:13.798 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:18:13.798 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=03dd1882-637e-4b49-be82-db163a43d619 00:18:13.798 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:18:13.798 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:18:13.798 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:18:13.798 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:18:13.798 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=a112e50d-371c-427c-8e0b-6df9491c2ea7 00:18:13.798 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:18:13.798 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:13.798 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:13.798 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:13.798 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g 
is_hw=no 00:18:13.798 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:13.798 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:13.798 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:13.798 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:13.798 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:13.798 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:13.798 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:18:13.798 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:16.337 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:16.337 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:18:16.337 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:16.337 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:16.337 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:16.337 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:16.337 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:16.337 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:18:16.337 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:16.337 11:46:41 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:18:16.337 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:18:16.337 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:18:16.337 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:18:16.337 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:18:16.337 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:18:16.337 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:16.337 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:16.337 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:16.337 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:16.337 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:16.337 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:16.337 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:16.337 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:16.337 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:16.337 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:16.337 11:46:41 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:16.337 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:16.337 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:16.337 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:16.337 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:16.337 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:16.337 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:16.337 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:16.337 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:16.337 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:16.337 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:16.337 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:16.337 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:16.337 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:16.337 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:16.337 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:16.337 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:16.337 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:16.337 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:16.337 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:16.337 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:16.337 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:16.338 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:16.338 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:16.338 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:16.338 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:16.338 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:16.338 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:16.338 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:16.338 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:16.338 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:16.338 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:16.338 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:16.338 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:16.338 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: 
cvl_0_0' 00:18:16.338 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:16.338 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:16.338 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:16.338 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:16.338 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:16.338 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:16.338 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:16.338 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:16.338 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:16.338 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:16.338 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:16.338 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:16.338 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:16.338 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:18:16.338 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:16.338 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:16.338 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:16.338 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:16.338 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:16.338 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:16.338 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:16.338 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:16.338 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:16.338 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:16.338 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:16.338 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:16.338 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:16.338 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:16.338 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:16.338 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:16.338 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:16.338 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:16.338 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:16.338 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:16.338 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:16.338 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:16.338 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:16.338 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:16.338 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:16.338 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:16.338 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:16.338 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.161 ms 00:18:16.338 00:18:16.338 --- 10.0.0.2 ping statistics --- 00:18:16.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:16.338 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:18:16.338 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:16.338 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:16.338 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:18:16.338 00:18:16.338 --- 10.0.0.1 ping statistics --- 00:18:16.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:16.338 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:18:16.338 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:16.338 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:18:16.338 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:16.338 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:16.338 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:16.338 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:16.338 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:16.338 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:16.338 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:16.338 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:18:16.338 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:16.338 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:16.338 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:16.338 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=2955033 00:18:16.338 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:16.338 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 2955033 00:18:16.338 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 2955033 ']' 00:18:16.338 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:16.338 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:16.338 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:16.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:16.338 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:16.338 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:16.338 [2024-11-18 11:46:41.899970] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:18:16.338 [2024-11-18 11:46:41.900104] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:16.338 [2024-11-18 11:46:42.050586] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:16.338 [2024-11-18 11:46:42.190287] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:16.338 [2024-11-18 11:46:42.190391] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:16.338 [2024-11-18 11:46:42.190417] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:16.338 [2024-11-18 11:46:42.190442] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:16.338 [2024-11-18 11:46:42.190463] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:16.338 [2024-11-18 11:46:42.192129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:17.275 11:46:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:17.275 11:46:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:18:17.275 11:46:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:17.275 11:46:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:17.275 11:46:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:17.275 11:46:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:17.275 11:46:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:17.533 [2024-11-18 11:46:43.186910] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:17.533 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:18:17.533 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:18:17.533 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 
00:18:17.791 Malloc1 00:18:17.791 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:18.048 Malloc2 00:18:18.049 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:18.618 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:18:18.618 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:18.877 [2024-11-18 11:46:44.705918] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:18.877 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:18:18.877 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I a112e50d-371c-427c-8e0b-6df9491c2ea7 -a 10.0.0.2 -s 4420 -i 4 00:18:19.137 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:18:19.137 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:18:19.137 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:19.137 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:19.138 11:46:44 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:18:21.046 11:46:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:21.046 11:46:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:21.046 11:46:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:21.046 11:46:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:21.046 11:46:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:21.046 11:46:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:18:21.046 11:46:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:21.046 11:46:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:21.046 11:46:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:21.046 11:46:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:21.046 11:46:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:18:21.046 11:46:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:21.046 11:46:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:21.305 [ 0]:0x1 00:18:21.305 11:46:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:21.305 11:46:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:21.305 
11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=90fc770425094edaa74b64d13972550f 00:18:21.305 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 90fc770425094edaa74b64d13972550f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:21.305 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:18:21.563 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:18:21.563 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:21.563 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:21.563 [ 0]:0x1 00:18:21.563 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:21.563 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:21.563 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=90fc770425094edaa74b64d13972550f 00:18:21.563 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 90fc770425094edaa74b64d13972550f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:21.563 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:18:21.563 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:21.563 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:21.563 [ 1]:0x2 00:18:21.563 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 
00:18:21.563 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:21.563 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=09293c11e43a4d649d694c1b08fe8a9c 00:18:21.564 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 09293c11e43a4d649d694c1b08fe8a9c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:21.564 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:18:21.564 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:21.821 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:21.821 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:22.388 11:46:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:18:22.647 11:46:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:18:22.647 11:46:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I a112e50d-371c-427c-8e0b-6df9491c2ea7 -a 10.0.0.2 -s 4420 -i 4 00:18:22.648 11:46:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:18:22.648 11:46:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:18:22.648 11:46:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:22.648 11:46:48 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:18:22.648 11:46:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:18:22.648 11:46:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:18:25.184 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:25.184 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:25.184 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:25.184 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:25.184 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:25.184 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:18:25.184 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:25.184 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:25.184 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:25.184 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:25.184 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:18:25.184 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:25.184 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 
00:18:25.184 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:18:25.184 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:25.184 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:18:25.184 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:25.184 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:18:25.184 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:25.184 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:25.184 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:25.184 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:25.184 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:25.184 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:25.184 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:25.184 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:25.184 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:25.184 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:25.184 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- 
# ns_is_visible 0x2 00:18:25.184 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:25.184 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:25.184 [ 0]:0x2 00:18:25.184 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:25.184 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:25.184 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=09293c11e43a4d649d694c1b08fe8a9c 00:18:25.184 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 09293c11e43a4d649d694c1b08fe8a9c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:25.184 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:25.184 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:18:25.184 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:25.184 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:25.184 [ 0]:0x1 00:18:25.184 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:25.184 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:25.184 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=90fc770425094edaa74b64d13972550f 00:18:25.184 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 90fc770425094edaa74b64d13972550f != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:25.184 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:18:25.184 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:25.184 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:25.184 [ 1]:0x2 00:18:25.184 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:25.184 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:25.184 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=09293c11e43a4d649d694c1b08fe8a9c 00:18:25.184 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 09293c11e43a4d649d694c1b08fe8a9c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:25.184 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:25.754 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:18:25.754 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:25.754 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:18:25.754 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:18:25.754 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:25.754 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t 
ns_is_visible 00:18:25.754 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:25.754 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:18:25.754 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:25.754 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:25.754 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:25.754 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:25.754 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:25.754 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:25.754 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:25.754 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:25.754 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:25.754 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:25.754 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:18:25.754 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:25.754 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:25.754 [ 0]:0x2 00:18:25.754 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq 
-r .nguid 00:18:25.754 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:25.754 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=09293c11e43a4d649d694c1b08fe8a9c 00:18:25.754 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 09293c11e43a4d649d694c1b08fe8a9c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:25.754 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:18:25.754 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:25.754 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:25.754 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:26.012 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:18:26.013 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I a112e50d-371c-427c-8e0b-6df9491c2ea7 -a 10.0.0.2 -s 4420 -i 4 00:18:26.271 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:18:26.271 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:18:26.271 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:26.271 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:18:26.271 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:18:26.271 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:18:28.176 11:46:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:28.176 11:46:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:28.176 11:46:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:28.176 11:46:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:18:28.176 11:46:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:28.176 11:46:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:18:28.176 11:46:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:28.177 11:46:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:28.177 11:46:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:28.177 11:46:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:28.177 11:46:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:18:28.177 11:46:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:28.177 11:46:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:28.177 [ 0]:0x1 00:18:28.177 11:46:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:28.177 11:46:53 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:28.177 11:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=90fc770425094edaa74b64d13972550f 00:18:28.177 11:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 90fc770425094edaa74b64d13972550f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:28.177 11:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:18:28.177 11:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:28.177 11:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:28.177 [ 1]:0x2 00:18:28.177 11:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:28.177 11:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:28.435 11:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=09293c11e43a4d649d694c1b08fe8a9c 00:18:28.435 11:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 09293c11e43a4d649d694c1b08fe8a9c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:28.435 11:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:28.694 11:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:18:28.694 11:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:28.694 11:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:18:28.694 
11:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:18:28.694 11:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:28.694 11:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:18:28.694 11:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:28.694 11:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:18:28.694 11:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:28.694 11:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:28.694 11:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:28.694 11:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:28.694 11:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:28.694 11:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:28.694 11:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:28.694 11:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:28.694 11:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:28.694 11:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:28.694 11:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # 
ns_is_visible 0x2 00:18:28.694 11:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:28.694 11:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:28.694 [ 0]:0x2 00:18:28.694 11:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:28.694 11:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:28.694 11:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=09293c11e43a4d649d694c1b08fe8a9c 00:18:28.694 11:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 09293c11e43a4d649d694c1b08fe8a9c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:28.694 11:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:28.694 11:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:28.694 11:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:28.694 11:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:28.694 11:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:28.694 11:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:28.694 11:46:54 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:28.694 11:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:28.694 11:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:28.694 11:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:28.694 11:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:18:28.694 11:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:28.952 [2024-11-18 11:46:54.754408] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:18:28.952 request: 00:18:28.952 { 00:18:28.952 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:28.952 "nsid": 2, 00:18:28.952 "host": "nqn.2016-06.io.spdk:host1", 00:18:28.952 "method": "nvmf_ns_remove_host", 00:18:28.953 "req_id": 1 00:18:28.953 } 00:18:28.953 Got JSON-RPC error response 00:18:28.953 response: 00:18:28.953 { 00:18:28.953 "code": -32602, 00:18:28.953 "message": "Invalid parameters" 00:18:28.953 } 00:18:28.953 11:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:28.953 11:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:28.953 11:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:28.953 11:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:28.953 11:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:18:28.953 11:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:28.953 11:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:18:28.953 11:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:18:28.953 11:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:28.953 11:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:18:28.953 11:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:28.953 11:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:18:28.953 11:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:28.953 11:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:28.953 11:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:28.953 11:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:28.953 11:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:28.953 11:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:28.953 11:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:28.953 11:46:54 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:28.953 11:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:28.953 11:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:28.953 11:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:18:28.953 11:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:28.953 11:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:28.953 [ 0]:0x2 00:18:28.953 11:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:28.953 11:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:29.211 11:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=09293c11e43a4d649d694c1b08fe8a9c 00:18:29.211 11:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 09293c11e43a4d649d694c1b08fe8a9c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:29.211 11:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:18:29.211 11:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:29.211 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:29.211 11:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=2956669 00:18:29.211 11:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:18:29.211 11:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 2956669 
/var/tmp/host.sock 00:18:29.211 11:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 2956669 ']' 00:18:29.211 11:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:18:29.211 11:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:29.211 11:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:18:29.211 11:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:18:29.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:18:29.211 11:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:29.211 11:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:29.211 [2024-11-18 11:46:55.006521] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:18:29.211 [2024-11-18 11:46:55.006666] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2956669 ] 00:18:29.469 [2024-11-18 11:46:55.143367] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:29.469 [2024-11-18 11:46:55.280636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:30.403 11:46:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:30.403 11:46:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:18:30.403 11:46:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:30.971 11:46:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:18:31.229 11:46:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 82a66fbf-4d18-4f32-96fa-31f2bce4de2e 00:18:31.229 11:46:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:18:31.229 11:46:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 82A66FBF4D184F3296FA31F2BCE4DE2E -i 00:18:31.486 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 03dd1882-637e-4b49-be82-db163a43d619 00:18:31.486 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:18:31.486 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 03DD1882637E4B49BE82DB163A43D619 -i 00:18:31.743 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:32.001 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:18:32.259 11:46:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:18:32.259 11:46:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:18:32.826 nvme0n1 00:18:32.826 11:46:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:18:32.826 11:46:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:18:33.084 nvme1n2 00:18:33.084 11:46:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:18:33.084 11:46:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:18:33.084 11:46:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:18:33.084 11:46:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:18:33.084 11:46:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:18:33.651 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:18:33.651 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:18:33.651 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:18:33.651 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:18:33.651 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 82a66fbf-4d18-4f32-96fa-31f2bce4de2e == \8\2\a\6\6\f\b\f\-\4\d\1\8\-\4\f\3\2\-\9\6\f\a\-\3\1\f\2\b\c\e\4\d\e\2\e ]] 00:18:33.651 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:18:33.651 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:18:33.651 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:18:33.909 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 03dd1882-637e-4b49-be82-db163a43d619 == \0\3\d\d\1\8\8\2\-\6\3\7\e\-\4\b\4\9\-\b\e\8\2\-\d\b\1\6\3\a\4\3\d\6\1\9 ]] 00:18:33.909 11:46:59 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:34.168 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:18:34.734 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 82a66fbf-4d18-4f32-96fa-31f2bce4de2e 00:18:34.734 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:18:34.734 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 82A66FBF4D184F3296FA31F2BCE4DE2E 00:18:34.734 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:34.734 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 82A66FBF4D184F3296FA31F2BCE4DE2E 00:18:34.734 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:34.734 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:34.734 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:34.734 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:34.734 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:34.734 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:34.734 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:34.734 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:18:34.734 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 82A66FBF4D184F3296FA31F2BCE4DE2E 00:18:34.734 [2024-11-18 11:47:00.584882] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:18:34.734 [2024-11-18 11:47:00.584944] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:18:34.734 [2024-11-18 11:47:00.584981] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.734 request: 00:18:34.734 { 00:18:34.734 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:34.734 "namespace": { 00:18:34.734 "bdev_name": "invalid", 00:18:34.734 "nsid": 1, 00:18:34.734 "nguid": "82A66FBF4D184F3296FA31F2BCE4DE2E", 00:18:34.734 "no_auto_visible": false 00:18:34.734 }, 00:18:34.734 "method": "nvmf_subsystem_add_ns", 00:18:34.734 "req_id": 1 00:18:34.734 } 00:18:34.734 Got JSON-RPC error response 00:18:34.734 response: 00:18:34.734 { 00:18:34.734 "code": -32602, 00:18:34.734 "message": "Invalid parameters" 00:18:34.734 } 00:18:34.734 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:34.734 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:34.734 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:34.734 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:34.734 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 82a66fbf-4d18-4f32-96fa-31f2bce4de2e 00:18:34.734 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:18:34.734 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 82A66FBF4D184F3296FA31F2BCE4DE2E -i 00:18:34.993 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:18:37.527 11:47:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:18:37.527 11:47:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:18:37.527 11:47:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:18:37.527 11:47:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:18:37.527 11:47:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 2956669 00:18:37.527 11:47:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 2956669 ']' 00:18:37.527 11:47:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 2956669 00:18:37.527 11:47:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:18:37.527 11:47:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:37.527 11:47:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2956669 00:18:37.527 11:47:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:37.527 11:47:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:37.527 11:47:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2956669' 00:18:37.527 killing process with pid 2956669 00:18:37.527 11:47:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 2956669 00:18:37.527 11:47:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 2956669 00:18:40.069 11:47:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:40.069 11:47:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:18:40.069 11:47:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:18:40.069 11:47:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:40.069 11:47:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:18:40.069 11:47:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:40.069 11:47:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:18:40.069 11:47:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:40.069 11:47:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:40.069 rmmod nvme_tcp 00:18:40.069 rmmod 
nvme_fabrics 00:18:40.069 rmmod nvme_keyring 00:18:40.069 11:47:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:40.069 11:47:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:18:40.069 11:47:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:18:40.069 11:47:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 2955033 ']' 00:18:40.069 11:47:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 2955033 00:18:40.069 11:47:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 2955033 ']' 00:18:40.069 11:47:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 2955033 00:18:40.069 11:47:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:18:40.069 11:47:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:40.069 11:47:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2955033 00:18:40.069 11:47:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:40.069 11:47:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:40.069 11:47:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2955033' 00:18:40.069 killing process with pid 2955033 00:18:40.069 11:47:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 2955033 00:18:40.069 11:47:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 2955033 00:18:41.995 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:41.995 
11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:41.995 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:41.995 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:18:41.995 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:18:41.995 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:41.995 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:18:41.995 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:41.995 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:41.995 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:41.995 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:41.995 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:43.945 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:43.945 00:18:43.945 real 0m29.961s 00:18:43.945 user 0m44.463s 00:18:43.945 sys 0m5.071s 00:18:43.945 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:43.945 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:43.945 ************************************ 00:18:43.945 END TEST nvmf_ns_masking 00:18:43.945 ************************************ 00:18:43.945 11:47:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:18:43.945 11:47:09 nvmf_tcp.nvmf_target_extra 
-- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:18:43.945 11:47:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:43.945 11:47:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:43.945 11:47:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:43.945 ************************************ 00:18:43.945 START TEST nvmf_nvme_cli 00:18:43.945 ************************************ 00:18:43.945 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:18:43.945 * Looking for test storage... 00:18:43.945 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:43.945 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:43.945 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lcov --version 00:18:43.945 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:43.945 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:43.945 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:43.945 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:43.945 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:43.945 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:18:43.945 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:18:43.945 11:47:09 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:18:43.945 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:18:43.945 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:18:43.945 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:18:43.945 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:18:43.945 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:43.945 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:18:43.945 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:18:43.945 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:43.945 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:43.945 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:18:43.945 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:18:43.945 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:43.945 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:18:43.945 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:18:43.945 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:18:43.945 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:18:43.945 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:43.945 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:18:43.945 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:18:43.945 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:43.945 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:43.945 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:18:43.945 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:43.945 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:43.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:43.945 --rc genhtml_branch_coverage=1 00:18:43.945 --rc genhtml_function_coverage=1 00:18:43.945 --rc genhtml_legend=1 00:18:43.945 --rc geninfo_all_blocks=1 00:18:43.945 --rc geninfo_unexecuted_blocks=1 00:18:43.945 
00:18:43.945 ' 00:18:43.945 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:43.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:43.945 --rc genhtml_branch_coverage=1 00:18:43.945 --rc genhtml_function_coverage=1 00:18:43.945 --rc genhtml_legend=1 00:18:43.945 --rc geninfo_all_blocks=1 00:18:43.945 --rc geninfo_unexecuted_blocks=1 00:18:43.945 00:18:43.945 ' 00:18:43.945 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:43.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:43.945 --rc genhtml_branch_coverage=1 00:18:43.945 --rc genhtml_function_coverage=1 00:18:43.945 --rc genhtml_legend=1 00:18:43.945 --rc geninfo_all_blocks=1 00:18:43.946 --rc geninfo_unexecuted_blocks=1 00:18:43.946 00:18:43.946 ' 00:18:43.946 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:43.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:43.946 --rc genhtml_branch_coverage=1 00:18:43.946 --rc genhtml_function_coverage=1 00:18:43.946 --rc genhtml_legend=1 00:18:43.946 --rc geninfo_all_blocks=1 00:18:43.946 --rc geninfo_unexecuted_blocks=1 00:18:43.946 00:18:43.946 ' 00:18:43.946 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:43.946 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:18:43.946 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:43.946 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:43.946 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:43.946 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:18:43.946 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:43.946 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:43.946 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:43.946 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:43.946 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:43.946 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:43.946 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:43.946 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:43.946 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:43.946 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:43.946 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:43.946 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:43.946 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:43.946 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:18:43.946 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:43.946 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:43.946 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:43.946 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.946 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.946 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.946 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:18:43.946 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.946 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:18:43.946 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:43.946 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:43.946 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:43.946 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:43.946 11:47:09 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:43.946 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:43.946 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:43.946 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:43.946 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:43.946 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:43.946 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:43.946 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:43.946 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:18:43.946 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:18:43.946 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:43.946 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:43.946 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:43.946 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:43.946 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:43.946 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:43.946 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:43.946 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:18:43.946 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:43.946 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:43.946 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:18:43.946 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:45.854 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:45.854 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:18:45.854 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:45.854 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:45.854 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:45.855 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:45.855 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:45.855 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:18:45.855 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:45.855 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:18:45.855 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:18:45.855 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:18:45.855 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:18:45.855 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:18:45.855 11:47:11 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:18:45.855 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:45.855 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:45.855 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:45.855 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:45.855 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:45.855 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:45.855 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:45.855 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:45.855 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:45.855 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:45.855 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:45.855 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:45.855 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:45.855 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:45.855 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ 
e810 == mlx5 ]] 00:18:45.855 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:45.855 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:45.855 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:45.855 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:45.855 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:45.855 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:45.855 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:45.855 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:45.855 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:45.855 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:45.855 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:45.855 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:45.855 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:45.855 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:45.855 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:45.855 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:45.855 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:45.855 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:45.855 11:47:11 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:45.855 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:45.855 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:45.855 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:45.855 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:45.855 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:45.855 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:45.855 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:45.855 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:45.855 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:45.855 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:45.855 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:45.855 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:45.855 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:45.855 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:45.855 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:45.855 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:45.855 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:45.855 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:45.855 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:45.855 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:45.855 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:45.855 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:45.855 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:45.855 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:45.855 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:18:45.855 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:45.855 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:45.855 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:45.855 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:45.855 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:45.855 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:45.855 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:45.855 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:45.855 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:45.855 11:47:11 
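The trace above builds per-family arrays of PCI device IDs (e810, x722, mlx) and matches each discovered device against them before collecting its net interfaces. The same ID-matching idea can be sketched standalone, with a hard-coded mock device list standing in for the real sysfs walk (all rows below are mock data, not from a live scan):

```shell
#!/usr/bin/env bash
# Stand-in PCI scan: "address vendor device" rows, mimicking the sysfs walk
# in the traced nvmf/common.sh. All rows here are mock data.
mock_devs=(
  "0000:0a:00.0 0x8086 0x159b"
  "0000:0a:00.1 0x8086 0x159b"
  "0000:3b:00.0 0x15b3 0x1017"
)

e810=(0x1592 0x159b)              # Intel E810 device IDs, as in the trace
mlx=(0x1013 0x1015 0x1017 0x1019) # subset of the Mellanox IDs

pci_devs=()
for entry in "${mock_devs[@]}"; do
  read -r pci vendor device <<<"$entry"
  # Keep only Intel devices whose ID appears in the e810 family list.
  if [[ $vendor == 0x8086 && " ${e810[*]} " == *" $device "* ]]; then
    pci_devs+=("$pci")
    echo "Found $pci ($vendor - $device)"
  fi
done
echo "e810 NICs: ${#pci_devs[@]}"
```

With the mock rows above this reports both E810 ports and skips the Mellanox one, matching the two `Found 0000:0a:00.x (0x8086 - 0x159b)` lines in the log.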
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:45.855 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:45.855 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:45.855 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:45.855 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:45.855 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:45.855 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:45.855 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:45.855 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:46.113 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:46.113 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:46.113 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:46.113 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:46.113 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:46.113 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:46.113 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- 
# iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:46.113 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:46.113 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:46.113 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.350 ms 00:18:46.113 00:18:46.113 --- 10.0.0.2 ping statistics --- 00:18:46.113 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:46.113 rtt min/avg/max/mdev = 0.350/0.350/0.350/0.000 ms 00:18:46.113 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:46.113 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:46.113 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:18:46.113 00:18:46.113 --- 10.0.0.1 ping statistics --- 00:18:46.113 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:46.113 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:18:46.113 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:46.113 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:18:46.113 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:46.113 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:46.113 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:46.113 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:46.113 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:46.113 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:46.113 11:47:11 
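`nvmf_tcp_init` in the trace moves one port (`cvl_0_0`) into a fresh network namespace, addresses both ends, opens TCP/4420 through iptables, and pings each direction. Those steps can be sketched as a dry run that only prints the command sequence; executing it for real requires root and the actual interfaces, so `run` here just echoes:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace wiring traced above.
# Interface and namespace names are taken from the log.
target_if=cvl_0_0 initiator_if=cvl_0_1 ns=cvl_0_0_ns_spdk
target_ip=10.0.0.2 initiator_ip=10.0.0.1

run() { echo "+ $*"; }   # swap the body for "$@" (as root) to really execute

run ip netns add "$ns"
run ip link set "$target_if" netns "$ns"
run ip addr add "$initiator_ip/24" dev "$initiator_if"
run ip netns exec "$ns" ip addr add "$target_ip/24" dev "$target_if"
run ip link set "$initiator_if" up
run ip netns exec "$ns" ip link set "$target_if" up
run ip netns exec "$ns" ip link set lo up
run iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 "$target_ip"
run ip netns exec "$ns" ping -c 1 "$initiator_ip"
```

The two `ping -c 1` lines correspond to the bidirectional reachability check whose output appears in the log before the target is started.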
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:46.113 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:18:46.113 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:46.113 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:46.113 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:46.113 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=2960112 00:18:46.113 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:46.113 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 2960112 00:18:46.113 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 2960112 ']' 00:18:46.113 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:46.113 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:46.113 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:46.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:46.113 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:46.113 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:46.113 [2024-11-18 11:47:11.987529] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:18:46.113 [2024-11-18 11:47:11.987684] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:46.372 [2024-11-18 11:47:12.145667] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:46.631 [2024-11-18 11:47:12.292424] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:46.631 [2024-11-18 11:47:12.292523] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:46.631 [2024-11-18 11:47:12.292550] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:46.631 [2024-11-18 11:47:12.292576] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:46.631 [2024-11-18 11:47:12.292596] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
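`nvmfappstart` above launches `nvmf_tgt` inside the namespace and then `waitforlisten` polls until the RPC socket answers. The retry shape is generic; a minimal sketch follows, with a plain file check standing in for the real `/var/tmp/spdk.sock` probe (the file-based condition is purely illustrative):

```shell
#!/usr/bin/env bash
# Poll a condition with bounded retries, in the spirit of waitforlisten.
waitfor() {
  local max_retries=$1; shift
  local i
  for ((i = 0; i < max_retries; i++)); do
    "$@" && return 0   # condition passed: ready
    sleep 0.1
  done
  echo "timed out waiting for: $*" >&2
  return 1
}

marker=$(mktemp -u)              # hypothetical stand-in for the RPC socket
( sleep 0.3; : > "$marker" ) &   # appears "later", like a target starting up
waitfor 20 test -e "$marker" && echo "ready"
wait
rm -f "$marker"
```

The real helper also re-checks that the launched PID is still alive between probes, so a crashed target fails fast instead of burning the whole retry budget.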
00:18:46.631 [2024-11-18 11:47:12.295456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:46.631 [2024-11-18 11:47:12.295527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:46.631 [2024-11-18 11:47:12.295564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:46.631 [2024-11-18 11:47:12.295570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:47.197 11:47:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:47.197 11:47:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:18:47.197 11:47:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:47.197 11:47:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:47.197 11:47:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:47.197 11:47:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:47.197 11:47:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:47.197 11:47:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.197 11:47:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:47.197 [2024-11-18 11:47:12.968460] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:47.197 11:47:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.197 11:47:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:47.197 11:47:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
00:18:47.197 11:47:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:47.197 Malloc0 00:18:47.197 11:47:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.197 11:47:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:47.197 11:47:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.197 11:47:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:47.457 Malloc1 00:18:47.457 11:47:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.457 11:47:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:18:47.457 11:47:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.457 11:47:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:47.457 11:47:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.457 11:47:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:47.457 11:47:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.457 11:47:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:47.457 11:47:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.457 11:47:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:47.457 11:47:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.457 11:47:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:47.457 11:47:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.457 11:47:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:47.457 11:47:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.457 11:47:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:47.457 [2024-11-18 11:47:13.172593] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:47.457 11:47:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.457 11:47:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:47.457 11:47:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.457 11:47:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:47.457 11:47:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.457 11:47:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:18:47.717 00:18:47.717 Discovery Log Number of Records 2, Generation counter 2 00:18:47.717 =====Discovery Log Entry 0====== 00:18:47.717 trtype: tcp 00:18:47.717 adrfam: ipv4 00:18:47.717 subtype: current discovery subsystem 00:18:47.717 treq: not required 00:18:47.717 portid: 0 00:18:47.717 trsvcid: 4420 
00:18:47.717 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:18:47.717 traddr: 10.0.0.2 00:18:47.717 eflags: explicit discovery connections, duplicate discovery information 00:18:47.717 sectype: none 00:18:47.717 =====Discovery Log Entry 1====== 00:18:47.717 trtype: tcp 00:18:47.717 adrfam: ipv4 00:18:47.717 subtype: nvme subsystem 00:18:47.717 treq: not required 00:18:47.717 portid: 0 00:18:47.717 trsvcid: 4420 00:18:47.717 subnqn: nqn.2016-06.io.spdk:cnode1 00:18:47.717 traddr: 10.0.0.2 00:18:47.717 eflags: none 00:18:47.717 sectype: none 00:18:47.717 11:47:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:18:47.717 11:47:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:18:47.717 11:47:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:18:47.717 11:47:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:47.717 11:47:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:18:47.717 11:47:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:18:47.717 11:47:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:47.717 11:47:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:18:47.718 11:47:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:47.718 11:47:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:18:47.718 11:47:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:48.285 11:47:14 
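The `nvme discover` output above lists two records: the discovery subsystem itself and `nqn.2016-06.io.spdk:cnode1`. A small awk pass over that same record format pulls out the subnqn/traddr pairs, which are the fields the subsequent `nvme connect` step cares about (sample text below is copied from the trace):

```shell
#!/usr/bin/env bash
# Reduce discovery-log records to "subnqn traddr" pairs.
parse_discovery() {
  awk '/^subnqn:/ { nqn = $2 } /^traddr:/ { print nqn, $2 }'
}

parse_discovery <<'EOF'
=====Discovery Log Entry 0======
trtype: tcp
subnqn: nqn.2014-08.org.nvmexpress.discovery
traddr: 10.0.0.2
=====Discovery Log Entry 1======
trtype: tcp
subnqn: nqn.2016-06.io.spdk:cnode1
traddr: 10.0.0.2
EOF
```

This prints one line per entry, e.g. `nqn.2016-06.io.spdk:cnode1 10.0.0.2` for the NVMe subsystem record. Note the real records print `subnqn:` before `traddr:`, which is what lets the awk state variable work.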
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:18:48.285 11:47:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:18:48.285 11:47:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:48.285 11:47:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:18:48.285 11:47:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:18:48.285 11:47:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:18:50.188 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:50.188 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:50.188 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:50.188 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:18:50.188 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:50.188 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:18:50.188 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:18:50.188 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:18:50.188 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:50.188 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:18:50.446 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:18:50.446 
11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:50.446 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:18:50.446 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:50.446 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:18:50.446 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:18:50.446 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:50.446 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:18:50.446 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:18:50.446 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:50.446 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:18:50.446 /dev/nvme0n2 ]] 00:18:50.446 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:18:50.446 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:18:50.446 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:18:50.446 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:50.446 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:18:50.706 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:18:50.706 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:50.706 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ 
--------------------- == /dev/nvme* ]] 00:18:50.706 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:50.706 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:18:50.706 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:18:50.706 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:50.706 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:18:50.706 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:18:50.706 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:50.706 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:18:50.706 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:50.966 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:50.966 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:50.966 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:18:50.966 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:18:50.966 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:50.966 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:18:50.966 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:50.966 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # 
return 0 00:18:50.966 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:18:50.966 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:50.966 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.966 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:50.966 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.966 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:18:50.966 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:18:50.966 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:50.966 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:18:50.966 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:50.966 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:18:50.966 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:50.966 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:50.966 rmmod nvme_tcp 00:18:50.966 rmmod nvme_fabrics 00:18:50.966 rmmod nvme_keyring 00:18:50.966 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:50.966 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:18:50.966 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:18:50.966 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 2960112 ']' 
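The `get_nvme_devs` loops traced above (`read -r dev _` followed by `[[ $dev == /dev/nvme* ]]`) filter `nvme list` output line by line, discarding the `Node` header and dash separator rows. The same filter, fed a canned listing instead of the real command (sample rows modeled on the trace):

```shell
#!/usr/bin/env bash
# Reimplementation of the get_nvme_devs read-loop from the trace,
# driven by a canned `nvme list`-style listing rather than the live command.
get_nvme_devs() {
  local dev _
  while read -r dev _; do
    # Header ("Node ...") and separator ("----") rows fail this test.
    [[ $dev == /dev/nvme* ]] && echo "$dev"
  done
}

devs=($(get_nvme_devs <<'EOF'
Node                  SN                   Model
--------------------- -------------------- ----------------
/dev/nvme0n1          SPDKISFASTANDAWESOME SPDK_Controller1
/dev/nvme0n2          SPDKISFASTANDAWESOME SPDK_Controller1
EOF
))
echo "found ${#devs[@]} devices: ${devs[*]}"
```

With two namespaces attached, `${#devs[@]}` is 2, which is exactly the `nvme_num=2` the test compares against `nvme_num_before_connection=0` before disconnecting.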
00:18:50.967 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 2960112 00:18:50.967 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 2960112 ']' 00:18:50.967 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 2960112 00:18:50.967 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:18:50.967 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:50.967 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2960112 00:18:50.967 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:50.967 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:50.967 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2960112' 00:18:50.967 killing process with pid 2960112 00:18:50.967 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 2960112 00:18:50.967 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 2960112 00:18:52.870 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:52.870 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:52.870 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:52.870 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:18:52.870 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:18:52.870 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # 
iptables-restore 00:18:52.870 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:52.870 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:52.870 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:52.870 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:52.870 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:52.870 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:54.778 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:54.778 00:18:54.778 real 0m10.824s 00:18:54.778 user 0m23.238s 00:18:54.778 sys 0m2.635s 00:18:54.778 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:54.778 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:54.778 ************************************ 00:18:54.778 END TEST nvmf_nvme_cli 00:18:54.778 ************************************ 00:18:54.778 11:47:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 0 -eq 1 ]] 00:18:54.778 11:47:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:54.778 11:47:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:54.778 11:47:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:54.778 11:47:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:54.778 ************************************ 00:18:54.779 
START TEST nvmf_auth_target 00:18:54.779 ************************************ 00:18:54.779 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:54.779 * Looking for test storage... 00:18:54.779 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:54.779 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:54.779 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:18:54.779 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:54.779 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:54.779 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:54.779 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:54.779 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:54.779 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:18:54.779 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:18:54.779 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:18:54.779 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:18:54.779 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:18:54.779 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:18:54.779 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 
00:18:54.779 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:54.779 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:18:54.779 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:18:54.779 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:54.779 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:54.779 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:18:54.779 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:18:54.779 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:54.779 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:18:54.779 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:18:54.779 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:18:54.779 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:18:54.779 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:54.779 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:18:54.779 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:18:54.779 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:54.779 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:54.779 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 
00:18:54.779 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:54.779 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:54.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:54.779 --rc genhtml_branch_coverage=1 00:18:54.779 --rc genhtml_function_coverage=1 00:18:54.779 --rc genhtml_legend=1 00:18:54.779 --rc geninfo_all_blocks=1 00:18:54.779 --rc geninfo_unexecuted_blocks=1 00:18:54.779 00:18:54.779 ' 00:18:54.779 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:54.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:54.779 --rc genhtml_branch_coverage=1 00:18:54.779 --rc genhtml_function_coverage=1 00:18:54.779 --rc genhtml_legend=1 00:18:54.779 --rc geninfo_all_blocks=1 00:18:54.779 --rc geninfo_unexecuted_blocks=1 00:18:54.779 00:18:54.779 ' 00:18:54.779 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:54.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:54.779 --rc genhtml_branch_coverage=1 00:18:54.779 --rc genhtml_function_coverage=1 00:18:54.779 --rc genhtml_legend=1 00:18:54.779 --rc geninfo_all_blocks=1 00:18:54.779 --rc geninfo_unexecuted_blocks=1 00:18:54.779 00:18:54.779 ' 00:18:54.779 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:54.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:54.779 --rc genhtml_branch_coverage=1 00:18:54.779 --rc genhtml_function_coverage=1 00:18:54.779 --rc genhtml_legend=1 00:18:54.779 --rc geninfo_all_blocks=1 00:18:54.779 --rc geninfo_unexecuted_blocks=1 00:18:54.779 00:18:54.779 ' 00:18:54.779 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:54.779 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:18:54.779 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:54.779 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:54.779 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:54.779 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:54.779 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:54.779 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:54.779 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:54.779 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:54.779 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:54.779 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:54.779 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:54.779 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:54.779 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:54.779 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:54.779 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # 
NET_TYPE=phy 00:18:54.779 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:54.779 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:54.779 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:18:54.779 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:54.779 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:54.779 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:54.779 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:54.779 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:54.779 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:54.779 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:18:54.779 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:54.779 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:18:54.779 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:54.779 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:54.779 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:54.779 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:54.779 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:54.779 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:54.779 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:54.780 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:54.780 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:54.780 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:54.780 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:18:54.780 11:47:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:18:54.780 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:18:54.780 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:54.780 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:18:54.780 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:18:54.780 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:18:54.780 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:18:54.780 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:54.780 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:54.780 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:54.780 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:54.780 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:54.780 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:54.780 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:54.780 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:54.780 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:54.780 11:47:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:54.780 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:18:54.780 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.310 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:57.310 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:18:57.310 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:57.310 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:57.310 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:57.310 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:57.310 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:57.310 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:18:57.310 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:57.310 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:18:57.310 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:18:57.310 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:18:57.310 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:18:57.310 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:18:57.310 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:18:57.310 11:47:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:57.310 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:57.310 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:57.310 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:57.310 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:57.310 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:57.310 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:57.310 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:57.310 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:57.310 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:57.310 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:57.310 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:57.310 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:57.310 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:57.310 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:57.310 11:47:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:57.310 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:57.310 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:57.310 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:57.310 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:57.310 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:57.310 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:57.310 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:57.310 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:57.310 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:57.310 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:57.310 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:57.310 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:57.310 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:57.310 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:57.310 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:57.310 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:57.310 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:57.310 
11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:57.310 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:57.310 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:57.310 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:57.310 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:57.310 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:57.310 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:57.310 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:57.310 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:57.310 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:57.310 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:57.310 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:57.310 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:57.310 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:57.310 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:57.310 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:57.310 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:57.310 
11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:57.310 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:57.310 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:57.310 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:57.310 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:57.310 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:57.310 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:57.310 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:57.310 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:18:57.310 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:57.310 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:57.310 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:57.310 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:57.310 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:57.310 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:57.310 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:57.310 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:57.311 11:47:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:57.311 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:57.311 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:57.311 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:57.311 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:57.311 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:57.311 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:57.311 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:57.311 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:57.311 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:57.311 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:57.311 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:57.311 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:57.311 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:57.311 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:57.311 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:57.311 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:57.311 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:57.311 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:57.311 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.340 ms 00:18:57.311 00:18:57.311 --- 10.0.0.2 ping statistics --- 00:18:57.311 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:57.311 rtt min/avg/max/mdev = 0.340/0.340/0.340/0.000 ms 00:18:57.311 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:57.311 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:57.311 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:18:57.311 00:18:57.311 --- 10.0.0.1 ping statistics --- 00:18:57.311 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:57.311 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:18:57.311 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:57.311 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:18:57.311 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:57.311 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:57.311 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:57.311 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:57.311 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:57.311 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:57.311 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:57.311 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:18:57.311 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:57.311 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:57.311 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.311 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2962770 00:18:57.311 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:18:57.311 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2962770 00:18:57.311 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2962770 ']' 00:18:57.311 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:57.311 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:57.311 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:57.311 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:57.311 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.251 11:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:58.251 11:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:58.251 11:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:58.251 11:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:58.251 11:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.251 11:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:58.251 11:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=2962920 00:18:58.251 11:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:18:58.251 11:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:58.251 11:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:18:58.251 11:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:58.251 11:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:58.251 11:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:58.251 11:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@754 -- # digest=null 00:18:58.251 11:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:18:58.251 11:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:58.251 11:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=0f2800735ec002022607210b4417c719ae1b3a6a33e8c711 00:18:58.251 11:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:18:58.251 11:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.IcZ 00:18:58.251 11:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 0f2800735ec002022607210b4417c719ae1b3a6a33e8c711 0 00:18:58.251 11:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 0f2800735ec002022607210b4417c719ae1b3a6a33e8c711 0 00:18:58.251 11:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:58.251 11:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:58.251 11:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=0f2800735ec002022607210b4417c719ae1b3a6a33e8c711 00:18:58.251 11:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:18:58.251 11:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:58.251 11:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.IcZ 00:18:58.251 11:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.IcZ 00:18:58.251 11:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.IcZ 00:18:58.251 11:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:18:58.251 11:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:58.252 11:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:58.252 11:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:58.252 11:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:18:58.252 11:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:18:58.252 11:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:58.252 11:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=d34efb661f95fffe5322960bc55d1d3701b8dd24e63a9a5a51e08af006a30f02 00:18:58.252 11:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:18:58.252 11:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.wwx 00:18:58.252 11:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key d34efb661f95fffe5322960bc55d1d3701b8dd24e63a9a5a51e08af006a30f02 3 00:18:58.252 11:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 d34efb661f95fffe5322960bc55d1d3701b8dd24e63a9a5a51e08af006a30f02 3 00:18:58.252 11:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:58.252 11:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:58.252 11:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=d34efb661f95fffe5322960bc55d1d3701b8dd24e63a9a5a51e08af006a30f02 00:18:58.252 11:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # digest=3 00:18:58.252 11:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:58.252 11:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.wwx 00:18:58.252 11:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.wwx 00:18:58.252 11:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.wwx 00:18:58.252 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:18:58.252 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:58.252 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:58.252 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:58.252 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:18:58.252 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:18:58.252 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:58.252 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=bda8fc979973af860dfd71fff1538b56 00:18:58.252 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:18:58.252 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.IJ9 00:18:58.252 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key bda8fc979973af860dfd71fff1538b56 1 00:18:58.252 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 
bda8fc979973af860dfd71fff1538b56 1 00:18:58.252 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:58.252 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:58.252 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=bda8fc979973af860dfd71fff1538b56 00:18:58.252 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:18:58.252 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:58.252 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.IJ9 00:18:58.252 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.IJ9 00:18:58.252 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.IJ9 00:18:58.252 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:18:58.252 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:58.252 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:58.252 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:58.252 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:18:58.252 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:18:58.252 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:58.252 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=afbc40e8bf6a4d442d2327513322c55728d88bcd81ec570f 00:18:58.252 11:47:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:18:58.252 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.lvp 00:18:58.252 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key afbc40e8bf6a4d442d2327513322c55728d88bcd81ec570f 2 00:18:58.252 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 afbc40e8bf6a4d442d2327513322c55728d88bcd81ec570f 2 00:18:58.252 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:58.252 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:58.252 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=afbc40e8bf6a4d442d2327513322c55728d88bcd81ec570f 00:18:58.252 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:18:58.252 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:58.252 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.lvp 00:18:58.252 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.lvp 00:18:58.252 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.lvp 00:18:58.252 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:18:58.252 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:58.252 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:58.252 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A 
digests 00:18:58.252 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:18:58.252 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:18:58.252 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:58.252 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=b769dd7146e2943aac40afb2a86b3b2eb6a95941662e06e5 00:18:58.252 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:18:58.252 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.aWH 00:18:58.252 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key b769dd7146e2943aac40afb2a86b3b2eb6a95941662e06e5 2 00:18:58.252 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 b769dd7146e2943aac40afb2a86b3b2eb6a95941662e06e5 2 00:18:58.252 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:58.252 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:58.252 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=b769dd7146e2943aac40afb2a86b3b2eb6a95941662e06e5 00:18:58.252 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:18:58.252 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:58.511 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.aWH 00:18:58.511 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.aWH 00:18:58.511 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
keys[2]=/tmp/spdk.key-sha384.aWH 00:18:58.511 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:18:58.511 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:58.511 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:58.511 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:58.511 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:18:58.511 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:18:58.511 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:58.511 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=437202169dff948ffcb420794e64db5c 00:18:58.511 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:18:58.511 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.snA 00:18:58.511 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 437202169dff948ffcb420794e64db5c 1 00:18:58.511 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 437202169dff948ffcb420794e64db5c 1 00:18:58.511 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:58.511 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:58.511 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=437202169dff948ffcb420794e64db5c 00:18:58.511 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 
00:18:58.511 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:58.511 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.snA 00:18:58.511 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.snA 00:18:58.511 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.snA 00:18:58.511 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:18:58.511 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:58.511 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:58.511 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:58.511 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:18:58.511 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:18:58.511 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:58.511 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=4c75da85c78edc8dcfe39cefd794d9ba9400e5334176d3449f841548d758f2d9 00:18:58.511 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:18:58.511 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Cy2 00:18:58.511 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 4c75da85c78edc8dcfe39cefd794d9ba9400e5334176d3449f841548d758f2d9 3 00:18:58.511 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # 
format_key DHHC-1 4c75da85c78edc8dcfe39cefd794d9ba9400e5334176d3449f841548d758f2d9 3 00:18:58.511 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:58.511 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:58.511 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=4c75da85c78edc8dcfe39cefd794d9ba9400e5334176d3449f841548d758f2d9 00:18:58.511 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:18:58.511 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:58.511 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Cy2 00:18:58.511 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Cy2 00:18:58.511 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.Cy2 00:18:58.511 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:18:58.511 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 2962770 00:18:58.511 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2962770 ']' 00:18:58.511 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:58.511 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:58.511 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:58.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:58.511 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:58.511 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.770 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:58.770 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:58.770 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 2962920 /var/tmp/host.sock 00:18:58.770 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2962920 ']' 00:18:58.770 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:18:58.770 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:58.770 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:18:58.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:18:58.770 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:58.770 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.338 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:59.338 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:59.338 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:18:59.338 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.338 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.338 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.339 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:59.339 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.IcZ 00:18:59.339 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.339 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.339 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.339 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.IcZ 00:18:59.339 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.IcZ 00:18:59.597 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n 
/tmp/spdk.key-sha512.wwx ]] 00:18:59.597 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.wwx 00:18:59.597 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.597 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.597 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.597 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.wwx 00:18:59.597 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.wwx 00:18:59.857 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:59.857 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.IJ9 00:18:59.857 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.857 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.857 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.857 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.IJ9 00:18:59.857 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.IJ9 00:19:00.117 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # 
[[ -n /tmp/spdk.key-sha384.lvp ]] 00:19:00.117 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.lvp 00:19:00.117 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.117 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.117 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.117 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.lvp 00:19:00.375 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.lvp 00:19:00.634 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:00.634 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.aWH 00:19:00.634 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.634 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.634 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.634 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.aWH 00:19:00.634 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.aWH 00:19:00.893 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.snA ]] 00:19:00.893 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.snA 00:19:00.893 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.893 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.893 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.893 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.snA 00:19:00.893 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.snA 00:19:01.151 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:01.151 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Cy2 00:19:01.151 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.151 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.151 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.151 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.Cy2 00:19:01.151 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.Cy2 00:19:01.410 11:47:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:19:01.410 11:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:19:01.410 11:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:01.410 11:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:01.410 11:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:01.410 11:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:01.669 11:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:19:01.669 11:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:01.669 11:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:01.669 11:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:01.669 11:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:01.669 11:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:01.669 11:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:01.669 11:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.669 11:47:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.669 11:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.669 11:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:01.669 11:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:01.669 11:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:02.238 00:19:02.238 11:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:02.238 11:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:02.238 11:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:02.238 11:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:02.238 11:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:02.238 11:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.238 11:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:02.238 11:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.238 11:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:02.238 { 00:19:02.238 "cntlid": 1, 00:19:02.238 "qid": 0, 00:19:02.238 "state": "enabled", 00:19:02.238 "thread": "nvmf_tgt_poll_group_000", 00:19:02.238 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:02.238 "listen_address": { 00:19:02.238 "trtype": "TCP", 00:19:02.238 "adrfam": "IPv4", 00:19:02.238 "traddr": "10.0.0.2", 00:19:02.238 "trsvcid": "4420" 00:19:02.238 }, 00:19:02.238 "peer_address": { 00:19:02.238 "trtype": "TCP", 00:19:02.238 "adrfam": "IPv4", 00:19:02.238 "traddr": "10.0.0.1", 00:19:02.238 "trsvcid": "45792" 00:19:02.238 }, 00:19:02.238 "auth": { 00:19:02.238 "state": "completed", 00:19:02.238 "digest": "sha256", 00:19:02.238 "dhgroup": "null" 00:19:02.238 } 00:19:02.238 } 00:19:02.238 ]' 00:19:02.238 11:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:02.496 11:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:02.496 11:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:02.496 11:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:02.496 11:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:02.496 11:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:02.496 11:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:02.496 11:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:02.757 11:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGYyODAwNzM1ZWMwMDIwMjI2MDcyMTBiNDQxN2M3MTlhZTFiM2E2YTMzZThjNzExegqYIw==: --dhchap-ctrl-secret DHHC-1:03:ZDM0ZWZiNjYxZjk1ZmZmZTUzMjI5NjBiYzU1ZDFkMzcwMWI4ZGQyNGU2M2E5YTVhNTFlMDhhZjAwNmEzMGYwMnqVypg=: 00:19:02.757 11:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MGYyODAwNzM1ZWMwMDIwMjI2MDcyMTBiNDQxN2M3MTlhZTFiM2E2YTMzZThjNzExegqYIw==: --dhchap-ctrl-secret DHHC-1:03:ZDM0ZWZiNjYxZjk1ZmZmZTUzMjI5NjBiYzU1ZDFkMzcwMWI4ZGQyNGU2M2E5YTVhNTFlMDhhZjAwNmEzMGYwMnqVypg=: 00:19:03.692 11:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:03.692 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:03.692 11:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:03.692 11:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.692 11:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.692 11:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.692 11:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:03.692 11:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups null 00:19:03.692 11:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:03.950 11:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:19:03.950 11:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:03.950 11:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:03.950 11:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:03.950 11:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:03.950 11:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:03.950 11:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:03.950 11:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.950 11:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.950 11:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.950 11:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:03.950 11:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:03.950 11:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:04.516 00:19:04.516 11:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:04.516 11:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:04.516 11:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:04.775 11:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:04.775 11:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:04.775 11:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.775 11:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.775 11:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.775 11:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:04.775 { 00:19:04.775 "cntlid": 3, 00:19:04.775 "qid": 0, 00:19:04.775 "state": "enabled", 00:19:04.775 "thread": "nvmf_tgt_poll_group_000", 00:19:04.775 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:04.775 "listen_address": { 00:19:04.775 "trtype": "TCP", 00:19:04.775 "adrfam": "IPv4", 00:19:04.775 
"traddr": "10.0.0.2", 00:19:04.775 "trsvcid": "4420" 00:19:04.775 }, 00:19:04.775 "peer_address": { 00:19:04.775 "trtype": "TCP", 00:19:04.775 "adrfam": "IPv4", 00:19:04.775 "traddr": "10.0.0.1", 00:19:04.775 "trsvcid": "55914" 00:19:04.775 }, 00:19:04.775 "auth": { 00:19:04.775 "state": "completed", 00:19:04.775 "digest": "sha256", 00:19:04.775 "dhgroup": "null" 00:19:04.775 } 00:19:04.775 } 00:19:04.775 ]' 00:19:04.775 11:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:04.775 11:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:04.775 11:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:04.775 11:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:04.775 11:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:04.775 11:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:04.775 11:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:04.775 11:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:05.034 11:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmRhOGZjOTc5OTczYWY4NjBkZmQ3MWZmZjE1MzhiNTZf9teL: --dhchap-ctrl-secret DHHC-1:02:YWZiYzQwZThiZjZhNGQ0NDJkMjMyNzUxMzMyMmM1NTcyOGQ4OGJjZDgxZWM1NzBmtzwEkQ==: 00:19:05.034 11:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YmRhOGZjOTc5OTczYWY4NjBkZmQ3MWZmZjE1MzhiNTZf9teL: --dhchap-ctrl-secret DHHC-1:02:YWZiYzQwZThiZjZhNGQ0NDJkMjMyNzUxMzMyMmM1NTcyOGQ4OGJjZDgxZWM1NzBmtzwEkQ==: 00:19:06.410 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:06.410 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:06.410 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:06.410 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.410 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.410 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.410 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:06.410 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:06.410 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:06.410 11:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:19:06.410 11:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:06.410 11:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:06.410 11:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=null 00:19:06.410 11:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:06.410 11:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:06.411 11:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:06.411 11:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.411 11:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.411 11:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.411 11:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:06.411 11:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:06.411 11:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:06.669 00:19:06.929 11:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:06.929 11:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:06.929 11:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:07.188 11:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:07.188 11:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:07.188 11:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.188 11:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.188 11:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.188 11:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:07.188 { 00:19:07.188 "cntlid": 5, 00:19:07.188 "qid": 0, 00:19:07.188 "state": "enabled", 00:19:07.188 "thread": "nvmf_tgt_poll_group_000", 00:19:07.188 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:07.188 "listen_address": { 00:19:07.188 "trtype": "TCP", 00:19:07.188 "adrfam": "IPv4", 00:19:07.188 "traddr": "10.0.0.2", 00:19:07.188 "trsvcid": "4420" 00:19:07.188 }, 00:19:07.188 "peer_address": { 00:19:07.188 "trtype": "TCP", 00:19:07.188 "adrfam": "IPv4", 00:19:07.188 "traddr": "10.0.0.1", 00:19:07.188 "trsvcid": "55956" 00:19:07.188 }, 00:19:07.188 "auth": { 00:19:07.188 "state": "completed", 00:19:07.188 "digest": "sha256", 00:19:07.188 "dhgroup": "null" 00:19:07.188 } 00:19:07.188 } 00:19:07.188 ]' 00:19:07.188 11:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:07.188 11:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:07.188 11:47:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:07.188 11:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:07.188 11:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:07.188 11:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:07.188 11:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:07.188 11:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:07.447 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yjc2OWRkNzE0NmUyOTQzYWFjNDBhZmIyYTg2YjNiMmViNmE5NTk0MTY2MmUwNmU1WKPGtw==: --dhchap-ctrl-secret DHHC-1:01:NDM3MjAyMTY5ZGZmOTQ4ZmZjYjQyMDc5NGU2NGRiNWO9PzHW: 00:19:07.447 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:Yjc2OWRkNzE0NmUyOTQzYWFjNDBhZmIyYTg2YjNiMmViNmE5NTk0MTY2MmUwNmU1WKPGtw==: --dhchap-ctrl-secret DHHC-1:01:NDM3MjAyMTY5ZGZmOTQ4ZmZjYjQyMDc5NGU2NGRiNWO9PzHW: 00:19:08.383 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:08.383 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:08.383 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:08.383 
11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.383 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.383 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.383 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:08.383 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:08.383 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:08.642 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:19:08.642 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:08.642 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:08.642 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:08.642 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:08.642 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:08.642 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:08.642 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.642 11:47:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.642 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.642 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:08.642 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:08.642 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:09.208 00:19:09.208 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:09.208 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:09.208 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:09.467 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:09.467 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:09.467 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.467 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.467 11:47:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.467 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:09.467 { 00:19:09.467 "cntlid": 7, 00:19:09.467 "qid": 0, 00:19:09.467 "state": "enabled", 00:19:09.467 "thread": "nvmf_tgt_poll_group_000", 00:19:09.467 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:09.467 "listen_address": { 00:19:09.467 "trtype": "TCP", 00:19:09.467 "adrfam": "IPv4", 00:19:09.467 "traddr": "10.0.0.2", 00:19:09.467 "trsvcid": "4420" 00:19:09.467 }, 00:19:09.467 "peer_address": { 00:19:09.467 "trtype": "TCP", 00:19:09.467 "adrfam": "IPv4", 00:19:09.467 "traddr": "10.0.0.1", 00:19:09.467 "trsvcid": "55982" 00:19:09.467 }, 00:19:09.467 "auth": { 00:19:09.467 "state": "completed", 00:19:09.467 "digest": "sha256", 00:19:09.467 "dhgroup": "null" 00:19:09.467 } 00:19:09.467 } 00:19:09.467 ]' 00:19:09.467 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:09.467 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:09.467 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:09.467 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:09.467 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:09.467 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:09.467 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:09.467 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:19:09.727 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGM3NWRhODVjNzhlZGM4ZGNmZTM5Y2VmZDc5NGQ5YmE5NDAwZTUzMzQxNzZkMzQ0OWY4NDE1NDhkNzU4ZjJkOWsmC2o=: 00:19:09.727 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NGM3NWRhODVjNzhlZGM4ZGNmZTM5Y2VmZDc5NGQ5YmE5NDAwZTUzMzQxNzZkMzQ0OWY4NDE1NDhkNzU4ZjJkOWsmC2o=: 00:19:10.666 11:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:10.666 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:10.666 11:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:10.666 11:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.666 11:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.924 11:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.924 11:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:10.924 11:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:10.924 11:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:10.924 11:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:11.182 11:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:19:11.182 11:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:11.182 11:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:11.182 11:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:11.182 11:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:11.182 11:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:11.182 11:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:11.182 11:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.182 11:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.182 11:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.182 11:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:11.182 11:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:11.182 11:47:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:11.440 00:19:11.441 11:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:11.441 11:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:11.441 11:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:11.698 11:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.698 11:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:11.698 11:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.698 11:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.698 11:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.698 11:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:11.698 { 00:19:11.698 "cntlid": 9, 00:19:11.698 "qid": 0, 00:19:11.698 "state": "enabled", 00:19:11.698 "thread": "nvmf_tgt_poll_group_000", 00:19:11.698 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:11.698 "listen_address": { 00:19:11.698 "trtype": "TCP", 00:19:11.698 "adrfam": "IPv4", 00:19:11.698 "traddr": "10.0.0.2", 00:19:11.698 "trsvcid": "4420" 00:19:11.698 }, 00:19:11.698 "peer_address": { 
00:19:11.698 "trtype": "TCP", 00:19:11.698 "adrfam": "IPv4", 00:19:11.698 "traddr": "10.0.0.1", 00:19:11.698 "trsvcid": "56006" 00:19:11.698 }, 00:19:11.698 "auth": { 00:19:11.698 "state": "completed", 00:19:11.698 "digest": "sha256", 00:19:11.698 "dhgroup": "ffdhe2048" 00:19:11.698 } 00:19:11.698 } 00:19:11.698 ]' 00:19:11.698 11:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:11.698 11:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:11.698 11:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:11.956 11:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:11.956 11:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:11.956 11:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:11.956 11:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:11.956 11:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:12.215 11:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGYyODAwNzM1ZWMwMDIwMjI2MDcyMTBiNDQxN2M3MTlhZTFiM2E2YTMzZThjNzExegqYIw==: --dhchap-ctrl-secret DHHC-1:03:ZDM0ZWZiNjYxZjk1ZmZmZTUzMjI5NjBiYzU1ZDFkMzcwMWI4ZGQyNGU2M2E5YTVhNTFlMDhhZjAwNmEzMGYwMnqVypg=: 00:19:12.215 11:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MGYyODAwNzM1ZWMwMDIwMjI2MDcyMTBiNDQxN2M3MTlhZTFiM2E2YTMzZThjNzExegqYIw==: --dhchap-ctrl-secret DHHC-1:03:ZDM0ZWZiNjYxZjk1ZmZmZTUzMjI5NjBiYzU1ZDFkMzcwMWI4ZGQyNGU2M2E5YTVhNTFlMDhhZjAwNmEzMGYwMnqVypg=: 00:19:13.151 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:13.151 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:13.151 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:13.151 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.151 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.151 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.151 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:13.151 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:13.151 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:13.410 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:19:13.410 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:13.410 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:13.410 11:47:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:13.410 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:13.410 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:13.410 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:13.410 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.410 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.410 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.410 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:13.410 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:13.410 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:13.668 00:19:13.668 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:13.668 11:47:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:13.668 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:13.951 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.951 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:13.951 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.951 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.237 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.237 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:14.237 { 00:19:14.237 "cntlid": 11, 00:19:14.237 "qid": 0, 00:19:14.237 "state": "enabled", 00:19:14.237 "thread": "nvmf_tgt_poll_group_000", 00:19:14.237 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:14.237 "listen_address": { 00:19:14.237 "trtype": "TCP", 00:19:14.237 "adrfam": "IPv4", 00:19:14.237 "traddr": "10.0.0.2", 00:19:14.237 "trsvcid": "4420" 00:19:14.237 }, 00:19:14.237 "peer_address": { 00:19:14.237 "trtype": "TCP", 00:19:14.237 "adrfam": "IPv4", 00:19:14.237 "traddr": "10.0.0.1", 00:19:14.237 "trsvcid": "56028" 00:19:14.237 }, 00:19:14.237 "auth": { 00:19:14.237 "state": "completed", 00:19:14.237 "digest": "sha256", 00:19:14.237 "dhgroup": "ffdhe2048" 00:19:14.237 } 00:19:14.237 } 00:19:14.237 ]' 00:19:14.237 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:14.237 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 
== \s\h\a\2\5\6 ]] 00:19:14.237 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:14.237 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:14.237 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:14.237 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:14.237 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:14.237 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:14.495 11:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmRhOGZjOTc5OTczYWY4NjBkZmQ3MWZmZjE1MzhiNTZf9teL: --dhchap-ctrl-secret DHHC-1:02:YWZiYzQwZThiZjZhNGQ0NDJkMjMyNzUxMzMyMmM1NTcyOGQ4OGJjZDgxZWM1NzBmtzwEkQ==: 00:19:14.495 11:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YmRhOGZjOTc5OTczYWY4NjBkZmQ3MWZmZjE1MzhiNTZf9teL: --dhchap-ctrl-secret DHHC-1:02:YWZiYzQwZThiZjZhNGQ0NDJkMjMyNzUxMzMyMmM1NTcyOGQ4OGJjZDgxZWM1NzBmtzwEkQ==: 00:19:15.434 11:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:15.434 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:15.434 11:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:15.434 11:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.434 11:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.434 11:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.434 11:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:15.434 11:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:15.434 11:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:15.692 11:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:19:15.692 11:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:15.692 11:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:15.692 11:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:15.692 11:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:15.692 11:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:15.692 11:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:15.692 11:47:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.692 11:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.692 11:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.692 11:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:15.692 11:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:15.692 11:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:16.260 00:19:16.260 11:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:16.260 11:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:16.260 11:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:16.260 11:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.260 11:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:16.260 11:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.260 11:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.260 11:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.260 11:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:16.260 { 00:19:16.260 "cntlid": 13, 00:19:16.260 "qid": 0, 00:19:16.260 "state": "enabled", 00:19:16.260 "thread": "nvmf_tgt_poll_group_000", 00:19:16.260 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:16.260 "listen_address": { 00:19:16.260 "trtype": "TCP", 00:19:16.260 "adrfam": "IPv4", 00:19:16.260 "traddr": "10.0.0.2", 00:19:16.260 "trsvcid": "4420" 00:19:16.260 }, 00:19:16.260 "peer_address": { 00:19:16.260 "trtype": "TCP", 00:19:16.260 "adrfam": "IPv4", 00:19:16.260 "traddr": "10.0.0.1", 00:19:16.260 "trsvcid": "53046" 00:19:16.260 }, 00:19:16.260 "auth": { 00:19:16.261 "state": "completed", 00:19:16.261 "digest": "sha256", 00:19:16.261 "dhgroup": "ffdhe2048" 00:19:16.261 } 00:19:16.261 } 00:19:16.261 ]' 00:19:16.261 11:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:16.518 11:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:16.518 11:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:16.518 11:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:16.519 11:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:16.519 11:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:16.519 11:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:19:16.519 11:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:16.777 11:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yjc2OWRkNzE0NmUyOTQzYWFjNDBhZmIyYTg2YjNiMmViNmE5NTk0MTY2MmUwNmU1WKPGtw==: --dhchap-ctrl-secret DHHC-1:01:NDM3MjAyMTY5ZGZmOTQ4ZmZjYjQyMDc5NGU2NGRiNWO9PzHW: 00:19:16.777 11:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:Yjc2OWRkNzE0NmUyOTQzYWFjNDBhZmIyYTg2YjNiMmViNmE5NTk0MTY2MmUwNmU1WKPGtw==: --dhchap-ctrl-secret DHHC-1:01:NDM3MjAyMTY5ZGZmOTQ4ZmZjYjQyMDc5NGU2NGRiNWO9PzHW: 00:19:17.714 11:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:17.714 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:17.714 11:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:17.714 11:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.714 11:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.714 11:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.714 11:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:17.714 11:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:17.714 11:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:17.972 11:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:19:17.972 11:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:17.972 11:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:17.972 11:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:17.972 11:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:17.972 11:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:17.973 11:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:17.973 11:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.973 11:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.973 11:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.973 11:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:17.973 11:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:17.973 11:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:18.539 00:19:18.539 11:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:18.539 11:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:18.539 11:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:18.797 11:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.797 11:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:18.797 11:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.797 11:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.797 11:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.797 11:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:18.797 { 00:19:18.797 "cntlid": 15, 00:19:18.797 "qid": 0, 00:19:18.797 "state": "enabled", 00:19:18.797 "thread": "nvmf_tgt_poll_group_000", 00:19:18.797 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:18.797 "listen_address": { 00:19:18.797 "trtype": "TCP", 00:19:18.797 "adrfam": "IPv4", 00:19:18.797 "traddr": "10.0.0.2", 00:19:18.797 "trsvcid": 
"4420" 00:19:18.797 }, 00:19:18.797 "peer_address": { 00:19:18.797 "trtype": "TCP", 00:19:18.797 "adrfam": "IPv4", 00:19:18.797 "traddr": "10.0.0.1", 00:19:18.797 "trsvcid": "53062" 00:19:18.797 }, 00:19:18.797 "auth": { 00:19:18.797 "state": "completed", 00:19:18.797 "digest": "sha256", 00:19:18.797 "dhgroup": "ffdhe2048" 00:19:18.797 } 00:19:18.797 } 00:19:18.797 ]' 00:19:18.797 11:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:18.797 11:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:18.797 11:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:18.797 11:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:18.797 11:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:18.797 11:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:18.797 11:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:18.798 11:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:19.056 11:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGM3NWRhODVjNzhlZGM4ZGNmZTM5Y2VmZDc5NGQ5YmE5NDAwZTUzMzQxNzZkMzQ0OWY4NDE1NDhkNzU4ZjJkOWsmC2o=: 00:19:19.056 11:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret 
DHHC-1:03:NGM3NWRhODVjNzhlZGM4ZGNmZTM5Y2VmZDc5NGQ5YmE5NDAwZTUzMzQxNzZkMzQ0OWY4NDE1NDhkNzU4ZjJkOWsmC2o=: 00:19:19.990 11:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:19.990 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:19.990 11:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:19.990 11:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.990 11:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.990 11:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.990 11:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:19.990 11:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:19.990 11:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:19.990 11:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:20.248 11:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:19:20.248 11:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:20.248 11:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:20.248 11:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:20.248 11:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:20.248 11:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:20.248 11:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:20.248 11:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.248 11:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.248 11:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.248 11:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:20.248 11:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:20.248 11:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:20.816 00:19:20.816 11:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:20.816 11:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:19:20.816 11:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:21.074 11:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:21.074 11:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:21.074 11:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.074 11:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.074 11:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.074 11:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:21.074 { 00:19:21.074 "cntlid": 17, 00:19:21.074 "qid": 0, 00:19:21.074 "state": "enabled", 00:19:21.074 "thread": "nvmf_tgt_poll_group_000", 00:19:21.074 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:21.074 "listen_address": { 00:19:21.074 "trtype": "TCP", 00:19:21.074 "adrfam": "IPv4", 00:19:21.074 "traddr": "10.0.0.2", 00:19:21.074 "trsvcid": "4420" 00:19:21.074 }, 00:19:21.074 "peer_address": { 00:19:21.074 "trtype": "TCP", 00:19:21.074 "adrfam": "IPv4", 00:19:21.074 "traddr": "10.0.0.1", 00:19:21.074 "trsvcid": "53098" 00:19:21.074 }, 00:19:21.074 "auth": { 00:19:21.074 "state": "completed", 00:19:21.074 "digest": "sha256", 00:19:21.074 "dhgroup": "ffdhe3072" 00:19:21.074 } 00:19:21.074 } 00:19:21.074 ]' 00:19:21.074 11:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:21.074 11:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:21.074 11:47:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:21.074 11:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:21.074 11:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:21.074 11:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:21.074 11:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:21.074 11:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:21.332 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGYyODAwNzM1ZWMwMDIwMjI2MDcyMTBiNDQxN2M3MTlhZTFiM2E2YTMzZThjNzExegqYIw==: --dhchap-ctrl-secret DHHC-1:03:ZDM0ZWZiNjYxZjk1ZmZmZTUzMjI5NjBiYzU1ZDFkMzcwMWI4ZGQyNGU2M2E5YTVhNTFlMDhhZjAwNmEzMGYwMnqVypg=: 00:19:21.332 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MGYyODAwNzM1ZWMwMDIwMjI2MDcyMTBiNDQxN2M3MTlhZTFiM2E2YTMzZThjNzExegqYIw==: --dhchap-ctrl-secret DHHC-1:03:ZDM0ZWZiNjYxZjk1ZmZmZTUzMjI5NjBiYzU1ZDFkMzcwMWI4ZGQyNGU2M2E5YTVhNTFlMDhhZjAwNmEzMGYwMnqVypg=: 00:19:22.267 11:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:22.267 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:22.267 11:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:22.267 11:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.267 11:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.267 11:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.267 11:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:22.267 11:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:22.267 11:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:22.833 11:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:19:22.833 11:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:22.833 11:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:22.833 11:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:22.833 11:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:22.833 11:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:22.833 11:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:22.833 11:47:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.833 11:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.833 11:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.833 11:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:22.833 11:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:22.833 11:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:23.091 00:19:23.091 11:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:23.091 11:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:23.091 11:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:23.349 11:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.349 11:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:23.349 11:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.349 11:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.349 11:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.349 11:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:23.349 { 00:19:23.349 "cntlid": 19, 00:19:23.349 "qid": 0, 00:19:23.349 "state": "enabled", 00:19:23.349 "thread": "nvmf_tgt_poll_group_000", 00:19:23.349 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:23.349 "listen_address": { 00:19:23.349 "trtype": "TCP", 00:19:23.349 "adrfam": "IPv4", 00:19:23.349 "traddr": "10.0.0.2", 00:19:23.349 "trsvcid": "4420" 00:19:23.349 }, 00:19:23.349 "peer_address": { 00:19:23.349 "trtype": "TCP", 00:19:23.349 "adrfam": "IPv4", 00:19:23.349 "traddr": "10.0.0.1", 00:19:23.349 "trsvcid": "53124" 00:19:23.349 }, 00:19:23.349 "auth": { 00:19:23.349 "state": "completed", 00:19:23.349 "digest": "sha256", 00:19:23.349 "dhgroup": "ffdhe3072" 00:19:23.349 } 00:19:23.349 } 00:19:23.349 ]' 00:19:23.349 11:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:23.349 11:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:23.349 11:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:23.349 11:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:23.349 11:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:23.349 11:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:23.349 11:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:19:23.349 11:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:23.917 11:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmRhOGZjOTc5OTczYWY4NjBkZmQ3MWZmZjE1MzhiNTZf9teL: --dhchap-ctrl-secret DHHC-1:02:YWZiYzQwZThiZjZhNGQ0NDJkMjMyNzUxMzMyMmM1NTcyOGQ4OGJjZDgxZWM1NzBmtzwEkQ==: 00:19:23.917 11:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YmRhOGZjOTc5OTczYWY4NjBkZmQ3MWZmZjE1MzhiNTZf9teL: --dhchap-ctrl-secret DHHC-1:02:YWZiYzQwZThiZjZhNGQ0NDJkMjMyNzUxMzMyMmM1NTcyOGQ4OGJjZDgxZWM1NzBmtzwEkQ==: 00:19:24.854 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:24.855 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:24.855 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:24.855 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.855 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.855 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.855 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:24.855 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:24.855 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:25.112 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:19:25.112 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:25.112 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:25.112 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:25.112 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:25.112 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:25.112 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:25.112 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.112 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.112 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.112 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:25.112 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:25.112 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:25.369 00:19:25.370 11:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:25.370 11:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:25.370 11:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:25.631 11:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.631 11:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:25.631 11:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.631 11:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.631 11:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.631 11:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:25.631 { 00:19:25.631 "cntlid": 21, 00:19:25.631 "qid": 0, 00:19:25.631 "state": "enabled", 00:19:25.631 "thread": "nvmf_tgt_poll_group_000", 00:19:25.631 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:25.631 "listen_address": { 
00:19:25.631 "trtype": "TCP", 00:19:25.631 "adrfam": "IPv4", 00:19:25.631 "traddr": "10.0.0.2", 00:19:25.631 "trsvcid": "4420" 00:19:25.631 }, 00:19:25.631 "peer_address": { 00:19:25.631 "trtype": "TCP", 00:19:25.631 "adrfam": "IPv4", 00:19:25.631 "traddr": "10.0.0.1", 00:19:25.631 "trsvcid": "52026" 00:19:25.631 }, 00:19:25.631 "auth": { 00:19:25.631 "state": "completed", 00:19:25.631 "digest": "sha256", 00:19:25.631 "dhgroup": "ffdhe3072" 00:19:25.631 } 00:19:25.631 } 00:19:25.631 ]' 00:19:25.631 11:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:25.631 11:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:25.631 11:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:25.631 11:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:25.631 11:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:25.889 11:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:25.889 11:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:25.889 11:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:26.147 11:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yjc2OWRkNzE0NmUyOTQzYWFjNDBhZmIyYTg2YjNiMmViNmE5NTk0MTY2MmUwNmU1WKPGtw==: --dhchap-ctrl-secret DHHC-1:01:NDM3MjAyMTY5ZGZmOTQ4ZmZjYjQyMDc5NGU2NGRiNWO9PzHW: 00:19:26.147 11:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:Yjc2OWRkNzE0NmUyOTQzYWFjNDBhZmIyYTg2YjNiMmViNmE5NTk0MTY2MmUwNmU1WKPGtw==: --dhchap-ctrl-secret DHHC-1:01:NDM3MjAyMTY5ZGZmOTQ4ZmZjYjQyMDc5NGU2NGRiNWO9PzHW: 00:19:27.083 11:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:27.083 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:27.083 11:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:27.083 11:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.083 11:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.083 11:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.083 11:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:27.083 11:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:27.084 11:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:27.342 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:19:27.342 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:27.342 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:19:27.342 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:27.342 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:27.342 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:27.342 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:27.342 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.342 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.342 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.342 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:27.342 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:27.342 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:27.600 00:19:27.600 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:27.600 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:19:27.600 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:27.858 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.858 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:27.858 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.858 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.858 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.858 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:27.858 { 00:19:27.858 "cntlid": 23, 00:19:27.858 "qid": 0, 00:19:27.858 "state": "enabled", 00:19:27.858 "thread": "nvmf_tgt_poll_group_000", 00:19:27.858 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:27.858 "listen_address": { 00:19:27.858 "trtype": "TCP", 00:19:27.858 "adrfam": "IPv4", 00:19:27.858 "traddr": "10.0.0.2", 00:19:27.858 "trsvcid": "4420" 00:19:27.858 }, 00:19:27.858 "peer_address": { 00:19:27.858 "trtype": "TCP", 00:19:27.858 "adrfam": "IPv4", 00:19:27.858 "traddr": "10.0.0.1", 00:19:27.858 "trsvcid": "52054" 00:19:27.858 }, 00:19:27.858 "auth": { 00:19:27.858 "state": "completed", 00:19:27.858 "digest": "sha256", 00:19:27.858 "dhgroup": "ffdhe3072" 00:19:27.858 } 00:19:27.858 } 00:19:27.858 ]' 00:19:27.858 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:27.859 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:27.859 11:47:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:28.117 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:28.117 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:28.117 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:28.117 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:28.117 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:28.374 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGM3NWRhODVjNzhlZGM4ZGNmZTM5Y2VmZDc5NGQ5YmE5NDAwZTUzMzQxNzZkMzQ0OWY4NDE1NDhkNzU4ZjJkOWsmC2o=: 00:19:28.374 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NGM3NWRhODVjNzhlZGM4ZGNmZTM5Y2VmZDc5NGQ5YmE5NDAwZTUzMzQxNzZkMzQ0OWY4NDE1NDhkNzU4ZjJkOWsmC2o=: 00:19:29.312 11:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:29.312 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:29.312 11:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:29.312 11:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:19:29.312 11:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.312 11:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.312 11:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:29.312 11:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:29.312 11:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:29.312 11:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:29.570 11:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:19:29.570 11:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:29.570 11:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:29.570 11:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:29.570 11:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:29.570 11:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:29.570 11:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:29.570 11:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:19:29.570 11:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.570 11:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.570 11:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:29.570 11:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:29.570 11:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:30.139 00:19:30.139 11:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:30.139 11:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:30.139 11:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:30.397 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:30.397 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:30.397 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.397 11:47:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.397 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.397 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:30.397 { 00:19:30.397 "cntlid": 25, 00:19:30.397 "qid": 0, 00:19:30.397 "state": "enabled", 00:19:30.397 "thread": "nvmf_tgt_poll_group_000", 00:19:30.397 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:30.397 "listen_address": { 00:19:30.397 "trtype": "TCP", 00:19:30.397 "adrfam": "IPv4", 00:19:30.397 "traddr": "10.0.0.2", 00:19:30.397 "trsvcid": "4420" 00:19:30.397 }, 00:19:30.397 "peer_address": { 00:19:30.397 "trtype": "TCP", 00:19:30.397 "adrfam": "IPv4", 00:19:30.397 "traddr": "10.0.0.1", 00:19:30.397 "trsvcid": "52078" 00:19:30.397 }, 00:19:30.397 "auth": { 00:19:30.397 "state": "completed", 00:19:30.397 "digest": "sha256", 00:19:30.397 "dhgroup": "ffdhe4096" 00:19:30.397 } 00:19:30.397 } 00:19:30.397 ]' 00:19:30.397 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:30.397 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:30.397 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:30.397 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:30.397 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:30.397 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:30.397 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:30.397 11:47:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:30.656 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGYyODAwNzM1ZWMwMDIwMjI2MDcyMTBiNDQxN2M3MTlhZTFiM2E2YTMzZThjNzExegqYIw==: --dhchap-ctrl-secret DHHC-1:03:ZDM0ZWZiNjYxZjk1ZmZmZTUzMjI5NjBiYzU1ZDFkMzcwMWI4ZGQyNGU2M2E5YTVhNTFlMDhhZjAwNmEzMGYwMnqVypg=: 00:19:30.656 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MGYyODAwNzM1ZWMwMDIwMjI2MDcyMTBiNDQxN2M3MTlhZTFiM2E2YTMzZThjNzExegqYIw==: --dhchap-ctrl-secret DHHC-1:03:ZDM0ZWZiNjYxZjk1ZmZmZTUzMjI5NjBiYzU1ZDFkMzcwMWI4ZGQyNGU2M2E5YTVhNTFlMDhhZjAwNmEzMGYwMnqVypg=: 00:19:32.032 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:32.032 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:32.032 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:32.032 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.032 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.032 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.032 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:32.032 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:32.032 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:32.032 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:19:32.032 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:32.032 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:32.032 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:32.032 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:32.032 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:32.032 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:32.032 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.032 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.032 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.032 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:32.032 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:32.032 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:32.601 00:19:32.601 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:32.601 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:32.601 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:32.601 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:32.601 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:32.601 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.601 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.601 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.860 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:32.860 { 00:19:32.860 "cntlid": 27, 00:19:32.860 "qid": 0, 00:19:32.860 "state": "enabled", 00:19:32.860 "thread": "nvmf_tgt_poll_group_000", 00:19:32.860 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:32.860 
"listen_address": { 00:19:32.860 "trtype": "TCP", 00:19:32.860 "adrfam": "IPv4", 00:19:32.860 "traddr": "10.0.0.2", 00:19:32.860 "trsvcid": "4420" 00:19:32.860 }, 00:19:32.860 "peer_address": { 00:19:32.860 "trtype": "TCP", 00:19:32.860 "adrfam": "IPv4", 00:19:32.860 "traddr": "10.0.0.1", 00:19:32.860 "trsvcid": "52096" 00:19:32.860 }, 00:19:32.860 "auth": { 00:19:32.860 "state": "completed", 00:19:32.860 "digest": "sha256", 00:19:32.860 "dhgroup": "ffdhe4096" 00:19:32.860 } 00:19:32.860 } 00:19:32.860 ]' 00:19:32.860 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:32.860 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:32.860 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:32.860 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:32.860 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:32.860 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:32.860 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:32.860 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:33.118 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmRhOGZjOTc5OTczYWY4NjBkZmQ3MWZmZjE1MzhiNTZf9teL: --dhchap-ctrl-secret DHHC-1:02:YWZiYzQwZThiZjZhNGQ0NDJkMjMyNzUxMzMyMmM1NTcyOGQ4OGJjZDgxZWM1NzBmtzwEkQ==: 00:19:33.118 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YmRhOGZjOTc5OTczYWY4NjBkZmQ3MWZmZjE1MzhiNTZf9teL: --dhchap-ctrl-secret DHHC-1:02:YWZiYzQwZThiZjZhNGQ0NDJkMjMyNzUxMzMyMmM1NTcyOGQ4OGJjZDgxZWM1NzBmtzwEkQ==: 00:19:34.054 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:34.054 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:34.054 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:34.054 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.054 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.054 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.054 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:34.054 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:34.054 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:34.620 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:19:34.620 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:34.620 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:19:34.620 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:34.620 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:34.620 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:34.620 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:34.620 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.620 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.620 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.620 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:34.620 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:34.620 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:34.878 00:19:34.878 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:19:34.878 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:34.879 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:35.136 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.136 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:35.136 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.136 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.136 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.136 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:35.136 { 00:19:35.136 "cntlid": 29, 00:19:35.136 "qid": 0, 00:19:35.136 "state": "enabled", 00:19:35.136 "thread": "nvmf_tgt_poll_group_000", 00:19:35.137 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:35.137 "listen_address": { 00:19:35.137 "trtype": "TCP", 00:19:35.137 "adrfam": "IPv4", 00:19:35.137 "traddr": "10.0.0.2", 00:19:35.137 "trsvcid": "4420" 00:19:35.137 }, 00:19:35.137 "peer_address": { 00:19:35.137 "trtype": "TCP", 00:19:35.137 "adrfam": "IPv4", 00:19:35.137 "traddr": "10.0.0.1", 00:19:35.137 "trsvcid": "57564" 00:19:35.137 }, 00:19:35.137 "auth": { 00:19:35.137 "state": "completed", 00:19:35.137 "digest": "sha256", 00:19:35.137 "dhgroup": "ffdhe4096" 00:19:35.137 } 00:19:35.137 } 00:19:35.137 ]' 00:19:35.137 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:35.137 11:48:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:35.137 11:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:35.394 11:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:35.394 11:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:35.394 11:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:35.394 11:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:35.394 11:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:35.652 11:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yjc2OWRkNzE0NmUyOTQzYWFjNDBhZmIyYTg2YjNiMmViNmE5NTk0MTY2MmUwNmU1WKPGtw==: --dhchap-ctrl-secret DHHC-1:01:NDM3MjAyMTY5ZGZmOTQ4ZmZjYjQyMDc5NGU2NGRiNWO9PzHW: 00:19:35.652 11:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:Yjc2OWRkNzE0NmUyOTQzYWFjNDBhZmIyYTg2YjNiMmViNmE5NTk0MTY2MmUwNmU1WKPGtw==: --dhchap-ctrl-secret DHHC-1:01:NDM3MjAyMTY5ZGZmOTQ4ZmZjYjQyMDc5NGU2NGRiNWO9PzHW: 00:19:36.588 11:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:36.588 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:36.588 11:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:36.588 11:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.588 11:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.588 11:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.588 11:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:36.588 11:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:36.588 11:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:36.846 11:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:19:36.846 11:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:36.846 11:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:36.846 11:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:36.846 11:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:36.846 11:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:36.846 11:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:36.846 11:48:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.846 11:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.846 11:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.846 11:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:36.846 11:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:36.846 11:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:37.414 00:19:37.414 11:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:37.414 11:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:37.414 11:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:37.414 11:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.414 11:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:37.414 11:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.414 11:48:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.672 11:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.672 11:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:37.672 { 00:19:37.672 "cntlid": 31, 00:19:37.672 "qid": 0, 00:19:37.672 "state": "enabled", 00:19:37.672 "thread": "nvmf_tgt_poll_group_000", 00:19:37.672 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:37.672 "listen_address": { 00:19:37.672 "trtype": "TCP", 00:19:37.672 "adrfam": "IPv4", 00:19:37.672 "traddr": "10.0.0.2", 00:19:37.672 "trsvcid": "4420" 00:19:37.672 }, 00:19:37.672 "peer_address": { 00:19:37.672 "trtype": "TCP", 00:19:37.672 "adrfam": "IPv4", 00:19:37.672 "traddr": "10.0.0.1", 00:19:37.672 "trsvcid": "57594" 00:19:37.672 }, 00:19:37.672 "auth": { 00:19:37.672 "state": "completed", 00:19:37.672 "digest": "sha256", 00:19:37.672 "dhgroup": "ffdhe4096" 00:19:37.672 } 00:19:37.672 } 00:19:37.672 ]' 00:19:37.673 11:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:37.673 11:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:37.673 11:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:37.673 11:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:37.673 11:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:37.673 11:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:37.673 11:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:37.673 11:48:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:37.930 11:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGM3NWRhODVjNzhlZGM4ZGNmZTM5Y2VmZDc5NGQ5YmE5NDAwZTUzMzQxNzZkMzQ0OWY4NDE1NDhkNzU4ZjJkOWsmC2o=: 00:19:37.930 11:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NGM3NWRhODVjNzhlZGM4ZGNmZTM5Y2VmZDc5NGQ5YmE5NDAwZTUzMzQxNzZkMzQ0OWY4NDE1NDhkNzU4ZjJkOWsmC2o=: 00:19:38.864 11:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:38.864 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:38.864 11:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:38.864 11:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.864 11:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.864 11:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.864 11:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:38.864 11:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:38.864 11:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe6144 00:19:38.864 11:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:39.121 11:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:19:39.121 11:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:39.121 11:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:39.121 11:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:39.121 11:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:39.121 11:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:39.121 11:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:39.122 11:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.122 11:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.122 11:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.122 11:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:39.122 11:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:39.122 11:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:39.689 00:19:39.689 11:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:39.689 11:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:39.689 11:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:39.947 11:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.947 11:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:39.947 11:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.947 11:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.947 11:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.947 11:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:39.947 { 00:19:39.947 "cntlid": 33, 00:19:39.947 "qid": 0, 00:19:39.947 "state": "enabled", 00:19:39.947 "thread": "nvmf_tgt_poll_group_000", 00:19:39.947 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:39.947 "listen_address": { 
00:19:39.947 "trtype": "TCP", 00:19:39.947 "adrfam": "IPv4", 00:19:39.947 "traddr": "10.0.0.2", 00:19:39.947 "trsvcid": "4420" 00:19:39.947 }, 00:19:39.947 "peer_address": { 00:19:39.947 "trtype": "TCP", 00:19:39.947 "adrfam": "IPv4", 00:19:39.947 "traddr": "10.0.0.1", 00:19:39.947 "trsvcid": "57630" 00:19:39.947 }, 00:19:39.947 "auth": { 00:19:39.947 "state": "completed", 00:19:39.947 "digest": "sha256", 00:19:39.947 "dhgroup": "ffdhe6144" 00:19:39.947 } 00:19:39.947 } 00:19:39.947 ]' 00:19:39.947 11:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:40.205 11:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:40.205 11:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:40.205 11:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:40.205 11:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:40.205 11:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:40.205 11:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:40.205 11:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:40.463 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGYyODAwNzM1ZWMwMDIwMjI2MDcyMTBiNDQxN2M3MTlhZTFiM2E2YTMzZThjNzExegqYIw==: --dhchap-ctrl-secret DHHC-1:03:ZDM0ZWZiNjYxZjk1ZmZmZTUzMjI5NjBiYzU1ZDFkMzcwMWI4ZGQyNGU2M2E5YTVhNTFlMDhhZjAwNmEzMGYwMnqVypg=: 00:19:40.463 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MGYyODAwNzM1ZWMwMDIwMjI2MDcyMTBiNDQxN2M3MTlhZTFiM2E2YTMzZThjNzExegqYIw==: --dhchap-ctrl-secret DHHC-1:03:ZDM0ZWZiNjYxZjk1ZmZmZTUzMjI5NjBiYzU1ZDFkMzcwMWI4ZGQyNGU2M2E5YTVhNTFlMDhhZjAwNmEzMGYwMnqVypg=: 00:19:41.399 11:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:41.399 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:41.399 11:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:41.399 11:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.399 11:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.399 11:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.399 11:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:41.399 11:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:41.399 11:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:41.656 11:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:19:41.656 11:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:19:41.656 11:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:41.656 11:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:41.656 11:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:41.656 11:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:41.656 11:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:41.656 11:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.656 11:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.656 11:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.656 11:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:41.656 11:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:41.656 11:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:42.587 00:19:42.587 11:48:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:42.587 11:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:42.587 11:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:42.587 11:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.587 11:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:42.587 11:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.587 11:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.587 11:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.587 11:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:42.587 { 00:19:42.587 "cntlid": 35, 00:19:42.587 "qid": 0, 00:19:42.587 "state": "enabled", 00:19:42.587 "thread": "nvmf_tgt_poll_group_000", 00:19:42.587 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:42.587 "listen_address": { 00:19:42.587 "trtype": "TCP", 00:19:42.587 "adrfam": "IPv4", 00:19:42.587 "traddr": "10.0.0.2", 00:19:42.587 "trsvcid": "4420" 00:19:42.587 }, 00:19:42.587 "peer_address": { 00:19:42.587 "trtype": "TCP", 00:19:42.587 "adrfam": "IPv4", 00:19:42.587 "traddr": "10.0.0.1", 00:19:42.587 "trsvcid": "57656" 00:19:42.587 }, 00:19:42.587 "auth": { 00:19:42.587 "state": "completed", 00:19:42.587 "digest": "sha256", 00:19:42.587 "dhgroup": "ffdhe6144" 00:19:42.587 } 00:19:42.587 } 00:19:42.587 ]' 00:19:42.587 11:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq 
-r '.[0].auth.digest' 00:19:42.845 11:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:42.845 11:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:42.845 11:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:42.845 11:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:42.845 11:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:42.845 11:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:42.845 11:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:43.101 11:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmRhOGZjOTc5OTczYWY4NjBkZmQ3MWZmZjE1MzhiNTZf9teL: --dhchap-ctrl-secret DHHC-1:02:YWZiYzQwZThiZjZhNGQ0NDJkMjMyNzUxMzMyMmM1NTcyOGQ4OGJjZDgxZWM1NzBmtzwEkQ==: 00:19:43.101 11:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YmRhOGZjOTc5OTczYWY4NjBkZmQ3MWZmZjE1MzhiNTZf9teL: --dhchap-ctrl-secret DHHC-1:02:YWZiYzQwZThiZjZhNGQ0NDJkMjMyNzUxMzMyMmM1NTcyOGQ4OGJjZDgxZWM1NzBmtzwEkQ==: 00:19:44.036 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:44.036 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:44.036 11:48:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:44.036 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.036 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.036 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.036 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:44.036 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:44.036 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:44.294 11:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:19:44.294 11:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:44.294 11:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:44.294 11:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:44.294 11:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:44.294 11:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:44.294 11:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:44.294 11:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.294 11:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.294 11:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.294 11:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:44.294 11:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:44.294 11:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:44.926 00:19:44.926 11:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:44.927 11:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:44.927 11:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:45.185 11:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.185 11:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:45.185 11:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.185 11:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.185 11:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.185 11:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:45.185 { 00:19:45.185 "cntlid": 37, 00:19:45.185 "qid": 0, 00:19:45.185 "state": "enabled", 00:19:45.185 "thread": "nvmf_tgt_poll_group_000", 00:19:45.185 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:45.185 "listen_address": { 00:19:45.185 "trtype": "TCP", 00:19:45.185 "adrfam": "IPv4", 00:19:45.185 "traddr": "10.0.0.2", 00:19:45.185 "trsvcid": "4420" 00:19:45.185 }, 00:19:45.185 "peer_address": { 00:19:45.185 "trtype": "TCP", 00:19:45.185 "adrfam": "IPv4", 00:19:45.185 "traddr": "10.0.0.1", 00:19:45.185 "trsvcid": "35206" 00:19:45.185 }, 00:19:45.185 "auth": { 00:19:45.185 "state": "completed", 00:19:45.185 "digest": "sha256", 00:19:45.185 "dhgroup": "ffdhe6144" 00:19:45.185 } 00:19:45.185 } 00:19:45.185 ]' 00:19:45.185 11:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:45.185 11:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:45.185 11:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:45.185 11:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:45.185 11:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:45.444 11:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:19:45.444 11:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:45.444 11:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:45.703 11:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yjc2OWRkNzE0NmUyOTQzYWFjNDBhZmIyYTg2YjNiMmViNmE5NTk0MTY2MmUwNmU1WKPGtw==: --dhchap-ctrl-secret DHHC-1:01:NDM3MjAyMTY5ZGZmOTQ4ZmZjYjQyMDc5NGU2NGRiNWO9PzHW: 00:19:45.703 11:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:Yjc2OWRkNzE0NmUyOTQzYWFjNDBhZmIyYTg2YjNiMmViNmE5NTk0MTY2MmUwNmU1WKPGtw==: --dhchap-ctrl-secret DHHC-1:01:NDM3MjAyMTY5ZGZmOTQ4ZmZjYjQyMDc5NGU2NGRiNWO9PzHW: 00:19:46.638 11:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:46.638 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:46.638 11:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:46.638 11:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.638 11:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.638 11:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.638 11:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:19:46.638 11:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:46.638 11:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:46.897 11:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:19:46.897 11:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:46.897 11:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:46.897 11:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:46.897 11:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:46.897 11:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:46.897 11:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:46.897 11:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.897 11:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.897 11:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.897 11:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:46.897 11:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:46.897 11:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:47.466 00:19:47.466 11:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:47.466 11:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:47.466 11:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:47.724 11:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:47.724 11:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:47.724 11:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.724 11:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.724 11:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.724 11:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:47.724 { 00:19:47.724 "cntlid": 39, 00:19:47.724 "qid": 0, 00:19:47.724 "state": "enabled", 00:19:47.724 "thread": "nvmf_tgt_poll_group_000", 00:19:47.724 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:47.724 "listen_address": { 00:19:47.724 "trtype": 
"TCP", 00:19:47.724 "adrfam": "IPv4", 00:19:47.724 "traddr": "10.0.0.2", 00:19:47.724 "trsvcid": "4420" 00:19:47.724 }, 00:19:47.724 "peer_address": { 00:19:47.724 "trtype": "TCP", 00:19:47.724 "adrfam": "IPv4", 00:19:47.724 "traddr": "10.0.0.1", 00:19:47.724 "trsvcid": "35232" 00:19:47.724 }, 00:19:47.724 "auth": { 00:19:47.724 "state": "completed", 00:19:47.725 "digest": "sha256", 00:19:47.725 "dhgroup": "ffdhe6144" 00:19:47.725 } 00:19:47.725 } 00:19:47.725 ]' 00:19:47.725 11:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:47.725 11:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:47.725 11:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:47.725 11:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:47.725 11:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:47.984 11:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:47.984 11:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:47.984 11:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:48.242 11:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGM3NWRhODVjNzhlZGM4ZGNmZTM5Y2VmZDc5NGQ5YmE5NDAwZTUzMzQxNzZkMzQ0OWY4NDE1NDhkNzU4ZjJkOWsmC2o=: 00:19:48.242 11:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NGM3NWRhODVjNzhlZGM4ZGNmZTM5Y2VmZDc5NGQ5YmE5NDAwZTUzMzQxNzZkMzQ0OWY4NDE1NDhkNzU4ZjJkOWsmC2o=: 00:19:49.178 11:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:49.178 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:49.178 11:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:49.178 11:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.178 11:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.178 11:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.178 11:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:49.178 11:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:49.178 11:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:49.179 11:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:49.436 11:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:19:49.436 11:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:49.436 11:48:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:49.436 11:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:49.436 11:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:49.436 11:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:49.436 11:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:49.436 11:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.436 11:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.436 11:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.436 11:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:49.436 11:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:49.436 11:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:50.368 00:19:50.368 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:50.368 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:50.368 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:50.627 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.627 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:50.627 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.627 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.627 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.627 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:50.627 { 00:19:50.627 "cntlid": 41, 00:19:50.627 "qid": 0, 00:19:50.627 "state": "enabled", 00:19:50.627 "thread": "nvmf_tgt_poll_group_000", 00:19:50.627 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:50.627 "listen_address": { 00:19:50.627 "trtype": "TCP", 00:19:50.627 "adrfam": "IPv4", 00:19:50.627 "traddr": "10.0.0.2", 00:19:50.627 "trsvcid": "4420" 00:19:50.627 }, 00:19:50.627 "peer_address": { 00:19:50.627 "trtype": "TCP", 00:19:50.627 "adrfam": "IPv4", 00:19:50.627 "traddr": "10.0.0.1", 00:19:50.627 "trsvcid": "35266" 00:19:50.627 }, 00:19:50.627 "auth": { 00:19:50.627 "state": "completed", 00:19:50.627 "digest": "sha256", 00:19:50.627 "dhgroup": "ffdhe8192" 00:19:50.627 } 00:19:50.627 } 00:19:50.627 ]' 00:19:50.627 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:50.627 11:48:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:50.627 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:50.627 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:50.627 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:50.627 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:50.627 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:50.627 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:50.887 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGYyODAwNzM1ZWMwMDIwMjI2MDcyMTBiNDQxN2M3MTlhZTFiM2E2YTMzZThjNzExegqYIw==: --dhchap-ctrl-secret DHHC-1:03:ZDM0ZWZiNjYxZjk1ZmZmZTUzMjI5NjBiYzU1ZDFkMzcwMWI4ZGQyNGU2M2E5YTVhNTFlMDhhZjAwNmEzMGYwMnqVypg=: 00:19:50.887 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MGYyODAwNzM1ZWMwMDIwMjI2MDcyMTBiNDQxN2M3MTlhZTFiM2E2YTMzZThjNzExegqYIw==: --dhchap-ctrl-secret DHHC-1:03:ZDM0ZWZiNjYxZjk1ZmZmZTUzMjI5NjBiYzU1ZDFkMzcwMWI4ZGQyNGU2M2E5YTVhNTFlMDhhZjAwNmEzMGYwMnqVypg=: 00:19:51.827 11:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:51.827 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:19:51.827 11:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:51.827 11:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.827 11:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.827 11:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.827 11:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:51.827 11:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:51.827 11:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:52.394 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:19:52.394 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:52.394 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:52.394 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:52.394 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:52.394 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:52.394 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:52.394 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.394 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.394 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.394 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:52.394 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:52.394 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:53.333 00:19:53.333 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:53.333 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:53.333 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:53.591 11:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.591 11:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:53.591 11:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.591 11:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.591 11:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.591 11:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:53.591 { 00:19:53.591 "cntlid": 43, 00:19:53.591 "qid": 0, 00:19:53.591 "state": "enabled", 00:19:53.591 "thread": "nvmf_tgt_poll_group_000", 00:19:53.591 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:53.591 "listen_address": { 00:19:53.591 "trtype": "TCP", 00:19:53.591 "adrfam": "IPv4", 00:19:53.591 "traddr": "10.0.0.2", 00:19:53.591 "trsvcid": "4420" 00:19:53.591 }, 00:19:53.591 "peer_address": { 00:19:53.591 "trtype": "TCP", 00:19:53.591 "adrfam": "IPv4", 00:19:53.591 "traddr": "10.0.0.1", 00:19:53.591 "trsvcid": "35302" 00:19:53.591 }, 00:19:53.591 "auth": { 00:19:53.591 "state": "completed", 00:19:53.591 "digest": "sha256", 00:19:53.591 "dhgroup": "ffdhe8192" 00:19:53.591 } 00:19:53.591 } 00:19:53.591 ]' 00:19:53.591 11:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:53.591 11:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:53.591 11:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:53.591 11:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:53.591 11:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:53.591 11:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:19:53.591 11:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:53.591 11:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:53.851 11:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmRhOGZjOTc5OTczYWY4NjBkZmQ3MWZmZjE1MzhiNTZf9teL: --dhchap-ctrl-secret DHHC-1:02:YWZiYzQwZThiZjZhNGQ0NDJkMjMyNzUxMzMyMmM1NTcyOGQ4OGJjZDgxZWM1NzBmtzwEkQ==: 00:19:53.851 11:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YmRhOGZjOTc5OTczYWY4NjBkZmQ3MWZmZjE1MzhiNTZf9teL: --dhchap-ctrl-secret DHHC-1:02:YWZiYzQwZThiZjZhNGQ0NDJkMjMyNzUxMzMyMmM1NTcyOGQ4OGJjZDgxZWM1NzBmtzwEkQ==: 00:19:55.229 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:55.229 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:55.229 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:55.229 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.229 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.229 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.229 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:19:55.229 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:55.229 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:55.229 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:19:55.229 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:55.229 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:55.229 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:55.229 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:55.229 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:55.229 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:55.229 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.229 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.229 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.229 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:55.229 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:55.229 11:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:56.166 00:19:56.166 11:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:56.166 11:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:56.166 11:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:56.424 11:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.424 11:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:56.424 11:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.424 11:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.424 11:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.424 11:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:56.424 { 00:19:56.424 "cntlid": 45, 00:19:56.424 "qid": 0, 00:19:56.424 "state": "enabled", 00:19:56.424 "thread": "nvmf_tgt_poll_group_000", 00:19:56.424 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:56.424 "listen_address": { 00:19:56.424 "trtype": "TCP", 00:19:56.424 "adrfam": "IPv4", 00:19:56.424 "traddr": "10.0.0.2", 00:19:56.424 "trsvcid": "4420" 00:19:56.424 }, 00:19:56.424 "peer_address": { 00:19:56.424 "trtype": "TCP", 00:19:56.424 "adrfam": "IPv4", 00:19:56.424 "traddr": "10.0.0.1", 00:19:56.424 "trsvcid": "45382" 00:19:56.424 }, 00:19:56.424 "auth": { 00:19:56.424 "state": "completed", 00:19:56.424 "digest": "sha256", 00:19:56.424 "dhgroup": "ffdhe8192" 00:19:56.424 } 00:19:56.424 } 00:19:56.424 ]' 00:19:56.424 11:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:56.424 11:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:56.424 11:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:56.424 11:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:56.424 11:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:56.424 11:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:56.424 11:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:56.424 11:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:56.684 11:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yjc2OWRkNzE0NmUyOTQzYWFjNDBhZmIyYTg2YjNiMmViNmE5NTk0MTY2MmUwNmU1WKPGtw==: --dhchap-ctrl-secret DHHC-1:01:NDM3MjAyMTY5ZGZmOTQ4ZmZjYjQyMDc5NGU2NGRiNWO9PzHW: 00:19:56.684 11:48:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:Yjc2OWRkNzE0NmUyOTQzYWFjNDBhZmIyYTg2YjNiMmViNmE5NTk0MTY2MmUwNmU1WKPGtw==: --dhchap-ctrl-secret DHHC-1:01:NDM3MjAyMTY5ZGZmOTQ4ZmZjYjQyMDc5NGU2NGRiNWO9PzHW: 00:19:58.064 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:58.064 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:58.064 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:58.064 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.065 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.065 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.065 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:58.065 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:58.065 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:58.065 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:19:58.065 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:19:58.065 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:58.065 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:58.065 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:58.065 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:58.065 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:58.065 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.065 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.065 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.065 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:58.065 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:58.065 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:59.001 00:19:59.001 11:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:19:59.001 11:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:59.001 11:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:59.259 11:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.259 11:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:59.259 11:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.259 11:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.259 11:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.259 11:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:59.259 { 00:19:59.259 "cntlid": 47, 00:19:59.259 "qid": 0, 00:19:59.259 "state": "enabled", 00:19:59.259 "thread": "nvmf_tgt_poll_group_000", 00:19:59.259 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:59.259 "listen_address": { 00:19:59.259 "trtype": "TCP", 00:19:59.259 "adrfam": "IPv4", 00:19:59.259 "traddr": "10.0.0.2", 00:19:59.259 "trsvcid": "4420" 00:19:59.259 }, 00:19:59.259 "peer_address": { 00:19:59.259 "trtype": "TCP", 00:19:59.259 "adrfam": "IPv4", 00:19:59.259 "traddr": "10.0.0.1", 00:19:59.259 "trsvcid": "45402" 00:19:59.259 }, 00:19:59.259 "auth": { 00:19:59.259 "state": "completed", 00:19:59.259 "digest": "sha256", 00:19:59.259 "dhgroup": "ffdhe8192" 00:19:59.259 } 00:19:59.259 } 00:19:59.259 ]' 00:19:59.259 11:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:59.259 11:48:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:59.518 11:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:59.518 11:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:59.518 11:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:59.518 11:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:59.518 11:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:59.518 11:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:59.776 11:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGM3NWRhODVjNzhlZGM4ZGNmZTM5Y2VmZDc5NGQ5YmE5NDAwZTUzMzQxNzZkMzQ0OWY4NDE1NDhkNzU4ZjJkOWsmC2o=: 00:19:59.776 11:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NGM3NWRhODVjNzhlZGM4ZGNmZTM5Y2VmZDc5NGQ5YmE5NDAwZTUzMzQxNzZkMzQ0OWY4NDE1NDhkNzU4ZjJkOWsmC2o=: 00:20:00.716 11:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:00.716 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:00.716 11:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:00.716 11:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.716 11:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.716 11:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.716 11:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:00.716 11:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:00.716 11:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:00.716 11:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:00.716 11:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:00.975 11:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:20:00.975 11:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:00.975 11:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:00.975 11:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:00.975 11:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:00.975 11:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:00.975 11:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:00.975 11:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.975 11:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.975 11:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.975 11:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:00.975 11:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:00.975 11:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:01.541 00:20:01.541 11:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:01.541 11:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:01.541 11:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:01.800 11:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.800 11:48:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:01.800 11:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.800 11:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.800 11:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.800 11:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:01.800 { 00:20:01.800 "cntlid": 49, 00:20:01.800 "qid": 0, 00:20:01.800 "state": "enabled", 00:20:01.800 "thread": "nvmf_tgt_poll_group_000", 00:20:01.800 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:01.800 "listen_address": { 00:20:01.800 "trtype": "TCP", 00:20:01.800 "adrfam": "IPv4", 00:20:01.800 "traddr": "10.0.0.2", 00:20:01.800 "trsvcid": "4420" 00:20:01.800 }, 00:20:01.800 "peer_address": { 00:20:01.800 "trtype": "TCP", 00:20:01.800 "adrfam": "IPv4", 00:20:01.800 "traddr": "10.0.0.1", 00:20:01.800 "trsvcid": "45430" 00:20:01.800 }, 00:20:01.800 "auth": { 00:20:01.800 "state": "completed", 00:20:01.800 "digest": "sha384", 00:20:01.800 "dhgroup": "null" 00:20:01.800 } 00:20:01.800 } 00:20:01.800 ]' 00:20:01.800 11:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:01.800 11:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:01.800 11:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:01.800 11:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:01.800 11:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:01.800 11:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:01.800 11:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:01.800 11:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:02.059 11:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGYyODAwNzM1ZWMwMDIwMjI2MDcyMTBiNDQxN2M3MTlhZTFiM2E2YTMzZThjNzExegqYIw==: --dhchap-ctrl-secret DHHC-1:03:ZDM0ZWZiNjYxZjk1ZmZmZTUzMjI5NjBiYzU1ZDFkMzcwMWI4ZGQyNGU2M2E5YTVhNTFlMDhhZjAwNmEzMGYwMnqVypg=: 00:20:02.059 11:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MGYyODAwNzM1ZWMwMDIwMjI2MDcyMTBiNDQxN2M3MTlhZTFiM2E2YTMzZThjNzExegqYIw==: --dhchap-ctrl-secret DHHC-1:03:ZDM0ZWZiNjYxZjk1ZmZmZTUzMjI5NjBiYzU1ZDFkMzcwMWI4ZGQyNGU2M2E5YTVhNTFlMDhhZjAwNmEzMGYwMnqVypg=: 00:20:02.996 11:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:02.996 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:02.996 11:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:02.996 11:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.996 11:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.996 11:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.996 11:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:02.996 11:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:02.996 11:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:03.254 11:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:20:03.254 11:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:03.254 11:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:03.254 11:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:03.254 11:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:03.254 11:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:03.255 11:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:03.255 11:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.255 11:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.511 11:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.511 11:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:03.511 11:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:03.511 11:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:03.769 00:20:03.769 11:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:03.769 11:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:03.769 11:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:04.028 11:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.028 11:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:04.028 11:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.028 11:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.028 11:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.028 11:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:04.028 { 00:20:04.028 "cntlid": 51, 
00:20:04.028 "qid": 0, 00:20:04.028 "state": "enabled", 00:20:04.028 "thread": "nvmf_tgt_poll_group_000", 00:20:04.028 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:04.028 "listen_address": { 00:20:04.028 "trtype": "TCP", 00:20:04.028 "adrfam": "IPv4", 00:20:04.028 "traddr": "10.0.0.2", 00:20:04.028 "trsvcid": "4420" 00:20:04.028 }, 00:20:04.028 "peer_address": { 00:20:04.028 "trtype": "TCP", 00:20:04.028 "adrfam": "IPv4", 00:20:04.028 "traddr": "10.0.0.1", 00:20:04.028 "trsvcid": "45458" 00:20:04.028 }, 00:20:04.028 "auth": { 00:20:04.028 "state": "completed", 00:20:04.028 "digest": "sha384", 00:20:04.028 "dhgroup": "null" 00:20:04.028 } 00:20:04.028 } 00:20:04.028 ]' 00:20:04.028 11:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:04.028 11:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:04.028 11:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:04.028 11:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:04.028 11:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:04.028 11:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:04.028 11:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:04.028 11:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:04.287 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmRhOGZjOTc5OTczYWY4NjBkZmQ3MWZmZjE1MzhiNTZf9teL: --dhchap-ctrl-secret 
DHHC-1:02:YWZiYzQwZThiZjZhNGQ0NDJkMjMyNzUxMzMyMmM1NTcyOGQ4OGJjZDgxZWM1NzBmtzwEkQ==: 00:20:04.287 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YmRhOGZjOTc5OTczYWY4NjBkZmQ3MWZmZjE1MzhiNTZf9teL: --dhchap-ctrl-secret DHHC-1:02:YWZiYzQwZThiZjZhNGQ0NDJkMjMyNzUxMzMyMmM1NTcyOGQ4OGJjZDgxZWM1NzBmtzwEkQ==: 00:20:05.224 11:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:05.482 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:05.482 11:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:05.482 11:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.482 11:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.482 11:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.482 11:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:05.482 11:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:05.482 11:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:05.740 11:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 
00:20:05.740 11:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:05.740 11:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:05.740 11:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:05.740 11:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:05.740 11:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:05.740 11:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:05.740 11:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.740 11:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.741 11:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.741 11:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:05.741 11:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:05.741 11:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:05.998 00:20:05.999 11:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:05.999 11:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:05.999 11:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:06.257 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.257 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:06.257 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.257 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.257 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.257 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:06.257 { 00:20:06.257 "cntlid": 53, 00:20:06.257 "qid": 0, 00:20:06.257 "state": "enabled", 00:20:06.257 "thread": "nvmf_tgt_poll_group_000", 00:20:06.257 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:06.257 "listen_address": { 00:20:06.257 "trtype": "TCP", 00:20:06.257 "adrfam": "IPv4", 00:20:06.257 "traddr": "10.0.0.2", 00:20:06.257 "trsvcid": "4420" 00:20:06.257 }, 00:20:06.257 "peer_address": { 00:20:06.257 "trtype": "TCP", 00:20:06.257 "adrfam": "IPv4", 00:20:06.257 "traddr": "10.0.0.1", 00:20:06.257 "trsvcid": "44126" 00:20:06.257 }, 00:20:06.257 "auth": { 00:20:06.257 "state": "completed", 00:20:06.257 "digest": "sha384", 00:20:06.257 "dhgroup": "null" 00:20:06.257 } 00:20:06.257 } 
00:20:06.257 ]' 00:20:06.257 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:06.257 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:06.257 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:06.257 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:06.257 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:06.515 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:06.515 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:06.515 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:06.773 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yjc2OWRkNzE0NmUyOTQzYWFjNDBhZmIyYTg2YjNiMmViNmE5NTk0MTY2MmUwNmU1WKPGtw==: --dhchap-ctrl-secret DHHC-1:01:NDM3MjAyMTY5ZGZmOTQ4ZmZjYjQyMDc5NGU2NGRiNWO9PzHW: 00:20:06.773 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:Yjc2OWRkNzE0NmUyOTQzYWFjNDBhZmIyYTg2YjNiMmViNmE5NTk0MTY2MmUwNmU1WKPGtw==: --dhchap-ctrl-secret DHHC-1:01:NDM3MjAyMTY5ZGZmOTQ4ZmZjYjQyMDc5NGU2NGRiNWO9PzHW: 00:20:07.711 11:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:07.711 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:07.711 11:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:07.711 11:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.711 11:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.712 11:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.712 11:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:07.712 11:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:07.712 11:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:07.971 11:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:20:07.971 11:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:07.971 11:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:07.971 11:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:07.971 11:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:07.971 11:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:07.971 11:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:07.971 11:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.971 11:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.971 11:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.971 11:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:07.971 11:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:07.971 11:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:08.229 00:20:08.230 11:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:08.230 11:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:08.230 11:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:08.488 11:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.488 11:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:20:08.488 11:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.488 11:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.488 11:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.488 11:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:08.488 { 00:20:08.488 "cntlid": 55, 00:20:08.488 "qid": 0, 00:20:08.488 "state": "enabled", 00:20:08.488 "thread": "nvmf_tgt_poll_group_000", 00:20:08.488 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:08.488 "listen_address": { 00:20:08.488 "trtype": "TCP", 00:20:08.488 "adrfam": "IPv4", 00:20:08.488 "traddr": "10.0.0.2", 00:20:08.488 "trsvcid": "4420" 00:20:08.488 }, 00:20:08.488 "peer_address": { 00:20:08.488 "trtype": "TCP", 00:20:08.488 "adrfam": "IPv4", 00:20:08.488 "traddr": "10.0.0.1", 00:20:08.488 "trsvcid": "44156" 00:20:08.488 }, 00:20:08.488 "auth": { 00:20:08.488 "state": "completed", 00:20:08.488 "digest": "sha384", 00:20:08.488 "dhgroup": "null" 00:20:08.488 } 00:20:08.488 } 00:20:08.488 ]' 00:20:08.488 11:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:08.488 11:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:08.488 11:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:08.746 11:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:08.746 11:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:08.746 11:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:08.746 11:48:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:08.746 11:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:09.004 11:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGM3NWRhODVjNzhlZGM4ZGNmZTM5Y2VmZDc5NGQ5YmE5NDAwZTUzMzQxNzZkMzQ0OWY4NDE1NDhkNzU4ZjJkOWsmC2o=: 00:20:09.004 11:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NGM3NWRhODVjNzhlZGM4ZGNmZTM5Y2VmZDc5NGQ5YmE5NDAwZTUzMzQxNzZkMzQ0OWY4NDE1NDhkNzU4ZjJkOWsmC2o=: 00:20:09.941 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:09.941 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:09.941 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:09.941 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.941 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.941 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.941 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:09.941 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:09.941 11:48:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:09.941 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:10.199 11:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:20:10.199 11:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:10.199 11:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:10.199 11:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:10.199 11:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:10.199 11:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:10.199 11:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:10.199 11:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.199 11:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.199 11:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.199 11:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:10.199 11:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:10.199 11:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:10.769 00:20:10.769 11:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:10.769 11:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:10.769 11:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:10.769 11:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.769 11:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:10.769 11:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.769 11:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.028 11:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.028 11:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:11.028 { 00:20:11.028 "cntlid": 57, 00:20:11.028 "qid": 0, 00:20:11.028 "state": "enabled", 00:20:11.028 "thread": "nvmf_tgt_poll_group_000", 00:20:11.028 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:11.028 "listen_address": { 00:20:11.028 "trtype": "TCP", 00:20:11.028 "adrfam": "IPv4", 00:20:11.028 "traddr": "10.0.0.2", 00:20:11.028 "trsvcid": "4420" 00:20:11.028 }, 00:20:11.028 "peer_address": { 00:20:11.028 "trtype": "TCP", 00:20:11.028 "adrfam": "IPv4", 00:20:11.028 "traddr": "10.0.0.1", 00:20:11.028 "trsvcid": "44184" 00:20:11.028 }, 00:20:11.028 "auth": { 00:20:11.028 "state": "completed", 00:20:11.028 "digest": "sha384", 00:20:11.028 "dhgroup": "ffdhe2048" 00:20:11.028 } 00:20:11.028 } 00:20:11.028 ]' 00:20:11.028 11:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:11.028 11:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:11.028 11:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:11.028 11:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:11.028 11:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:11.028 11:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:11.028 11:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:11.028 11:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:11.286 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGYyODAwNzM1ZWMwMDIwMjI2MDcyMTBiNDQxN2M3MTlhZTFiM2E2YTMzZThjNzExegqYIw==: --dhchap-ctrl-secret 
DHHC-1:03:ZDM0ZWZiNjYxZjk1ZmZmZTUzMjI5NjBiYzU1ZDFkMzcwMWI4ZGQyNGU2M2E5YTVhNTFlMDhhZjAwNmEzMGYwMnqVypg=: 00:20:11.286 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MGYyODAwNzM1ZWMwMDIwMjI2MDcyMTBiNDQxN2M3MTlhZTFiM2E2YTMzZThjNzExegqYIw==: --dhchap-ctrl-secret DHHC-1:03:ZDM0ZWZiNjYxZjk1ZmZmZTUzMjI5NjBiYzU1ZDFkMzcwMWI4ZGQyNGU2M2E5YTVhNTFlMDhhZjAwNmEzMGYwMnqVypg=: 00:20:12.221 11:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:12.221 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:12.221 11:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:12.221 11:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.221 11:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.221 11:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.221 11:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:12.221 11:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:12.221 11:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:12.480 11:48:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:20:12.480 11:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:12.480 11:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:12.480 11:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:12.480 11:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:12.480 11:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:12.480 11:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:12.480 11:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.480 11:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.480 11:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.480 11:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:12.480 11:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:12.480 11:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:13.050 00:20:13.050 11:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:13.050 11:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:13.050 11:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:13.309 11:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.309 11:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:13.309 11:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.309 11:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.309 11:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.309 11:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:13.309 { 00:20:13.309 "cntlid": 59, 00:20:13.309 "qid": 0, 00:20:13.309 "state": "enabled", 00:20:13.309 "thread": "nvmf_tgt_poll_group_000", 00:20:13.309 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:13.309 "listen_address": { 00:20:13.309 "trtype": "TCP", 00:20:13.309 "adrfam": "IPv4", 00:20:13.309 "traddr": "10.0.0.2", 00:20:13.309 "trsvcid": "4420" 00:20:13.309 }, 00:20:13.309 "peer_address": { 00:20:13.309 "trtype": "TCP", 00:20:13.309 "adrfam": "IPv4", 00:20:13.309 "traddr": "10.0.0.1", 00:20:13.309 "trsvcid": "44216" 00:20:13.309 }, 00:20:13.309 "auth": { 00:20:13.309 "state": 
"completed", 00:20:13.309 "digest": "sha384", 00:20:13.309 "dhgroup": "ffdhe2048" 00:20:13.309 } 00:20:13.309 } 00:20:13.309 ]' 00:20:13.309 11:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:13.309 11:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:13.309 11:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:13.309 11:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:13.309 11:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:13.309 11:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:13.309 11:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:13.309 11:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:13.569 11:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmRhOGZjOTc5OTczYWY4NjBkZmQ3MWZmZjE1MzhiNTZf9teL: --dhchap-ctrl-secret DHHC-1:02:YWZiYzQwZThiZjZhNGQ0NDJkMjMyNzUxMzMyMmM1NTcyOGQ4OGJjZDgxZWM1NzBmtzwEkQ==: 00:20:13.569 11:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YmRhOGZjOTc5OTczYWY4NjBkZmQ3MWZmZjE1MzhiNTZf9teL: --dhchap-ctrl-secret DHHC-1:02:YWZiYzQwZThiZjZhNGQ0NDJkMjMyNzUxMzMyMmM1NTcyOGQ4OGJjZDgxZWM1NzBmtzwEkQ==: 00:20:14.944 11:48:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:14.944 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:14.944 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:14.944 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.944 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.944 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.944 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:14.944 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:14.944 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:14.944 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:20:14.944 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:14.944 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:14.944 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:14.944 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:14.944 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:14.944 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:14.944 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.944 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.945 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.945 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:14.945 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:14.945 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:15.256 00:20:15.256 11:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:15.256 11:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:15.256 11:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:15.538 
11:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.538 11:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:15.538 11:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.538 11:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.539 11:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.539 11:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:15.539 { 00:20:15.539 "cntlid": 61, 00:20:15.539 "qid": 0, 00:20:15.539 "state": "enabled", 00:20:15.539 "thread": "nvmf_tgt_poll_group_000", 00:20:15.539 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:15.539 "listen_address": { 00:20:15.539 "trtype": "TCP", 00:20:15.539 "adrfam": "IPv4", 00:20:15.539 "traddr": "10.0.0.2", 00:20:15.539 "trsvcid": "4420" 00:20:15.539 }, 00:20:15.539 "peer_address": { 00:20:15.539 "trtype": "TCP", 00:20:15.539 "adrfam": "IPv4", 00:20:15.539 "traddr": "10.0.0.1", 00:20:15.539 "trsvcid": "57752" 00:20:15.539 }, 00:20:15.539 "auth": { 00:20:15.539 "state": "completed", 00:20:15.539 "digest": "sha384", 00:20:15.539 "dhgroup": "ffdhe2048" 00:20:15.539 } 00:20:15.539 } 00:20:15.539 ]' 00:20:15.539 11:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:15.539 11:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:15.539 11:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:15.797 11:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:15.797 11:48:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:15.797 11:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:15.797 11:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:15.797 11:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:16.055 11:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yjc2OWRkNzE0NmUyOTQzYWFjNDBhZmIyYTg2YjNiMmViNmE5NTk0MTY2MmUwNmU1WKPGtw==: --dhchap-ctrl-secret DHHC-1:01:NDM3MjAyMTY5ZGZmOTQ4ZmZjYjQyMDc5NGU2NGRiNWO9PzHW: 00:20:16.055 11:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:Yjc2OWRkNzE0NmUyOTQzYWFjNDBhZmIyYTg2YjNiMmViNmE5NTk0MTY2MmUwNmU1WKPGtw==: --dhchap-ctrl-secret DHHC-1:01:NDM3MjAyMTY5ZGZmOTQ4ZmZjYjQyMDc5NGU2NGRiNWO9PzHW: 00:20:16.989 11:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:16.989 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:16.990 11:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:16.990 11:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.990 11:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.990 
11:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.990 11:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:16.990 11:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:16.990 11:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:17.248 11:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:20:17.248 11:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:17.248 11:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:17.248 11:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:17.248 11:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:17.248 11:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:17.248 11:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:17.248 11:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.248 11:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.248 11:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.248 11:48:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:17.248 11:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:17.248 11:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:17.506 00:20:17.764 11:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:17.764 11:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:17.764 11:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:18.022 11:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.022 11:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:18.022 11:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.022 11:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.022 11:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.022 11:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:18.022 { 00:20:18.022 "cntlid": 63, 00:20:18.022 
"qid": 0, 00:20:18.022 "state": "enabled", 00:20:18.022 "thread": "nvmf_tgt_poll_group_000", 00:20:18.022 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:18.022 "listen_address": { 00:20:18.022 "trtype": "TCP", 00:20:18.022 "adrfam": "IPv4", 00:20:18.022 "traddr": "10.0.0.2", 00:20:18.022 "trsvcid": "4420" 00:20:18.022 }, 00:20:18.022 "peer_address": { 00:20:18.022 "trtype": "TCP", 00:20:18.022 "adrfam": "IPv4", 00:20:18.022 "traddr": "10.0.0.1", 00:20:18.022 "trsvcid": "57778" 00:20:18.022 }, 00:20:18.022 "auth": { 00:20:18.022 "state": "completed", 00:20:18.022 "digest": "sha384", 00:20:18.022 "dhgroup": "ffdhe2048" 00:20:18.022 } 00:20:18.022 } 00:20:18.022 ]' 00:20:18.022 11:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:18.022 11:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:18.022 11:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:18.022 11:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:18.022 11:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:18.022 11:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:18.022 11:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:18.022 11:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:18.281 11:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NGM3NWRhODVjNzhlZGM4ZGNmZTM5Y2VmZDc5NGQ5YmE5NDAwZTUzMzQxNzZkMzQ0OWY4NDE1NDhkNzU4ZjJkOWsmC2o=: 00:20:18.281 11:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NGM3NWRhODVjNzhlZGM4ZGNmZTM5Y2VmZDc5NGQ5YmE5NDAwZTUzMzQxNzZkMzQ0OWY4NDE1NDhkNzU4ZjJkOWsmC2o=: 00:20:19.214 11:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:19.214 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:19.214 11:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:19.214 11:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.214 11:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.214 11:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.214 11:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:19.214 11:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:19.214 11:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:19.214 11:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:19.472 11:48:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:20:19.472 11:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:19.472 11:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:19.472 11:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:19.472 11:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:19.472 11:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:19.472 11:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:19.472 11:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.472 11:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.472 11:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.472 11:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:19.472 11:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:19.473 11:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:20.038 00:20:20.038 11:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:20.038 11:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:20.038 11:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:20.297 11:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.297 11:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:20.297 11:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.297 11:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.297 11:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.297 11:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:20.297 { 00:20:20.297 "cntlid": 65, 00:20:20.297 "qid": 0, 00:20:20.297 "state": "enabled", 00:20:20.297 "thread": "nvmf_tgt_poll_group_000", 00:20:20.297 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:20.297 "listen_address": { 00:20:20.297 "trtype": "TCP", 00:20:20.297 "adrfam": "IPv4", 00:20:20.297 "traddr": "10.0.0.2", 00:20:20.297 "trsvcid": "4420" 00:20:20.297 }, 00:20:20.297 "peer_address": { 00:20:20.297 "trtype": "TCP", 00:20:20.297 "adrfam": "IPv4", 00:20:20.297 "traddr": "10.0.0.1", 00:20:20.297 "trsvcid": "57808" 00:20:20.297 }, 00:20:20.297 "auth": { 00:20:20.297 "state": 
"completed", 00:20:20.297 "digest": "sha384", 00:20:20.297 "dhgroup": "ffdhe3072" 00:20:20.297 } 00:20:20.297 } 00:20:20.297 ]' 00:20:20.297 11:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:20.297 11:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:20.297 11:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:20.297 11:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:20.297 11:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:20.297 11:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:20.297 11:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:20.297 11:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:20.555 11:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGYyODAwNzM1ZWMwMDIwMjI2MDcyMTBiNDQxN2M3MTlhZTFiM2E2YTMzZThjNzExegqYIw==: --dhchap-ctrl-secret DHHC-1:03:ZDM0ZWZiNjYxZjk1ZmZmZTUzMjI5NjBiYzU1ZDFkMzcwMWI4ZGQyNGU2M2E5YTVhNTFlMDhhZjAwNmEzMGYwMnqVypg=: 00:20:20.555 11:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MGYyODAwNzM1ZWMwMDIwMjI2MDcyMTBiNDQxN2M3MTlhZTFiM2E2YTMzZThjNzExegqYIw==: --dhchap-ctrl-secret 
DHHC-1:03:ZDM0ZWZiNjYxZjk1ZmZmZTUzMjI5NjBiYzU1ZDFkMzcwMWI4ZGQyNGU2M2E5YTVhNTFlMDhhZjAwNmEzMGYwMnqVypg=: 00:20:21.928 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:21.928 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:21.929 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:21.929 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.929 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.929 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.929 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:21.929 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:21.929 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:21.929 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:20:21.929 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:21.929 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:21.929 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:21.929 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:20:21.929 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:21.929 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:21.929 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.929 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.929 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.929 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:21.929 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:21.929 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:22.495 00:20:22.495 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:22.495 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:22.495 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:22.753 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:22.753 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:22.753 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.753 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.753 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.753 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:22.753 { 00:20:22.753 "cntlid": 67, 00:20:22.753 "qid": 0, 00:20:22.753 "state": "enabled", 00:20:22.753 "thread": "nvmf_tgt_poll_group_000", 00:20:22.753 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:22.753 "listen_address": { 00:20:22.753 "trtype": "TCP", 00:20:22.753 "adrfam": "IPv4", 00:20:22.753 "traddr": "10.0.0.2", 00:20:22.753 "trsvcid": "4420" 00:20:22.753 }, 00:20:22.753 "peer_address": { 00:20:22.753 "trtype": "TCP", 00:20:22.753 "adrfam": "IPv4", 00:20:22.753 "traddr": "10.0.0.1", 00:20:22.753 "trsvcid": "57840" 00:20:22.753 }, 00:20:22.753 "auth": { 00:20:22.753 "state": "completed", 00:20:22.753 "digest": "sha384", 00:20:22.753 "dhgroup": "ffdhe3072" 00:20:22.753 } 00:20:22.753 } 00:20:22.753 ]' 00:20:22.753 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:22.753 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:22.753 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:22.753 11:48:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:22.753 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:22.753 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:22.753 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:22.753 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:23.011 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmRhOGZjOTc5OTczYWY4NjBkZmQ3MWZmZjE1MzhiNTZf9teL: --dhchap-ctrl-secret DHHC-1:02:YWZiYzQwZThiZjZhNGQ0NDJkMjMyNzUxMzMyMmM1NTcyOGQ4OGJjZDgxZWM1NzBmtzwEkQ==: 00:20:23.012 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YmRhOGZjOTc5OTczYWY4NjBkZmQ3MWZmZjE1MzhiNTZf9teL: --dhchap-ctrl-secret DHHC-1:02:YWZiYzQwZThiZjZhNGQ0NDJkMjMyNzUxMzMyMmM1NTcyOGQ4OGJjZDgxZWM1NzBmtzwEkQ==: 00:20:23.945 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:24.204 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:24.204 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:24.204 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:20:24.205 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.205 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.205 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:24.205 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:24.205 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:24.463 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:20:24.463 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:24.463 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:24.463 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:24.463 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:24.463 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:24.463 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:24.463 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.463 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:20:24.463 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.463 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:24.463 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:24.463 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:24.721 00:20:24.721 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:24.721 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:24.721 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:24.979 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.979 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:24.979 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.979 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.979 11:48:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.979 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:24.979 { 00:20:24.979 "cntlid": 69, 00:20:24.979 "qid": 0, 00:20:24.979 "state": "enabled", 00:20:24.979 "thread": "nvmf_tgt_poll_group_000", 00:20:24.979 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:24.979 "listen_address": { 00:20:24.979 "trtype": "TCP", 00:20:24.979 "adrfam": "IPv4", 00:20:24.979 "traddr": "10.0.0.2", 00:20:24.979 "trsvcid": "4420" 00:20:24.979 }, 00:20:24.979 "peer_address": { 00:20:24.979 "trtype": "TCP", 00:20:24.979 "adrfam": "IPv4", 00:20:24.979 "traddr": "10.0.0.1", 00:20:24.979 "trsvcid": "55310" 00:20:24.979 }, 00:20:24.979 "auth": { 00:20:24.979 "state": "completed", 00:20:24.979 "digest": "sha384", 00:20:24.979 "dhgroup": "ffdhe3072" 00:20:24.979 } 00:20:24.979 } 00:20:24.979 ]' 00:20:24.979 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:25.237 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:25.238 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:25.238 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:25.238 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:25.238 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:25.238 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:25.238 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:25.495 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yjc2OWRkNzE0NmUyOTQzYWFjNDBhZmIyYTg2YjNiMmViNmE5NTk0MTY2MmUwNmU1WKPGtw==: --dhchap-ctrl-secret DHHC-1:01:NDM3MjAyMTY5ZGZmOTQ4ZmZjYjQyMDc5NGU2NGRiNWO9PzHW: 00:20:25.495 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:Yjc2OWRkNzE0NmUyOTQzYWFjNDBhZmIyYTg2YjNiMmViNmE5NTk0MTY2MmUwNmU1WKPGtw==: --dhchap-ctrl-secret DHHC-1:01:NDM3MjAyMTY5ZGZmOTQ4ZmZjYjQyMDc5NGU2NGRiNWO9PzHW: 00:20:26.428 11:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:26.428 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:26.428 11:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:26.428 11:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.428 11:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.428 11:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.428 11:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:26.428 11:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:26.428 11:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:26.686 11:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:20:26.686 11:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:26.686 11:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:26.686 11:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:26.686 11:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:26.686 11:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:26.686 11:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:26.686 11:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.686 11:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.686 11:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.686 11:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:26.686 11:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:26.686 11:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:27.253 00:20:27.253 11:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:27.253 11:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:27.253 11:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:27.253 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.253 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:27.253 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.253 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.253 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.253 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:27.253 { 00:20:27.253 "cntlid": 71, 00:20:27.253 "qid": 0, 00:20:27.253 "state": "enabled", 00:20:27.253 "thread": "nvmf_tgt_poll_group_000", 00:20:27.253 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:27.253 "listen_address": { 00:20:27.253 "trtype": "TCP", 00:20:27.253 "adrfam": "IPv4", 00:20:27.253 "traddr": "10.0.0.2", 00:20:27.253 "trsvcid": "4420" 00:20:27.253 }, 00:20:27.253 "peer_address": { 00:20:27.253 "trtype": "TCP", 00:20:27.253 "adrfam": "IPv4", 00:20:27.253 "traddr": "10.0.0.1", 
00:20:27.253 "trsvcid": "55336" 00:20:27.253 }, 00:20:27.253 "auth": { 00:20:27.253 "state": "completed", 00:20:27.253 "digest": "sha384", 00:20:27.253 "dhgroup": "ffdhe3072" 00:20:27.253 } 00:20:27.253 } 00:20:27.253 ]' 00:20:27.512 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:27.512 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:27.512 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:27.512 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:27.512 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:27.512 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:27.512 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:27.512 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:27.770 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGM3NWRhODVjNzhlZGM4ZGNmZTM5Y2VmZDc5NGQ5YmE5NDAwZTUzMzQxNzZkMzQ0OWY4NDE1NDhkNzU4ZjJkOWsmC2o=: 00:20:27.770 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NGM3NWRhODVjNzhlZGM4ZGNmZTM5Y2VmZDc5NGQ5YmE5NDAwZTUzMzQxNzZkMzQ0OWY4NDE1NDhkNzU4ZjJkOWsmC2o=: 00:20:28.707 11:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:28.707 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:28.707 11:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:28.707 11:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.707 11:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.707 11:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.707 11:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:28.707 11:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:28.707 11:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:28.707 11:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:29.274 11:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:20:29.274 11:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:29.274 11:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:29.274 11:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:29.274 11:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:29.274 11:48:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:29.274 11:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:29.274 11:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.274 11:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.274 11:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.274 11:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:29.274 11:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:29.274 11:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:29.532 00:20:29.532 11:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:29.532 11:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:29.532 11:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:29.791 11:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.791 11:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:29.791 11:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.791 11:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.791 11:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.791 11:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:29.791 { 00:20:29.791 "cntlid": 73, 00:20:29.791 "qid": 0, 00:20:29.791 "state": "enabled", 00:20:29.791 "thread": "nvmf_tgt_poll_group_000", 00:20:29.791 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:29.791 "listen_address": { 00:20:29.791 "trtype": "TCP", 00:20:29.791 "adrfam": "IPv4", 00:20:29.791 "traddr": "10.0.0.2", 00:20:29.791 "trsvcid": "4420" 00:20:29.791 }, 00:20:29.791 "peer_address": { 00:20:29.791 "trtype": "TCP", 00:20:29.791 "adrfam": "IPv4", 00:20:29.791 "traddr": "10.0.0.1", 00:20:29.791 "trsvcid": "55360" 00:20:29.791 }, 00:20:29.791 "auth": { 00:20:29.791 "state": "completed", 00:20:29.791 "digest": "sha384", 00:20:29.791 "dhgroup": "ffdhe4096" 00:20:29.791 } 00:20:29.791 } 00:20:29.791 ]' 00:20:29.791 11:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:29.791 11:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:29.791 11:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:29.791 11:48:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:29.791 11:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:30.051 11:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:30.051 11:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:30.051 11:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:30.309 11:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGYyODAwNzM1ZWMwMDIwMjI2MDcyMTBiNDQxN2M3MTlhZTFiM2E2YTMzZThjNzExegqYIw==: --dhchap-ctrl-secret DHHC-1:03:ZDM0ZWZiNjYxZjk1ZmZmZTUzMjI5NjBiYzU1ZDFkMzcwMWI4ZGQyNGU2M2E5YTVhNTFlMDhhZjAwNmEzMGYwMnqVypg=: 00:20:30.309 11:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MGYyODAwNzM1ZWMwMDIwMjI2MDcyMTBiNDQxN2M3MTlhZTFiM2E2YTMzZThjNzExegqYIw==: --dhchap-ctrl-secret DHHC-1:03:ZDM0ZWZiNjYxZjk1ZmZmZTUzMjI5NjBiYzU1ZDFkMzcwMWI4ZGQyNGU2M2E5YTVhNTFlMDhhZjAwNmEzMGYwMnqVypg=: 00:20:31.242 11:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:31.242 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:31.242 11:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:31.242 11:48:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.242 11:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.242 11:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.242 11:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:31.242 11:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:31.242 11:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:31.501 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:20:31.501 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:31.501 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:31.501 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:31.501 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:31.501 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:31.501 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:31.501 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.501 11:48:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.501 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.501 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:31.501 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:31.501 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:32.068 00:20:32.068 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:32.068 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:32.068 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:32.327 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.327 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:32.327 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.327 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:32.327 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.327 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:32.327 { 00:20:32.327 "cntlid": 75, 00:20:32.327 "qid": 0, 00:20:32.327 "state": "enabled", 00:20:32.327 "thread": "nvmf_tgt_poll_group_000", 00:20:32.327 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:32.327 "listen_address": { 00:20:32.327 "trtype": "TCP", 00:20:32.327 "adrfam": "IPv4", 00:20:32.327 "traddr": "10.0.0.2", 00:20:32.327 "trsvcid": "4420" 00:20:32.327 }, 00:20:32.327 "peer_address": { 00:20:32.327 "trtype": "TCP", 00:20:32.327 "adrfam": "IPv4", 00:20:32.327 "traddr": "10.0.0.1", 00:20:32.327 "trsvcid": "55376" 00:20:32.327 }, 00:20:32.327 "auth": { 00:20:32.327 "state": "completed", 00:20:32.327 "digest": "sha384", 00:20:32.327 "dhgroup": "ffdhe4096" 00:20:32.327 } 00:20:32.327 } 00:20:32.327 ]' 00:20:32.327 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:32.327 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:32.327 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:32.327 11:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:32.327 11:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:32.327 11:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:32.327 11:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:32.327 11:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:32.585 11:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmRhOGZjOTc5OTczYWY4NjBkZmQ3MWZmZjE1MzhiNTZf9teL: --dhchap-ctrl-secret DHHC-1:02:YWZiYzQwZThiZjZhNGQ0NDJkMjMyNzUxMzMyMmM1NTcyOGQ4OGJjZDgxZWM1NzBmtzwEkQ==: 00:20:32.585 11:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YmRhOGZjOTc5OTczYWY4NjBkZmQ3MWZmZjE1MzhiNTZf9teL: --dhchap-ctrl-secret DHHC-1:02:YWZiYzQwZThiZjZhNGQ0NDJkMjMyNzUxMzMyMmM1NTcyOGQ4OGJjZDgxZWM1NzBmtzwEkQ==: 00:20:33.521 11:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:33.521 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:33.521 11:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:33.521 11:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.521 11:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.521 11:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.522 11:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:33.522 11:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:33.522 11:48:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:33.779 11:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:20:33.779 11:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:33.779 11:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:33.779 11:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:33.779 11:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:33.779 11:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:33.779 11:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:33.779 11:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.779 11:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.779 11:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.779 11:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:33.779 11:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:33.779 11:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:34.347 00:20:34.347 11:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:34.347 11:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:34.347 11:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:34.605 11:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.605 11:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:34.605 11:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.605 11:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.605 11:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.605 11:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:34.605 { 00:20:34.605 "cntlid": 77, 00:20:34.605 "qid": 0, 00:20:34.605 "state": "enabled", 00:20:34.605 "thread": "nvmf_tgt_poll_group_000", 00:20:34.605 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:34.605 "listen_address": { 00:20:34.605 "trtype": "TCP", 00:20:34.606 "adrfam": "IPv4", 00:20:34.606 "traddr": "10.0.0.2", 00:20:34.606 
"trsvcid": "4420" 00:20:34.606 }, 00:20:34.606 "peer_address": { 00:20:34.606 "trtype": "TCP", 00:20:34.606 "adrfam": "IPv4", 00:20:34.606 "traddr": "10.0.0.1", 00:20:34.606 "trsvcid": "41176" 00:20:34.606 }, 00:20:34.606 "auth": { 00:20:34.606 "state": "completed", 00:20:34.606 "digest": "sha384", 00:20:34.606 "dhgroup": "ffdhe4096" 00:20:34.606 } 00:20:34.606 } 00:20:34.606 ]' 00:20:34.606 11:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:34.606 11:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:34.606 11:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:34.606 11:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:34.606 11:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:34.606 11:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:34.606 11:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:34.606 11:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:34.865 11:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yjc2OWRkNzE0NmUyOTQzYWFjNDBhZmIyYTg2YjNiMmViNmE5NTk0MTY2MmUwNmU1WKPGtw==: --dhchap-ctrl-secret DHHC-1:01:NDM3MjAyMTY5ZGZmOTQ4ZmZjYjQyMDc5NGU2NGRiNWO9PzHW: 00:20:34.866 11:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:Yjc2OWRkNzE0NmUyOTQzYWFjNDBhZmIyYTg2YjNiMmViNmE5NTk0MTY2MmUwNmU1WKPGtw==: --dhchap-ctrl-secret DHHC-1:01:NDM3MjAyMTY5ZGZmOTQ4ZmZjYjQyMDc5NGU2NGRiNWO9PzHW: 00:20:36.243 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:36.243 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:36.243 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:36.243 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.243 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.243 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.243 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:36.243 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:36.243 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:36.243 11:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:20:36.243 11:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:36.243 11:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:36.243 11:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:36.243 11:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:36.243 11:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:36.243 11:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:36.243 11:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.243 11:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.243 11:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.243 11:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:36.243 11:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:36.243 11:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:36.813 00:20:36.813 11:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:36.813 11:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:36.813 11:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:36.813 11:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.813 11:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:36.813 11:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.813 11:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.071 11:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.071 11:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:37.071 { 00:20:37.071 "cntlid": 79, 00:20:37.071 "qid": 0, 00:20:37.071 "state": "enabled", 00:20:37.071 "thread": "nvmf_tgt_poll_group_000", 00:20:37.072 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:37.072 "listen_address": { 00:20:37.072 "trtype": "TCP", 00:20:37.072 "adrfam": "IPv4", 00:20:37.072 "traddr": "10.0.0.2", 00:20:37.072 "trsvcid": "4420" 00:20:37.072 }, 00:20:37.072 "peer_address": { 00:20:37.072 "trtype": "TCP", 00:20:37.072 "adrfam": "IPv4", 00:20:37.072 "traddr": "10.0.0.1", 00:20:37.072 "trsvcid": "41190" 00:20:37.072 }, 00:20:37.072 "auth": { 00:20:37.072 "state": "completed", 00:20:37.072 "digest": "sha384", 00:20:37.072 "dhgroup": "ffdhe4096" 00:20:37.072 } 00:20:37.072 } 00:20:37.072 ]' 00:20:37.072 11:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:37.072 11:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:37.072 11:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:37.072 11:49:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:37.072 11:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:37.072 11:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:37.072 11:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:37.072 11:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:37.331 11:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGM3NWRhODVjNzhlZGM4ZGNmZTM5Y2VmZDc5NGQ5YmE5NDAwZTUzMzQxNzZkMzQ0OWY4NDE1NDhkNzU4ZjJkOWsmC2o=: 00:20:37.331 11:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NGM3NWRhODVjNzhlZGM4ZGNmZTM5Y2VmZDc5NGQ5YmE5NDAwZTUzMzQxNzZkMzQ0OWY4NDE1NDhkNzU4ZjJkOWsmC2o=: 00:20:38.265 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:38.265 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:38.265 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:38.265 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.265 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:20:38.265 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.265 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:38.265 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:38.265 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:38.265 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:38.524 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:20:38.524 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:38.524 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:38.524 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:38.524 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:38.524 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:38.524 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:38.524 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.524 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:20:38.784 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.784 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:38.784 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:38.784 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:39.353 00:20:39.353 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:39.353 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:39.353 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:39.611 11:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.611 11:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:39.611 11:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.611 11:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.611 11:49:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.611 11:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:39.611 { 00:20:39.611 "cntlid": 81, 00:20:39.611 "qid": 0, 00:20:39.611 "state": "enabled", 00:20:39.611 "thread": "nvmf_tgt_poll_group_000", 00:20:39.611 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:39.611 "listen_address": { 00:20:39.611 "trtype": "TCP", 00:20:39.611 "adrfam": "IPv4", 00:20:39.611 "traddr": "10.0.0.2", 00:20:39.611 "trsvcid": "4420" 00:20:39.611 }, 00:20:39.611 "peer_address": { 00:20:39.611 "trtype": "TCP", 00:20:39.611 "adrfam": "IPv4", 00:20:39.611 "traddr": "10.0.0.1", 00:20:39.611 "trsvcid": "41224" 00:20:39.611 }, 00:20:39.611 "auth": { 00:20:39.611 "state": "completed", 00:20:39.611 "digest": "sha384", 00:20:39.612 "dhgroup": "ffdhe6144" 00:20:39.612 } 00:20:39.612 } 00:20:39.612 ]' 00:20:39.612 11:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:39.612 11:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:39.612 11:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:39.612 11:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:39.612 11:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:39.612 11:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:39.612 11:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:39.612 11:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:39.870 11:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGYyODAwNzM1ZWMwMDIwMjI2MDcyMTBiNDQxN2M3MTlhZTFiM2E2YTMzZThjNzExegqYIw==: --dhchap-ctrl-secret DHHC-1:03:ZDM0ZWZiNjYxZjk1ZmZmZTUzMjI5NjBiYzU1ZDFkMzcwMWI4ZGQyNGU2M2E5YTVhNTFlMDhhZjAwNmEzMGYwMnqVypg=: 00:20:39.870 11:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MGYyODAwNzM1ZWMwMDIwMjI2MDcyMTBiNDQxN2M3MTlhZTFiM2E2YTMzZThjNzExegqYIw==: --dhchap-ctrl-secret DHHC-1:03:ZDM0ZWZiNjYxZjk1ZmZmZTUzMjI5NjBiYzU1ZDFkMzcwMWI4ZGQyNGU2M2E5YTVhNTFlMDhhZjAwNmEzMGYwMnqVypg=: 00:20:40.806 11:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:40.806 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:40.806 11:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:40.806 11:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.806 11:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.806 11:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.806 11:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:40.806 11:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:40.806 11:49:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:41.064 11:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:20:41.064 11:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:41.064 11:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:41.065 11:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:41.065 11:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:41.065 11:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:41.065 11:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:41.065 11:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.065 11:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.065 11:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.065 11:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:41.065 11:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:41.065 11:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:41.630 00:20:41.630 11:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:41.630 11:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:41.630 11:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:41.888 11:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.888 11:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:41.888 11:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.888 11:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.888 11:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.888 11:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:41.888 { 00:20:41.888 "cntlid": 83, 00:20:41.888 "qid": 0, 00:20:41.888 "state": "enabled", 00:20:41.888 "thread": "nvmf_tgt_poll_group_000", 00:20:41.888 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:41.888 "listen_address": { 00:20:41.888 "trtype": "TCP", 00:20:41.888 "adrfam": "IPv4", 00:20:41.888 "traddr": "10.0.0.2", 00:20:41.888 
"trsvcid": "4420" 00:20:41.888 }, 00:20:41.888 "peer_address": { 00:20:41.888 "trtype": "TCP", 00:20:41.888 "adrfam": "IPv4", 00:20:41.888 "traddr": "10.0.0.1", 00:20:41.888 "trsvcid": "41256" 00:20:41.888 }, 00:20:41.888 "auth": { 00:20:41.888 "state": "completed", 00:20:41.888 "digest": "sha384", 00:20:41.888 "dhgroup": "ffdhe6144" 00:20:41.888 } 00:20:41.888 } 00:20:41.888 ]' 00:20:41.888 11:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:42.146 11:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:42.146 11:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:42.146 11:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:42.146 11:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:42.146 11:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:42.146 11:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:42.146 11:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:42.404 11:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmRhOGZjOTc5OTczYWY4NjBkZmQ3MWZmZjE1MzhiNTZf9teL: --dhchap-ctrl-secret DHHC-1:02:YWZiYzQwZThiZjZhNGQ0NDJkMjMyNzUxMzMyMmM1NTcyOGQ4OGJjZDgxZWM1NzBmtzwEkQ==: 00:20:42.404 11:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YmRhOGZjOTc5OTczYWY4NjBkZmQ3MWZmZjE1MzhiNTZf9teL: --dhchap-ctrl-secret DHHC-1:02:YWZiYzQwZThiZjZhNGQ0NDJkMjMyNzUxMzMyMmM1NTcyOGQ4OGJjZDgxZWM1NzBmtzwEkQ==: 00:20:43.339 11:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:43.339 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:43.339 11:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:43.339 11:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.339 11:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.339 11:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.339 11:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:43.339 11:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:43.339 11:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:43.596 11:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:20:43.596 11:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:43.596 11:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:43.596 11:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:43.596 11:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:43.596 11:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:43.596 11:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:43.596 11:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.596 11:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.596 11:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.596 11:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:43.596 11:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:43.596 11:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:44.532 00:20:44.532 11:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:44.532 11:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:20:44.532 11:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:44.532 11:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.532 11:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:44.532 11:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.532 11:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.791 11:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.791 11:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:44.791 { 00:20:44.791 "cntlid": 85, 00:20:44.791 "qid": 0, 00:20:44.791 "state": "enabled", 00:20:44.791 "thread": "nvmf_tgt_poll_group_000", 00:20:44.791 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:44.791 "listen_address": { 00:20:44.791 "trtype": "TCP", 00:20:44.791 "adrfam": "IPv4", 00:20:44.791 "traddr": "10.0.0.2", 00:20:44.791 "trsvcid": "4420" 00:20:44.791 }, 00:20:44.791 "peer_address": { 00:20:44.791 "trtype": "TCP", 00:20:44.791 "adrfam": "IPv4", 00:20:44.791 "traddr": "10.0.0.1", 00:20:44.791 "trsvcid": "41284" 00:20:44.791 }, 00:20:44.791 "auth": { 00:20:44.791 "state": "completed", 00:20:44.791 "digest": "sha384", 00:20:44.791 "dhgroup": "ffdhe6144" 00:20:44.791 } 00:20:44.791 } 00:20:44.791 ]' 00:20:44.791 11:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:44.791 11:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:44.791 11:49:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:44.791 11:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:44.791 11:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:44.791 11:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:44.791 11:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:44.791 11:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:45.049 11:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yjc2OWRkNzE0NmUyOTQzYWFjNDBhZmIyYTg2YjNiMmViNmE5NTk0MTY2MmUwNmU1WKPGtw==: --dhchap-ctrl-secret DHHC-1:01:NDM3MjAyMTY5ZGZmOTQ4ZmZjYjQyMDc5NGU2NGRiNWO9PzHW: 00:20:45.049 11:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:Yjc2OWRkNzE0NmUyOTQzYWFjNDBhZmIyYTg2YjNiMmViNmE5NTk0MTY2MmUwNmU1WKPGtw==: --dhchap-ctrl-secret DHHC-1:01:NDM3MjAyMTY5ZGZmOTQ4ZmZjYjQyMDc5NGU2NGRiNWO9PzHW: 00:20:46.046 11:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:46.046 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:46.046 11:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:46.046 11:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.046 11:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.046 11:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.046 11:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:46.046 11:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:46.046 11:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:46.335 11:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:20:46.335 11:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:46.335 11:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:46.335 11:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:46.335 11:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:46.335 11:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:46.335 11:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:46.335 11:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.335 11:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.335 11:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.335 11:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:46.335 11:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:46.335 11:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:46.902 00:20:46.902 11:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:46.902 11:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:46.902 11:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:47.470 11:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.470 11:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:47.470 11:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.470 11:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:47.470 11:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.470 11:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:47.470 { 00:20:47.470 "cntlid": 87, 00:20:47.470 "qid": 0, 00:20:47.470 "state": "enabled", 00:20:47.470 "thread": "nvmf_tgt_poll_group_000", 00:20:47.470 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:47.470 "listen_address": { 00:20:47.470 "trtype": "TCP", 00:20:47.470 "adrfam": "IPv4", 00:20:47.470 "traddr": "10.0.0.2", 00:20:47.470 "trsvcid": "4420" 00:20:47.470 }, 00:20:47.470 "peer_address": { 00:20:47.470 "trtype": "TCP", 00:20:47.470 "adrfam": "IPv4", 00:20:47.470 "traddr": "10.0.0.1", 00:20:47.470 "trsvcid": "43668" 00:20:47.470 }, 00:20:47.470 "auth": { 00:20:47.470 "state": "completed", 00:20:47.470 "digest": "sha384", 00:20:47.470 "dhgroup": "ffdhe6144" 00:20:47.470 } 00:20:47.470 } 00:20:47.470 ]' 00:20:47.470 11:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:47.470 11:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:47.470 11:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:47.470 11:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:47.470 11:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:47.470 11:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:47.470 11:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:47.470 11:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:47.728 11:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGM3NWRhODVjNzhlZGM4ZGNmZTM5Y2VmZDc5NGQ5YmE5NDAwZTUzMzQxNzZkMzQ0OWY4NDE1NDhkNzU4ZjJkOWsmC2o=: 00:20:47.728 11:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NGM3NWRhODVjNzhlZGM4ZGNmZTM5Y2VmZDc5NGQ5YmE5NDAwZTUzMzQxNzZkMzQ0OWY4NDE1NDhkNzU4ZjJkOWsmC2o=: 00:20:48.664 11:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:48.664 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:48.664 11:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:48.664 11:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.664 11:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.664 11:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.664 11:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:48.664 11:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:48.664 11:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:48.664 11:49:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:48.923 11:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:20:48.923 11:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:48.923 11:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:48.923 11:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:48.923 11:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:48.923 11:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:48.923 11:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:48.923 11:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.923 11:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.923 11:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.923 11:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:48.923 11:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:48.923 11:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:49.861 00:20:49.861 11:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:49.861 11:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:49.861 11:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:50.119 11:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.119 11:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:50.119 11:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.119 11:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.119 11:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.119 11:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:50.119 { 00:20:50.119 "cntlid": 89, 00:20:50.119 "qid": 0, 00:20:50.119 "state": "enabled", 00:20:50.119 "thread": "nvmf_tgt_poll_group_000", 00:20:50.119 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:50.119 "listen_address": { 00:20:50.119 "trtype": "TCP", 00:20:50.119 "adrfam": "IPv4", 00:20:50.119 "traddr": "10.0.0.2", 00:20:50.119 
"trsvcid": "4420" 00:20:50.119 }, 00:20:50.119 "peer_address": { 00:20:50.119 "trtype": "TCP", 00:20:50.119 "adrfam": "IPv4", 00:20:50.119 "traddr": "10.0.0.1", 00:20:50.119 "trsvcid": "43698" 00:20:50.119 }, 00:20:50.119 "auth": { 00:20:50.119 "state": "completed", 00:20:50.119 "digest": "sha384", 00:20:50.119 "dhgroup": "ffdhe8192" 00:20:50.119 } 00:20:50.119 } 00:20:50.119 ]' 00:20:50.119 11:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:50.377 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:50.377 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:50.377 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:50.377 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:50.377 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:50.377 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:50.377 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:50.635 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGYyODAwNzM1ZWMwMDIwMjI2MDcyMTBiNDQxN2M3MTlhZTFiM2E2YTMzZThjNzExegqYIw==: --dhchap-ctrl-secret DHHC-1:03:ZDM0ZWZiNjYxZjk1ZmZmZTUzMjI5NjBiYzU1ZDFkMzcwMWI4ZGQyNGU2M2E5YTVhNTFlMDhhZjAwNmEzMGYwMnqVypg=: 00:20:50.635 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MGYyODAwNzM1ZWMwMDIwMjI2MDcyMTBiNDQxN2M3MTlhZTFiM2E2YTMzZThjNzExegqYIw==: --dhchap-ctrl-secret DHHC-1:03:ZDM0ZWZiNjYxZjk1ZmZmZTUzMjI5NjBiYzU1ZDFkMzcwMWI4ZGQyNGU2M2E5YTVhNTFlMDhhZjAwNmEzMGYwMnqVypg=: 00:20:51.572 11:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:51.572 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:51.572 11:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:51.572 11:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.572 11:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.572 11:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.572 11:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:51.572 11:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:51.572 11:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:51.830 11:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:20:51.830 11:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:51.830 11:49:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:51.830 11:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:51.830 11:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:51.830 11:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:51.830 11:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:51.830 11:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.830 11:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.830 11:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.830 11:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:51.830 11:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:51.830 11:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:52.763 00:20:52.763 11:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:52.763 11:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:52.763 11:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:53.021 11:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.021 11:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:53.021 11:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.021 11:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.021 11:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.021 11:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:53.021 { 00:20:53.021 "cntlid": 91, 00:20:53.021 "qid": 0, 00:20:53.021 "state": "enabled", 00:20:53.021 "thread": "nvmf_tgt_poll_group_000", 00:20:53.021 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:53.021 "listen_address": { 00:20:53.021 "trtype": "TCP", 00:20:53.021 "adrfam": "IPv4", 00:20:53.021 "traddr": "10.0.0.2", 00:20:53.021 "trsvcid": "4420" 00:20:53.021 }, 00:20:53.021 "peer_address": { 00:20:53.021 "trtype": "TCP", 00:20:53.021 "adrfam": "IPv4", 00:20:53.021 "traddr": "10.0.0.1", 00:20:53.021 "trsvcid": "43744" 00:20:53.021 }, 00:20:53.021 "auth": { 00:20:53.021 "state": "completed", 00:20:53.021 "digest": "sha384", 00:20:53.021 "dhgroup": "ffdhe8192" 00:20:53.021 } 00:20:53.021 } 00:20:53.021 ]' 00:20:53.021 11:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:53.021 11:49:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:53.021 11:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:53.278 11:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:53.278 11:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:53.278 11:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:53.279 11:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:53.279 11:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:53.535 11:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmRhOGZjOTc5OTczYWY4NjBkZmQ3MWZmZjE1MzhiNTZf9teL: --dhchap-ctrl-secret DHHC-1:02:YWZiYzQwZThiZjZhNGQ0NDJkMjMyNzUxMzMyMmM1NTcyOGQ4OGJjZDgxZWM1NzBmtzwEkQ==: 00:20:53.535 11:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YmRhOGZjOTc5OTczYWY4NjBkZmQ3MWZmZjE1MzhiNTZf9teL: --dhchap-ctrl-secret DHHC-1:02:YWZiYzQwZThiZjZhNGQ0NDJkMjMyNzUxMzMyMmM1NTcyOGQ4OGJjZDgxZWM1NzBmtzwEkQ==: 00:20:54.468 11:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:54.468 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:54.468 11:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:54.468 11:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.468 11:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.468 11:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.468 11:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:54.468 11:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:54.468 11:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:54.725 11:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:20:54.725 11:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:54.725 11:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:54.726 11:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:54.726 11:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:54.726 11:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:54.726 11:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:20:54.726 11:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.726 11:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.726 11:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.726 11:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:54.726 11:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:54.726 11:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:55.658 00:20:55.658 11:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:55.658 11:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:55.658 11:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:55.915 11:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.915 11:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:55.915 11:49:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.916 11:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.916 11:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.916 11:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:55.916 { 00:20:55.916 "cntlid": 93, 00:20:55.916 "qid": 0, 00:20:55.916 "state": "enabled", 00:20:55.916 "thread": "nvmf_tgt_poll_group_000", 00:20:55.916 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:55.916 "listen_address": { 00:20:55.916 "trtype": "TCP", 00:20:55.916 "adrfam": "IPv4", 00:20:55.916 "traddr": "10.0.0.2", 00:20:55.916 "trsvcid": "4420" 00:20:55.916 }, 00:20:55.916 "peer_address": { 00:20:55.916 "trtype": "TCP", 00:20:55.916 "adrfam": "IPv4", 00:20:55.916 "traddr": "10.0.0.1", 00:20:55.916 "trsvcid": "50750" 00:20:55.916 }, 00:20:55.916 "auth": { 00:20:55.916 "state": "completed", 00:20:55.916 "digest": "sha384", 00:20:55.916 "dhgroup": "ffdhe8192" 00:20:55.916 } 00:20:55.916 } 00:20:55.916 ]' 00:20:55.916 11:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:55.916 11:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:55.916 11:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:55.916 11:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:55.916 11:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:56.173 11:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:56.173 11:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:56.173 11:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:56.431 11:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yjc2OWRkNzE0NmUyOTQzYWFjNDBhZmIyYTg2YjNiMmViNmE5NTk0MTY2MmUwNmU1WKPGtw==: --dhchap-ctrl-secret DHHC-1:01:NDM3MjAyMTY5ZGZmOTQ4ZmZjYjQyMDc5NGU2NGRiNWO9PzHW: 00:20:56.431 11:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:Yjc2OWRkNzE0NmUyOTQzYWFjNDBhZmIyYTg2YjNiMmViNmE5NTk0MTY2MmUwNmU1WKPGtw==: --dhchap-ctrl-secret DHHC-1:01:NDM3MjAyMTY5ZGZmOTQ4ZmZjYjQyMDc5NGU2NGRiNWO9PzHW: 00:20:57.365 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:57.365 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:57.365 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:57.365 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.365 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.365 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.365 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:57.365 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:57.365 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:57.624 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:20:57.624 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:57.624 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:57.624 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:57.624 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:57.624 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:57.624 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:57.624 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.624 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.624 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.624 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:57.624 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:57.624 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:58.556 00:20:58.556 11:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:58.556 11:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:58.556 11:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:58.814 11:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.814 11:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:58.814 11:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.814 11:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.814 11:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.814 11:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:58.814 { 00:20:58.814 "cntlid": 95, 00:20:58.814 "qid": 0, 00:20:58.814 "state": "enabled", 00:20:58.814 "thread": "nvmf_tgt_poll_group_000", 00:20:58.814 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:58.814 "listen_address": { 00:20:58.814 "trtype": "TCP", 00:20:58.814 "adrfam": 
"IPv4", 00:20:58.814 "traddr": "10.0.0.2", 00:20:58.815 "trsvcid": "4420" 00:20:58.815 }, 00:20:58.815 "peer_address": { 00:20:58.815 "trtype": "TCP", 00:20:58.815 "adrfam": "IPv4", 00:20:58.815 "traddr": "10.0.0.1", 00:20:58.815 "trsvcid": "50784" 00:20:58.815 }, 00:20:58.815 "auth": { 00:20:58.815 "state": "completed", 00:20:58.815 "digest": "sha384", 00:20:58.815 "dhgroup": "ffdhe8192" 00:20:58.815 } 00:20:58.815 } 00:20:58.815 ]' 00:20:58.815 11:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:58.815 11:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:58.815 11:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:59.073 11:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:59.073 11:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:59.073 11:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:59.073 11:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:59.073 11:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:59.332 11:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGM3NWRhODVjNzhlZGM4ZGNmZTM5Y2VmZDc5NGQ5YmE5NDAwZTUzMzQxNzZkMzQ0OWY4NDE1NDhkNzU4ZjJkOWsmC2o=: 00:20:59.332 11:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NGM3NWRhODVjNzhlZGM4ZGNmZTM5Y2VmZDc5NGQ5YmE5NDAwZTUzMzQxNzZkMzQ0OWY4NDE1NDhkNzU4ZjJkOWsmC2o=: 00:21:00.265 11:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:00.266 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:00.266 11:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:00.266 11:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.266 11:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.266 11:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.266 11:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:21:00.266 11:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:00.266 11:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:00.266 11:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:00.266 11:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:00.523 11:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:21:00.523 11:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:00.523 
11:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:00.523 11:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:00.523 11:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:00.523 11:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:00.523 11:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:00.523 11:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.523 11:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.523 11:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.524 11:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:00.524 11:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:00.524 11:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:01.089 00:21:01.089 11:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:01.089 11:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:01.089 11:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:01.347 11:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.347 11:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:01.347 11:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.347 11:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.347 11:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.347 11:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:01.347 { 00:21:01.347 "cntlid": 97, 00:21:01.347 "qid": 0, 00:21:01.347 "state": "enabled", 00:21:01.347 "thread": "nvmf_tgt_poll_group_000", 00:21:01.347 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:01.347 "listen_address": { 00:21:01.347 "trtype": "TCP", 00:21:01.347 "adrfam": "IPv4", 00:21:01.347 "traddr": "10.0.0.2", 00:21:01.347 "trsvcid": "4420" 00:21:01.347 }, 00:21:01.347 "peer_address": { 00:21:01.347 "trtype": "TCP", 00:21:01.347 "adrfam": "IPv4", 00:21:01.347 "traddr": "10.0.0.1", 00:21:01.347 "trsvcid": "50808" 00:21:01.347 }, 00:21:01.347 "auth": { 00:21:01.347 "state": "completed", 00:21:01.347 "digest": "sha512", 00:21:01.347 "dhgroup": "null" 00:21:01.347 } 00:21:01.347 } 00:21:01.347 ]' 00:21:01.347 11:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:01.347 11:49:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:01.347 11:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:01.347 11:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:01.347 11:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:01.347 11:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:01.347 11:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:01.347 11:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:01.605 11:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGYyODAwNzM1ZWMwMDIwMjI2MDcyMTBiNDQxN2M3MTlhZTFiM2E2YTMzZThjNzExegqYIw==: --dhchap-ctrl-secret DHHC-1:03:ZDM0ZWZiNjYxZjk1ZmZmZTUzMjI5NjBiYzU1ZDFkMzcwMWI4ZGQyNGU2M2E5YTVhNTFlMDhhZjAwNmEzMGYwMnqVypg=: 00:21:01.605 11:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MGYyODAwNzM1ZWMwMDIwMjI2MDcyMTBiNDQxN2M3MTlhZTFiM2E2YTMzZThjNzExegqYIw==: --dhchap-ctrl-secret DHHC-1:03:ZDM0ZWZiNjYxZjk1ZmZmZTUzMjI5NjBiYzU1ZDFkMzcwMWI4ZGQyNGU2M2E5YTVhNTFlMDhhZjAwNmEzMGYwMnqVypg=: 00:21:02.979 11:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:02.979 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:02.979 11:49:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:02.979 11:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.979 11:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.979 11:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.979 11:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:02.979 11:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:02.979 11:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:02.979 11:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:21:02.979 11:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:02.979 11:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:02.979 11:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:02.979 11:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:02.979 11:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:02.979 11:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:02.979 11:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.979 11:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.979 11:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.979 11:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:02.979 11:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:02.979 11:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:03.544 00:21:03.544 11:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:03.544 11:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:03.544 11:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:03.544 11:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:03.544 11:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:03.544 11:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.544 11:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.801 11:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.801 11:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:03.801 { 00:21:03.801 "cntlid": 99, 00:21:03.801 "qid": 0, 00:21:03.801 "state": "enabled", 00:21:03.801 "thread": "nvmf_tgt_poll_group_000", 00:21:03.801 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:03.801 "listen_address": { 00:21:03.801 "trtype": "TCP", 00:21:03.801 "adrfam": "IPv4", 00:21:03.801 "traddr": "10.0.0.2", 00:21:03.801 "trsvcid": "4420" 00:21:03.801 }, 00:21:03.801 "peer_address": { 00:21:03.801 "trtype": "TCP", 00:21:03.801 "adrfam": "IPv4", 00:21:03.801 "traddr": "10.0.0.1", 00:21:03.801 "trsvcid": "50826" 00:21:03.801 }, 00:21:03.801 "auth": { 00:21:03.801 "state": "completed", 00:21:03.801 "digest": "sha512", 00:21:03.801 "dhgroup": "null" 00:21:03.801 } 00:21:03.801 } 00:21:03.802 ]' 00:21:03.802 11:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:03.802 11:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:03.802 11:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:03.802 11:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:03.802 11:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:03.802 11:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:03.802 
11:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:03.802 11:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:04.059 11:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmRhOGZjOTc5OTczYWY4NjBkZmQ3MWZmZjE1MzhiNTZf9teL: --dhchap-ctrl-secret DHHC-1:02:YWZiYzQwZThiZjZhNGQ0NDJkMjMyNzUxMzMyMmM1NTcyOGQ4OGJjZDgxZWM1NzBmtzwEkQ==: 00:21:04.060 11:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YmRhOGZjOTc5OTczYWY4NjBkZmQ3MWZmZjE1MzhiNTZf9teL: --dhchap-ctrl-secret DHHC-1:02:YWZiYzQwZThiZjZhNGQ0NDJkMjMyNzUxMzMyMmM1NTcyOGQ4OGJjZDgxZWM1NzBmtzwEkQ==: 00:21:04.992 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:04.992 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:04.992 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:04.992 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.992 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.992 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.992 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:04.992 
11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:04.992 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:05.250 11:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:21:05.250 11:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:05.250 11:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:05.250 11:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:05.250 11:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:05.250 11:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:05.250 11:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:05.250 11:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.250 11:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.250 11:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.250 11:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:05.250 11:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:05.250 11:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:05.508 00:21:05.508 11:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:05.508 11:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:05.508 11:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:06.075 11:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.075 11:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:06.075 11:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.075 11:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.075 11:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.075 11:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:06.075 { 00:21:06.075 "cntlid": 101, 00:21:06.075 "qid": 0, 00:21:06.075 "state": "enabled", 00:21:06.075 "thread": "nvmf_tgt_poll_group_000", 00:21:06.075 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:06.075 "listen_address": { 00:21:06.075 "trtype": "TCP", 00:21:06.075 "adrfam": "IPv4", 00:21:06.075 "traddr": "10.0.0.2", 00:21:06.075 "trsvcid": "4420" 00:21:06.075 }, 00:21:06.075 "peer_address": { 00:21:06.075 "trtype": "TCP", 00:21:06.075 "adrfam": "IPv4", 00:21:06.075 "traddr": "10.0.0.1", 00:21:06.075 "trsvcid": "38524" 00:21:06.075 }, 00:21:06.075 "auth": { 00:21:06.075 "state": "completed", 00:21:06.075 "digest": "sha512", 00:21:06.075 "dhgroup": "null" 00:21:06.075 } 00:21:06.075 } 00:21:06.075 ]' 00:21:06.075 11:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:06.075 11:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:06.075 11:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:06.075 11:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:06.075 11:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:06.075 11:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:06.075 11:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:06.075 11:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:06.333 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yjc2OWRkNzE0NmUyOTQzYWFjNDBhZmIyYTg2YjNiMmViNmE5NTk0MTY2MmUwNmU1WKPGtw==: --dhchap-ctrl-secret DHHC-1:01:NDM3MjAyMTY5ZGZmOTQ4ZmZjYjQyMDc5NGU2NGRiNWO9PzHW: 00:21:06.333 11:49:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:Yjc2OWRkNzE0NmUyOTQzYWFjNDBhZmIyYTg2YjNiMmViNmE5NTk0MTY2MmUwNmU1WKPGtw==: --dhchap-ctrl-secret DHHC-1:01:NDM3MjAyMTY5ZGZmOTQ4ZmZjYjQyMDc5NGU2NGRiNWO9PzHW: 00:21:07.266 11:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:07.266 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:07.266 11:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:07.266 11:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.266 11:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.266 11:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.266 11:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:07.266 11:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:07.266 11:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:07.524 11:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:21:07.524 11:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup 
key ckey qpairs 00:21:07.524 11:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:07.524 11:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:07.524 11:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:07.524 11:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:07.524 11:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:07.524 11:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.524 11:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.524 11:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.524 11:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:07.524 11:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:07.524 11:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:07.782 00:21:07.782 11:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:07.782 
11:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:07.782 11:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:08.039 11:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:08.040 11:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:08.040 11:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.040 11:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.040 11:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.040 11:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:08.040 { 00:21:08.040 "cntlid": 103, 00:21:08.040 "qid": 0, 00:21:08.040 "state": "enabled", 00:21:08.040 "thread": "nvmf_tgt_poll_group_000", 00:21:08.040 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:08.040 "listen_address": { 00:21:08.040 "trtype": "TCP", 00:21:08.040 "adrfam": "IPv4", 00:21:08.040 "traddr": "10.0.0.2", 00:21:08.040 "trsvcid": "4420" 00:21:08.040 }, 00:21:08.040 "peer_address": { 00:21:08.040 "trtype": "TCP", 00:21:08.040 "adrfam": "IPv4", 00:21:08.040 "traddr": "10.0.0.1", 00:21:08.040 "trsvcid": "38542" 00:21:08.040 }, 00:21:08.040 "auth": { 00:21:08.040 "state": "completed", 00:21:08.040 "digest": "sha512", 00:21:08.040 "dhgroup": "null" 00:21:08.040 } 00:21:08.040 } 00:21:08.040 ]' 00:21:08.040 11:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:08.297 11:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ 
sha512 == \s\h\a\5\1\2 ]] 00:21:08.297 11:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:08.297 11:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:08.297 11:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:08.297 11:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:08.297 11:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:08.297 11:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:08.555 11:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGM3NWRhODVjNzhlZGM4ZGNmZTM5Y2VmZDc5NGQ5YmE5NDAwZTUzMzQxNzZkMzQ0OWY4NDE1NDhkNzU4ZjJkOWsmC2o=: 00:21:08.555 11:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NGM3NWRhODVjNzhlZGM4ZGNmZTM5Y2VmZDc5NGQ5YmE5NDAwZTUzMzQxNzZkMzQ0OWY4NDE1NDhkNzU4ZjJkOWsmC2o=: 00:21:09.488 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:09.488 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:09.488 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:09.488 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.488 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.488 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.488 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:09.488 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:09.488 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:09.488 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:09.745 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:21:09.745 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:09.745 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:09.745 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:09.745 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:09.745 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:09.745 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:09.745 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.745 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.745 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.745 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:09.745 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:09.745 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:10.003 00:21:10.261 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:10.261 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:10.261 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:10.519 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:10.519 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:10.519 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:21:10.519 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.519 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.519 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:10.519 { 00:21:10.519 "cntlid": 105, 00:21:10.519 "qid": 0, 00:21:10.519 "state": "enabled", 00:21:10.519 "thread": "nvmf_tgt_poll_group_000", 00:21:10.519 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:10.519 "listen_address": { 00:21:10.519 "trtype": "TCP", 00:21:10.519 "adrfam": "IPv4", 00:21:10.519 "traddr": "10.0.0.2", 00:21:10.519 "trsvcid": "4420" 00:21:10.519 }, 00:21:10.519 "peer_address": { 00:21:10.519 "trtype": "TCP", 00:21:10.519 "adrfam": "IPv4", 00:21:10.519 "traddr": "10.0.0.1", 00:21:10.519 "trsvcid": "38564" 00:21:10.519 }, 00:21:10.519 "auth": { 00:21:10.519 "state": "completed", 00:21:10.519 "digest": "sha512", 00:21:10.519 "dhgroup": "ffdhe2048" 00:21:10.519 } 00:21:10.519 } 00:21:10.519 ]' 00:21:10.519 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:10.519 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:10.519 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:10.519 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:10.519 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:10.519 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:10.519 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:10.519 11:49:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:10.778 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGYyODAwNzM1ZWMwMDIwMjI2MDcyMTBiNDQxN2M3MTlhZTFiM2E2YTMzZThjNzExegqYIw==: --dhchap-ctrl-secret DHHC-1:03:ZDM0ZWZiNjYxZjk1ZmZmZTUzMjI5NjBiYzU1ZDFkMzcwMWI4ZGQyNGU2M2E5YTVhNTFlMDhhZjAwNmEzMGYwMnqVypg=: 00:21:10.778 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MGYyODAwNzM1ZWMwMDIwMjI2MDcyMTBiNDQxN2M3MTlhZTFiM2E2YTMzZThjNzExegqYIw==: --dhchap-ctrl-secret DHHC-1:03:ZDM0ZWZiNjYxZjk1ZmZmZTUzMjI5NjBiYzU1ZDFkMzcwMWI4ZGQyNGU2M2E5YTVhNTFlMDhhZjAwNmEzMGYwMnqVypg=: 00:21:11.713 11:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:11.713 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:11.713 11:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:11.713 11:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.713 11:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.713 11:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.713 11:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:11.713 11:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:11.713 11:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:11.971 11:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:21:11.971 11:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:11.971 11:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:11.971 11:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:11.971 11:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:11.971 11:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:11.971 11:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:11.971 11:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.971 11:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.971 11:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.971 11:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:11.971 11:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:11.971 11:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:12.537 00:21:12.537 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:12.537 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:12.537 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:12.795 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:12.795 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:12.795 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.795 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.795 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.795 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:12.795 { 00:21:12.795 "cntlid": 107, 00:21:12.795 "qid": 0, 00:21:12.795 "state": "enabled", 00:21:12.795 "thread": "nvmf_tgt_poll_group_000", 00:21:12.795 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:12.795 
"listen_address": { 00:21:12.795 "trtype": "TCP", 00:21:12.795 "adrfam": "IPv4", 00:21:12.795 "traddr": "10.0.0.2", 00:21:12.795 "trsvcid": "4420" 00:21:12.795 }, 00:21:12.795 "peer_address": { 00:21:12.795 "trtype": "TCP", 00:21:12.795 "adrfam": "IPv4", 00:21:12.795 "traddr": "10.0.0.1", 00:21:12.795 "trsvcid": "38592" 00:21:12.795 }, 00:21:12.795 "auth": { 00:21:12.795 "state": "completed", 00:21:12.795 "digest": "sha512", 00:21:12.795 "dhgroup": "ffdhe2048" 00:21:12.795 } 00:21:12.795 } 00:21:12.795 ]' 00:21:12.795 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:12.796 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:12.796 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:12.796 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:12.796 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:12.796 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:12.796 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:12.796 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:13.054 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmRhOGZjOTc5OTczYWY4NjBkZmQ3MWZmZjE1MzhiNTZf9teL: --dhchap-ctrl-secret DHHC-1:02:YWZiYzQwZThiZjZhNGQ0NDJkMjMyNzUxMzMyMmM1NTcyOGQ4OGJjZDgxZWM1NzBmtzwEkQ==: 00:21:13.054 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YmRhOGZjOTc5OTczYWY4NjBkZmQ3MWZmZjE1MzhiNTZf9teL: --dhchap-ctrl-secret DHHC-1:02:YWZiYzQwZThiZjZhNGQ0NDJkMjMyNzUxMzMyMmM1NTcyOGQ4OGJjZDgxZWM1NzBmtzwEkQ==: 00:21:14.426 11:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:14.426 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:14.426 11:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:14.426 11:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.426 11:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.426 11:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.426 11:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:14.426 11:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:14.426 11:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:14.426 11:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:21:14.426 11:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:14.426 11:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:21:14.426 11:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:14.426 11:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:14.426 11:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:14.426 11:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:14.426 11:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.426 11:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.426 11:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.426 11:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:14.426 11:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:14.426 11:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:14.992 00:21:14.992 11:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:21:14.992 11:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:14.992 11:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:15.250 11:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.250 11:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:15.250 11:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.250 11:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.250 11:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.250 11:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:15.250 { 00:21:15.250 "cntlid": 109, 00:21:15.250 "qid": 0, 00:21:15.250 "state": "enabled", 00:21:15.250 "thread": "nvmf_tgt_poll_group_000", 00:21:15.250 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:15.250 "listen_address": { 00:21:15.250 "trtype": "TCP", 00:21:15.250 "adrfam": "IPv4", 00:21:15.250 "traddr": "10.0.0.2", 00:21:15.250 "trsvcid": "4420" 00:21:15.250 }, 00:21:15.250 "peer_address": { 00:21:15.250 "trtype": "TCP", 00:21:15.250 "adrfam": "IPv4", 00:21:15.250 "traddr": "10.0.0.1", 00:21:15.250 "trsvcid": "41118" 00:21:15.250 }, 00:21:15.250 "auth": { 00:21:15.250 "state": "completed", 00:21:15.250 "digest": "sha512", 00:21:15.250 "dhgroup": "ffdhe2048" 00:21:15.250 } 00:21:15.250 } 00:21:15.250 ]' 00:21:15.250 11:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:15.250 11:49:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:15.250 11:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:15.250 11:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:15.250 11:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:15.250 11:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:15.250 11:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:15.250 11:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:15.508 11:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yjc2OWRkNzE0NmUyOTQzYWFjNDBhZmIyYTg2YjNiMmViNmE5NTk0MTY2MmUwNmU1WKPGtw==: --dhchap-ctrl-secret DHHC-1:01:NDM3MjAyMTY5ZGZmOTQ4ZmZjYjQyMDc5NGU2NGRiNWO9PzHW: 00:21:15.508 11:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:Yjc2OWRkNzE0NmUyOTQzYWFjNDBhZmIyYTg2YjNiMmViNmE5NTk0MTY2MmUwNmU1WKPGtw==: --dhchap-ctrl-secret DHHC-1:01:NDM3MjAyMTY5ZGZmOTQ4ZmZjYjQyMDc5NGU2NGRiNWO9PzHW: 00:21:16.459 11:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:16.751 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:16.751 11:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:16.751 11:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.751 11:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.751 11:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.751 11:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:16.751 11:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:16.751 11:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:16.751 11:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:21:16.751 11:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:16.751 11:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:16.751 11:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:16.751 11:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:16.751 11:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:16.751 11:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:16.751 11:49:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.751 11:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.014 11:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.014 11:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:17.014 11:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:17.014 11:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:17.271 00:21:17.271 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:17.271 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:17.271 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:17.529 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:17.529 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:17.529 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.529 11:49:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.529 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.529 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:17.529 { 00:21:17.529 "cntlid": 111, 00:21:17.529 "qid": 0, 00:21:17.529 "state": "enabled", 00:21:17.529 "thread": "nvmf_tgt_poll_group_000", 00:21:17.529 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:17.529 "listen_address": { 00:21:17.529 "trtype": "TCP", 00:21:17.529 "adrfam": "IPv4", 00:21:17.529 "traddr": "10.0.0.2", 00:21:17.529 "trsvcid": "4420" 00:21:17.529 }, 00:21:17.529 "peer_address": { 00:21:17.529 "trtype": "TCP", 00:21:17.529 "adrfam": "IPv4", 00:21:17.529 "traddr": "10.0.0.1", 00:21:17.529 "trsvcid": "41160" 00:21:17.529 }, 00:21:17.529 "auth": { 00:21:17.529 "state": "completed", 00:21:17.529 "digest": "sha512", 00:21:17.529 "dhgroup": "ffdhe2048" 00:21:17.529 } 00:21:17.529 } 00:21:17.529 ]' 00:21:17.529 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:17.529 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:17.529 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:17.529 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:17.529 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:17.529 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:17.529 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:17.529 11:49:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:18.095 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGM3NWRhODVjNzhlZGM4ZGNmZTM5Y2VmZDc5NGQ5YmE5NDAwZTUzMzQxNzZkMzQ0OWY4NDE1NDhkNzU4ZjJkOWsmC2o=: 00:21:18.095 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NGM3NWRhODVjNzhlZGM4ZGNmZTM5Y2VmZDc5NGQ5YmE5NDAwZTUzMzQxNzZkMzQ0OWY4NDE1NDhkNzU4ZjJkOWsmC2o=: 00:21:19.028 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:19.028 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:19.028 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:19.028 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.028 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.028 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.028 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:19.028 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:19.028 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 
--dhchap-dhgroups ffdhe3072 00:21:19.028 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:19.285 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:21:19.285 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:19.285 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:19.285 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:19.285 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:19.285 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:19.285 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:19.285 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.285 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.285 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.285 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:19.285 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:19.285 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:19.543 00:21:19.543 11:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:19.543 11:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:19.543 11:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:19.801 11:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:19.801 11:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:19.801 11:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.801 11:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.801 11:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.801 11:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:19.801 { 00:21:19.801 "cntlid": 113, 00:21:19.801 "qid": 0, 00:21:19.801 "state": "enabled", 00:21:19.801 "thread": "nvmf_tgt_poll_group_000", 00:21:19.801 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:19.801 "listen_address": { 
00:21:19.801 "trtype": "TCP", 00:21:19.801 "adrfam": "IPv4", 00:21:19.801 "traddr": "10.0.0.2", 00:21:19.801 "trsvcid": "4420" 00:21:19.801 }, 00:21:19.801 "peer_address": { 00:21:19.801 "trtype": "TCP", 00:21:19.801 "adrfam": "IPv4", 00:21:19.801 "traddr": "10.0.0.1", 00:21:19.801 "trsvcid": "41192" 00:21:19.801 }, 00:21:19.801 "auth": { 00:21:19.801 "state": "completed", 00:21:19.801 "digest": "sha512", 00:21:19.801 "dhgroup": "ffdhe3072" 00:21:19.801 } 00:21:19.801 } 00:21:19.801 ]' 00:21:19.801 11:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:19.801 11:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:19.801 11:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:20.059 11:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:20.059 11:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:20.059 11:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:20.059 11:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:20.059 11:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:20.317 11:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGYyODAwNzM1ZWMwMDIwMjI2MDcyMTBiNDQxN2M3MTlhZTFiM2E2YTMzZThjNzExegqYIw==: --dhchap-ctrl-secret DHHC-1:03:ZDM0ZWZiNjYxZjk1ZmZmZTUzMjI5NjBiYzU1ZDFkMzcwMWI4ZGQyNGU2M2E5YTVhNTFlMDhhZjAwNmEzMGYwMnqVypg=: 00:21:20.317 11:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MGYyODAwNzM1ZWMwMDIwMjI2MDcyMTBiNDQxN2M3MTlhZTFiM2E2YTMzZThjNzExegqYIw==: --dhchap-ctrl-secret DHHC-1:03:ZDM0ZWZiNjYxZjk1ZmZmZTUzMjI5NjBiYzU1ZDFkMzcwMWI4ZGQyNGU2M2E5YTVhNTFlMDhhZjAwNmEzMGYwMnqVypg=: 00:21:21.248 11:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:21.248 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:21.248 11:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:21.248 11:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.248 11:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.248 11:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.248 11:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:21.248 11:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:21.248 11:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:21.506 11:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:21:21.507 11:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
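Each pass in the log follows the same cycle: restrict the host's DH-HMAC-CHAP options, register the host key pair on the subsystem, attach (which forces authentication), verify, then tear down. A hypothetical bash sketch of that loop, with the rpc.py path, socket, and NQNs copied from the log — commands are echoed rather than executed since no live SPDK target is assumed, and it is simplified (the real target/auth.sh omits the controller key for indices with no ckey defined):

```shell
#!/usr/bin/env bash
# Sketch of the per-dhgroup/per-key cycle seen in this log. Hypothetical:
# 'run' echoes each command instead of executing it, so the flow can be
# inspected without a running target.
rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock"
hostnqn="nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55"
subnqn="nqn.2024-03.io.spdk:cnode0"

run() { printf '%s\n' "$*"; }   # swap the body for "$@" to execute for real

for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096; do
  for keyid in 0 1 2 3; do
    # Restrict the host to one digest/dhgroup combination.
    run $rpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
    # Register the host with its DH-HMAC-CHAP key pair on the subsystem.
    run $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
    # Attach; the controller only comes up if authentication completes.
    run $rpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n "$subnqn" -b nvme0 \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
    # Tear down before the next combination.
    run $rpc bdev_nvme_detach_controller nvme0
    run $rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
  done
done
```

The `run` indirection is the design choice here: the same loop body serves as documentation when echoing and as a driver when executing.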
00:21:21.507 11:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:21.507 11:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:21.507 11:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:21.507 11:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:21.507 11:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:21.507 11:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.507 11:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.507 11:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.507 11:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:21.507 11:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:21.507 11:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:21.764 00:21:22.022 11:49:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:22.022 11:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:22.022 11:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:22.279 11:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:22.279 11:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:22.279 11:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.279 11:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.279 11:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.279 11:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:22.279 { 00:21:22.279 "cntlid": 115, 00:21:22.279 "qid": 0, 00:21:22.279 "state": "enabled", 00:21:22.279 "thread": "nvmf_tgt_poll_group_000", 00:21:22.280 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:22.280 "listen_address": { 00:21:22.280 "trtype": "TCP", 00:21:22.280 "adrfam": "IPv4", 00:21:22.280 "traddr": "10.0.0.2", 00:21:22.280 "trsvcid": "4420" 00:21:22.280 }, 00:21:22.280 "peer_address": { 00:21:22.280 "trtype": "TCP", 00:21:22.280 "adrfam": "IPv4", 00:21:22.280 "traddr": "10.0.0.1", 00:21:22.280 "trsvcid": "41232" 00:21:22.280 }, 00:21:22.280 "auth": { 00:21:22.280 "state": "completed", 00:21:22.280 "digest": "sha512", 00:21:22.280 "dhgroup": "ffdhe3072" 00:21:22.280 } 00:21:22.280 } 00:21:22.280 ]' 00:21:22.280 11:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # 
jq -r '.[0].auth.digest' 00:21:22.280 11:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:22.280 11:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:22.280 11:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:22.280 11:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:22.280 11:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:22.280 11:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:22.280 11:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:22.537 11:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmRhOGZjOTc5OTczYWY4NjBkZmQ3MWZmZjE1MzhiNTZf9teL: --dhchap-ctrl-secret DHHC-1:02:YWZiYzQwZThiZjZhNGQ0NDJkMjMyNzUxMzMyMmM1NTcyOGQ4OGJjZDgxZWM1NzBmtzwEkQ==: 00:21:22.537 11:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YmRhOGZjOTc5OTczYWY4NjBkZmQ3MWZmZjE1MzhiNTZf9teL: --dhchap-ctrl-secret DHHC-1:02:YWZiYzQwZThiZjZhNGQ0NDJkMjMyNzUxMzMyMmM1NTcyOGQ4OGJjZDgxZWM1NzBmtzwEkQ==: 00:21:23.476 11:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:23.476 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:23.476 11:49:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:23.476 11:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.476 11:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.476 11:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.476 11:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:23.476 11:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:23.476 11:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:24.040 11:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:21:24.040 11:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:24.040 11:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:24.040 11:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:24.040 11:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:24.040 11:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:24.040 11:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:24.040 11:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.040 11:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.040 11:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.040 11:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:24.040 11:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:24.040 11:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:24.300 00:21:24.300 11:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:24.300 11:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:24.300 11:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:24.558 11:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:24.558 11:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:24.558 11:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.558 11:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.558 11:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.558 11:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:24.558 { 00:21:24.558 "cntlid": 117, 00:21:24.558 "qid": 0, 00:21:24.558 "state": "enabled", 00:21:24.558 "thread": "nvmf_tgt_poll_group_000", 00:21:24.558 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:24.558 "listen_address": { 00:21:24.558 "trtype": "TCP", 00:21:24.558 "adrfam": "IPv4", 00:21:24.558 "traddr": "10.0.0.2", 00:21:24.558 "trsvcid": "4420" 00:21:24.558 }, 00:21:24.558 "peer_address": { 00:21:24.558 "trtype": "TCP", 00:21:24.558 "adrfam": "IPv4", 00:21:24.558 "traddr": "10.0.0.1", 00:21:24.558 "trsvcid": "45310" 00:21:24.558 }, 00:21:24.558 "auth": { 00:21:24.558 "state": "completed", 00:21:24.558 "digest": "sha512", 00:21:24.558 "dhgroup": "ffdhe3072" 00:21:24.558 } 00:21:24.558 } 00:21:24.558 ]' 00:21:24.558 11:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:24.558 11:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:24.558 11:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:24.816 11:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:24.816 11:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:24.816 11:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:21:24.816 11:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:24.816 11:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:25.073 11:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yjc2OWRkNzE0NmUyOTQzYWFjNDBhZmIyYTg2YjNiMmViNmE5NTk0MTY2MmUwNmU1WKPGtw==: --dhchap-ctrl-secret DHHC-1:01:NDM3MjAyMTY5ZGZmOTQ4ZmZjYjQyMDc5NGU2NGRiNWO9PzHW: 00:21:25.073 11:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:Yjc2OWRkNzE0NmUyOTQzYWFjNDBhZmIyYTg2YjNiMmViNmE5NTk0MTY2MmUwNmU1WKPGtw==: --dhchap-ctrl-secret DHHC-1:01:NDM3MjAyMTY5ZGZmOTQ4ZmZjYjQyMDc5NGU2NGRiNWO9PzHW: 00:21:26.006 11:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:26.006 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:26.006 11:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:26.006 11:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.006 11:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.006 11:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.006 11:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
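The qpair check repeated throughout the log dumps `nvmf_subsystem_get_qpairs` and compares the `auth` fields with `jq -r '.[0].auth.digest'` and friends. A dependency-free bash approximation of the same extraction, run against a sample qpair object shaped like the ones in this log (the suite itself uses jq; the grep/cut stand-in below is purely illustrative):

```shell
#!/usr/bin/env bash
# Hypothetical stand-in for the jq-based auth checks in target/auth.sh:
# pull auth.digest/dhgroup/state out of a qpair dump with grep/cut.
qpairs='[ { "cntlid": 113, "qid": 0, "state": "enabled",
            "auth": { "state": "completed", "digest": "sha512",
                      "dhgroup": "ffdhe3072" } } ]'

field() {  # field <name> -> first value of a "name": "value" pair
  grep -o "\"$1\": \"[^\"]*\"" <<<"$qpairs" | head -n1 | cut -d'"' -f4
}

digest=$(field digest)
dhgroup=$(field dhgroup)
# "state" appears twice (qpair state and auth state); take the auth one.
auth_state=$(grep -o '"auth": {.*' <<<"$qpairs" \
             | grep -o '"state": "[^"]*"' | cut -d'"' -f4)

if [[ $digest == sha512 && $dhgroup == ffdhe3072 && $auth_state == completed ]]; then
  echo "auth OK: $digest/$dhgroup"
else
  echo "auth mismatch"
fi
```

A `completed` auth state with the expected digest and dhgroup is exactly what each `[[ ... == ... ]]` triple in the log is asserting.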
00:21:26.006 11:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:26.006 11:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:26.265 11:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:21:26.265 11:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:26.265 11:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:26.265 11:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:26.265 11:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:26.265 11:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:26.265 11:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:26.265 11:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.265 11:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.265 11:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.265 11:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:26.265 11:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:26.265 11:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:26.831 00:21:26.831 11:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:26.831 11:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:26.831 11:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:27.089 11:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:27.089 11:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:27.089 11:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.089 11:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.089 11:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.089 11:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:27.089 { 00:21:27.089 "cntlid": 119, 00:21:27.089 "qid": 0, 00:21:27.089 "state": "enabled", 00:21:27.089 "thread": "nvmf_tgt_poll_group_000", 00:21:27.089 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:27.089 "listen_address": { 00:21:27.089 
"trtype": "TCP", 00:21:27.089 "adrfam": "IPv4", 00:21:27.089 "traddr": "10.0.0.2", 00:21:27.089 "trsvcid": "4420" 00:21:27.089 }, 00:21:27.089 "peer_address": { 00:21:27.089 "trtype": "TCP", 00:21:27.089 "adrfam": "IPv4", 00:21:27.089 "traddr": "10.0.0.1", 00:21:27.089 "trsvcid": "45322" 00:21:27.089 }, 00:21:27.089 "auth": { 00:21:27.089 "state": "completed", 00:21:27.089 "digest": "sha512", 00:21:27.089 "dhgroup": "ffdhe3072" 00:21:27.089 } 00:21:27.089 } 00:21:27.089 ]' 00:21:27.089 11:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:27.089 11:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:27.089 11:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:27.089 11:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:27.089 11:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:27.089 11:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:27.089 11:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:27.089 11:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:27.347 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGM3NWRhODVjNzhlZGM4ZGNmZTM5Y2VmZDc5NGQ5YmE5NDAwZTUzMzQxNzZkMzQ0OWY4NDE1NDhkNzU4ZjJkOWsmC2o=: 00:21:27.347 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NGM3NWRhODVjNzhlZGM4ZGNmZTM5Y2VmZDc5NGQ5YmE5NDAwZTUzMzQxNzZkMzQ0OWY4NDE1NDhkNzU4ZjJkOWsmC2o=: 00:21:28.280 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:28.280 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:28.280 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:28.280 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.280 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.280 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.280 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:28.280 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:28.280 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:28.280 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:28.846 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:21:28.846 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:28.846 11:49:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:21:28.846 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:21:28.846 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:21:28.846 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:28.846 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:28.846 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:28.846 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:28.846 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:28.846 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:28.846 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:28.846 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:29.104
00:21:29.104 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:29.104 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:29.104 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:29.362 11:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:29.362 11:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:29.362 11:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:29.362 11:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:29.362 11:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:29.362 11:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:29.362 {
00:21:29.362 "cntlid": 121,
00:21:29.362 "qid": 0,
00:21:29.362 "state": "enabled",
00:21:29.362 "thread": "nvmf_tgt_poll_group_000",
00:21:29.362 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:21:29.362 "listen_address": {
00:21:29.362 "trtype": "TCP",
00:21:29.362 "adrfam": "IPv4",
00:21:29.362 "traddr": "10.0.0.2",
00:21:29.362 "trsvcid": "4420"
00:21:29.362 },
00:21:29.362 "peer_address": {
00:21:29.362 "trtype": "TCP",
00:21:29.362 "adrfam": "IPv4",
00:21:29.362 "traddr": "10.0.0.1",
00:21:29.362 "trsvcid": "45346"
00:21:29.362 },
00:21:29.362 "auth": {
00:21:29.362 "state": "completed",
00:21:29.362 "digest": "sha512",
00:21:29.362 "dhgroup": "ffdhe4096"
00:21:29.362 }
00:21:29.362 }
00:21:29.362 ]'
00:21:29.362 11:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:29.362 11:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:21:29.362 11:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:29.620 11:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:21:29.620 11:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:29.620 11:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:29.620 11:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:29.620 11:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:29.878 11:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGYyODAwNzM1ZWMwMDIwMjI2MDcyMTBiNDQxN2M3MTlhZTFiM2E2YTMzZThjNzExegqYIw==: --dhchap-ctrl-secret DHHC-1:03:ZDM0ZWZiNjYxZjk1ZmZmZTUzMjI5NjBiYzU1ZDFkMzcwMWI4ZGQyNGU2M2E5YTVhNTFlMDhhZjAwNmEzMGYwMnqVypg=:
00:21:29.878 11:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MGYyODAwNzM1ZWMwMDIwMjI2MDcyMTBiNDQxN2M3MTlhZTFiM2E2YTMzZThjNzExegqYIw==: --dhchap-ctrl-secret DHHC-1:03:ZDM0ZWZiNjYxZjk1ZmZmZTUzMjI5NjBiYzU1ZDFkMzcwMWI4ZGQyNGU2M2E5YTVhNTFlMDhhZjAwNmEzMGYwMnqVypg=:
00:21:30.813 11:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:30.813 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:30.813 11:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:21:30.813 11:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:30.813 11:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:30.813 11:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:30.813 11:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:30.813 11:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:21:30.813 11:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:21:31.071 11:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1
00:21:31.071 11:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:31.071 11:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:21:31.072 11:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:21:31.072 11:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:21:31.072 11:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:31.072 11:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:31.072 11:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:31.072 11:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:31.072 11:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:31.072 11:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:31.072 11:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:31.072 11:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:31.638
00:21:31.638 11:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:31.638 11:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:31.638 11:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:31.897 11:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:31.897 11:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:31.897 11:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:31.897 11:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:31.897 11:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:31.897 11:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:31.897 {
00:21:31.897 "cntlid": 123,
00:21:31.897 "qid": 0,
00:21:31.897 "state": "enabled",
00:21:31.897 "thread": "nvmf_tgt_poll_group_000",
00:21:31.897 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:21:31.897 "listen_address": {
00:21:31.897 "trtype": "TCP",
00:21:31.897 "adrfam": "IPv4",
00:21:31.897 "traddr": "10.0.0.2",
00:21:31.897 "trsvcid": "4420"
00:21:31.897 },
00:21:31.897 "peer_address": {
00:21:31.897 "trtype": "TCP",
00:21:31.897 "adrfam": "IPv4",
00:21:31.897 "traddr": "10.0.0.1",
00:21:31.897 "trsvcid": "45360"
00:21:31.897 },
00:21:31.897 "auth": {
00:21:31.897 "state": "completed",
00:21:31.897 "digest": "sha512",
00:21:31.897 "dhgroup": "ffdhe4096"
00:21:31.897 }
00:21:31.897 }
00:21:31.897 ]'
00:21:31.897 11:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:31.897 11:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:21:31.897 11:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:31.897 11:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:21:31.897 11:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:31.897 11:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:31.897 11:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:31.897 11:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:32.155 11:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmRhOGZjOTc5OTczYWY4NjBkZmQ3MWZmZjE1MzhiNTZf9teL: --dhchap-ctrl-secret DHHC-1:02:YWZiYzQwZThiZjZhNGQ0NDJkMjMyNzUxMzMyMmM1NTcyOGQ4OGJjZDgxZWM1NzBmtzwEkQ==:
00:21:32.414 11:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YmRhOGZjOTc5OTczYWY4NjBkZmQ3MWZmZjE1MzhiNTZf9teL: --dhchap-ctrl-secret DHHC-1:02:YWZiYzQwZThiZjZhNGQ0NDJkMjMyNzUxMzMyMmM1NTcyOGQ4OGJjZDgxZWM1NzBmtzwEkQ==:
00:21:33.347 11:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:33.347 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:33.347 11:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:21:33.347 11:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:33.347 11:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:33.347 11:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:33.347 11:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:33.347 11:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:21:33.347 11:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:21:33.605 11:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2
00:21:33.605 11:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:33.605 11:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:21:33.605 11:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:21:33.605 11:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:21:33.605 11:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:33.605 11:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:33.605 11:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:33.605 11:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:33.605 11:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:33.605 11:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:33.605 11:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:33.605 11:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:33.862
00:21:33.862 11:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:33.862 11:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:33.862 11:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:34.121 11:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:34.121 11:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:34.121 11:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:34.121 11:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:34.121 11:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:34.121 11:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:34.121 {
00:21:34.121 "cntlid": 125,
00:21:34.121 "qid": 0,
00:21:34.121 "state": "enabled",
00:21:34.121 "thread": "nvmf_tgt_poll_group_000",
00:21:34.121 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:21:34.121 "listen_address": {
00:21:34.121 "trtype": "TCP",
00:21:34.121 "adrfam": "IPv4",
00:21:34.121 "traddr": "10.0.0.2",
00:21:34.121 "trsvcid": "4420"
00:21:34.121 },
00:21:34.121 "peer_address": {
00:21:34.121 "trtype": "TCP",
00:21:34.121 "adrfam": "IPv4",
00:21:34.121 "traddr": "10.0.0.1",
00:21:34.121 "trsvcid": "45386"
00:21:34.121 },
00:21:34.121 "auth": {
00:21:34.121 "state": "completed",
00:21:34.121 "digest": "sha512",
00:21:34.121 "dhgroup": "ffdhe4096"
00:21:34.121 }
00:21:34.121 }
00:21:34.121 ]'
00:21:34.121 11:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:34.380 11:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:21:34.380 11:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:34.380 11:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:21:34.380 11:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:34.380 11:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:34.380 11:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:34.380 11:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:34.638 11:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yjc2OWRkNzE0NmUyOTQzYWFjNDBhZmIyYTg2YjNiMmViNmE5NTk0MTY2MmUwNmU1WKPGtw==: --dhchap-ctrl-secret DHHC-1:01:NDM3MjAyMTY5ZGZmOTQ4ZmZjYjQyMDc5NGU2NGRiNWO9PzHW:
00:21:34.638 11:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:Yjc2OWRkNzE0NmUyOTQzYWFjNDBhZmIyYTg2YjNiMmViNmE5NTk0MTY2MmUwNmU1WKPGtw==: --dhchap-ctrl-secret DHHC-1:01:NDM3MjAyMTY5ZGZmOTQ4ZmZjYjQyMDc5NGU2NGRiNWO9PzHW:
00:21:35.571 11:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:35.571 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:35.571 11:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:21:35.571 11:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:35.571 11:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:35.571 11:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:35.571 11:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:35.571 11:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:21:35.571 11:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:21:35.829 11:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3
00:21:35.829 11:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:35.829 11:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:21:35.829 11:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:21:35.829 11:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:21:35.829 11:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:35.829 11:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3
00:21:35.829 11:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:35.829 11:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:35.829 11:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:35.829 11:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:21:35.829 11:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:21:35.829 11:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:21:36.395
00:21:36.395 11:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:36.395 11:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:36.395 11:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:36.654 11:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:36.654 11:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:36.654 11:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:36.654 11:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:36.654 11:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:36.654 11:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:36.654 {
00:21:36.654 "cntlid": 127,
00:21:36.654 "qid": 0,
00:21:36.654 "state": "enabled",
00:21:36.654 "thread": "nvmf_tgt_poll_group_000",
00:21:36.654 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:21:36.654 "listen_address": {
00:21:36.654 "trtype": "TCP",
00:21:36.654 "adrfam": "IPv4",
00:21:36.654 "traddr": "10.0.0.2",
00:21:36.654 "trsvcid": "4420"
00:21:36.654 },
00:21:36.654 "peer_address": {
00:21:36.654 "trtype": "TCP",
00:21:36.654 "adrfam": "IPv4",
00:21:36.654 "traddr": "10.0.0.1",
00:21:36.654 "trsvcid": "50052"
00:21:36.654 },
00:21:36.654 "auth": {
00:21:36.654 "state": "completed",
00:21:36.654 "digest": "sha512",
00:21:36.654 "dhgroup": "ffdhe4096"
00:21:36.654 }
00:21:36.654 }
00:21:36.654 ]'
00:21:36.654 11:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:36.654 11:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:21:36.654 11:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:36.654 11:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:21:36.654 11:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:36.654 11:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:36.654 11:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:36.654 11:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:36.912 11:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGM3NWRhODVjNzhlZGM4ZGNmZTM5Y2VmZDc5NGQ5YmE5NDAwZTUzMzQxNzZkMzQ0OWY4NDE1NDhkNzU4ZjJkOWsmC2o=:
00:21:36.912 11:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NGM3NWRhODVjNzhlZGM4ZGNmZTM5Y2VmZDc5NGQ5YmE5NDAwZTUzMzQxNzZkMzQ0OWY4NDE1NDhkNzU4ZjJkOWsmC2o=:
00:21:37.845 11:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:37.845 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:37.845 11:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:21:37.845 11:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:37.845 11:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:37.845 11:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:37.845 11:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:21:37.845 11:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:37.845 11:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:21:37.845 11:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:21:38.410 11:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0
00:21:38.410 11:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:38.410 11:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:21:38.410 11:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:21:38.410 11:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:21:38.410 11:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:38.410 11:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:38.410 11:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:38.410 11:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:38.410 11:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:38.410 11:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:38.410 11:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:38.410 11:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:38.976
00:21:38.977 11:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:38.977 11:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:38.977 11:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:38.977 11:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:38.977 11:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:38.977 11:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:38.977 11:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:38.977 11:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:38.977 11:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:38.977 {
00:21:38.977 "cntlid": 129,
00:21:38.977 "qid": 0,
00:21:38.977 "state": "enabled",
00:21:38.977 "thread": "nvmf_tgt_poll_group_000",
00:21:38.977 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:21:38.977 "listen_address": {
00:21:38.977 "trtype": "TCP",
00:21:38.977 "adrfam": "IPv4",
00:21:38.977 "traddr": "10.0.0.2",
00:21:38.977 "trsvcid": "4420"
00:21:38.977 },
00:21:38.977 "peer_address": {
00:21:38.977 "trtype": "TCP",
00:21:38.977 "adrfam": "IPv4",
00:21:38.977 "traddr": "10.0.0.1",
00:21:38.977 "trsvcid": "50080"
00:21:38.977 },
00:21:38.977 "auth": {
00:21:38.977 "state": "completed",
00:21:38.977 "digest": "sha512",
00:21:38.977 "dhgroup": "ffdhe6144"
00:21:38.977 }
00:21:38.977 }
00:21:38.977 ]'
00:21:38.977 11:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:39.235 11:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:21:39.235 11:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:39.235 11:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:21:39.235 11:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:39.235 11:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:39.235 11:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:39.235 11:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:39.493 11:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGYyODAwNzM1ZWMwMDIwMjI2MDcyMTBiNDQxN2M3MTlhZTFiM2E2YTMzZThjNzExegqYIw==: --dhchap-ctrl-secret DHHC-1:03:ZDM0ZWZiNjYxZjk1ZmZmZTUzMjI5NjBiYzU1ZDFkMzcwMWI4ZGQyNGU2M2E5YTVhNTFlMDhhZjAwNmEzMGYwMnqVypg=:
00:21:39.493 11:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MGYyODAwNzM1ZWMwMDIwMjI2MDcyMTBiNDQxN2M3MTlhZTFiM2E2YTMzZThjNzExegqYIw==: --dhchap-ctrl-secret DHHC-1:03:ZDM0ZWZiNjYxZjk1ZmZmZTUzMjI5NjBiYzU1ZDFkMzcwMWI4ZGQyNGU2M2E5YTVhNTFlMDhhZjAwNmEzMGYwMnqVypg=:
00:21:40.425 11:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:40.425 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:40.425 11:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:21:40.425 11:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:40.425 11:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:40.425 11:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:40.425 11:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:40.425 11:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:21:40.425 11:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:21:40.683 11:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1
00:21:40.683 11:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:40.683 11:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:21:40.683 11:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:21:40.683 11:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:21:40.683 11:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:40.683 11:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:40.683 11:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:40.683 11:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:40.683 11:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:40.683 11:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:40.683 11:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:40.683 11:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:41.249
00:21:41.249 11:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:41.249 11:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:41.249 11:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:41.506 11:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:41.506 11:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:41.506 11:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:41.506 11:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:41.506 11:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:41.507 11:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:41.507 {
00:21:41.507 "cntlid": 131,
00:21:41.507 "qid": 0,
00:21:41.507 "state": "enabled",
00:21:41.507 "thread": "nvmf_tgt_poll_group_000",
00:21:41.507 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:21:41.507 "listen_address": {
00:21:41.507 "trtype": "TCP",
00:21:41.507 "adrfam": "IPv4",
00:21:41.507 "traddr": "10.0.0.2",
00:21:41.507 "trsvcid": "4420"
00:21:41.507 },
00:21:41.507 "peer_address": {
00:21:41.507 "trtype": "TCP",
00:21:41.507 "adrfam": "IPv4",
00:21:41.507 "traddr": "10.0.0.1",
00:21:41.507 "trsvcid": "50116"
00:21:41.507 },
00:21:41.507 "auth": {
00:21:41.507 "state": "completed",
00:21:41.507 "digest": "sha512",
00:21:41.507 "dhgroup": "ffdhe6144"
00:21:41.507 }
00:21:41.507 }
00:21:41.507 ]'
00:21:41.507 11:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:41.507 11:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:21:41.507 11:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:41.763 11:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:21:41.763 11:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:41.763 11:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:41.763 11:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:41.763 11:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:42.020 11:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmRhOGZjOTc5OTczYWY4NjBkZmQ3MWZmZjE1MzhiNTZf9teL: --dhchap-ctrl-secret
DHHC-1:02:YWZiYzQwZThiZjZhNGQ0NDJkMjMyNzUxMzMyMmM1NTcyOGQ4OGJjZDgxZWM1NzBmtzwEkQ==: 00:21:42.020 11:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YmRhOGZjOTc5OTczYWY4NjBkZmQ3MWZmZjE1MzhiNTZf9teL: --dhchap-ctrl-secret DHHC-1:02:YWZiYzQwZThiZjZhNGQ0NDJkMjMyNzUxMzMyMmM1NTcyOGQ4OGJjZDgxZWM1NzBmtzwEkQ==: 00:21:42.953 11:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:42.953 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:42.953 11:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:42.953 11:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.953 11:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.953 11:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.953 11:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:42.953 11:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:42.953 11:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:43.212 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 
ffdhe6144 2 00:21:43.212 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:43.212 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:43.212 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:43.212 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:43.212 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:43.212 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:43.212 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.212 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.212 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.212 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:43.212 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:43.212 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:43.777 00:21:43.777 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:43.777 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:43.777 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:44.343 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:44.343 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:44.343 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.343 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.343 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.343 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:44.343 { 00:21:44.343 "cntlid": 133, 00:21:44.343 "qid": 0, 00:21:44.343 "state": "enabled", 00:21:44.343 "thread": "nvmf_tgt_poll_group_000", 00:21:44.343 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:44.343 "listen_address": { 00:21:44.343 "trtype": "TCP", 00:21:44.343 "adrfam": "IPv4", 00:21:44.343 "traddr": "10.0.0.2", 00:21:44.343 "trsvcid": "4420" 00:21:44.343 }, 00:21:44.343 "peer_address": { 00:21:44.343 "trtype": "TCP", 00:21:44.343 "adrfam": "IPv4", 00:21:44.343 "traddr": "10.0.0.1", 00:21:44.343 "trsvcid": "50148" 00:21:44.343 }, 00:21:44.343 "auth": { 00:21:44.343 "state": "completed", 00:21:44.343 "digest": "sha512", 00:21:44.343 "dhgroup": "ffdhe6144" 00:21:44.343 } 
00:21:44.343 } 00:21:44.343 ]' 00:21:44.343 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:44.343 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:44.343 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:44.343 11:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:44.343 11:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:44.343 11:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:44.343 11:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:44.343 11:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:44.601 11:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yjc2OWRkNzE0NmUyOTQzYWFjNDBhZmIyYTg2YjNiMmViNmE5NTk0MTY2MmUwNmU1WKPGtw==: --dhchap-ctrl-secret DHHC-1:01:NDM3MjAyMTY5ZGZmOTQ4ZmZjYjQyMDc5NGU2NGRiNWO9PzHW: 00:21:44.601 11:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:Yjc2OWRkNzE0NmUyOTQzYWFjNDBhZmIyYTg2YjNiMmViNmE5NTk0MTY2MmUwNmU1WKPGtw==: --dhchap-ctrl-secret DHHC-1:01:NDM3MjAyMTY5ZGZmOTQ4ZmZjYjQyMDc5NGU2NGRiNWO9PzHW: 00:21:45.534 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 
00:21:45.534 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:45.534 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:45.534 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.534 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.534 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.534 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:45.534 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:45.534 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:45.792 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:21:45.792 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:45.792 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:45.792 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:45.792 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:45.792 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:45.793 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- 
# rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:45.793 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.793 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.793 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.793 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:45.793 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:45.793 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:46.415 00:21:46.415 11:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:46.415 11:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:46.415 11:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:46.673 11:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:46.673 11:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:21:46.673 11:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.673 11:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.673 11:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.673 11:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:46.673 { 00:21:46.673 "cntlid": 135, 00:21:46.673 "qid": 0, 00:21:46.673 "state": "enabled", 00:21:46.673 "thread": "nvmf_tgt_poll_group_000", 00:21:46.673 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:46.673 "listen_address": { 00:21:46.673 "trtype": "TCP", 00:21:46.673 "adrfam": "IPv4", 00:21:46.673 "traddr": "10.0.0.2", 00:21:46.673 "trsvcid": "4420" 00:21:46.673 }, 00:21:46.673 "peer_address": { 00:21:46.673 "trtype": "TCP", 00:21:46.673 "adrfam": "IPv4", 00:21:46.673 "traddr": "10.0.0.1", 00:21:46.673 "trsvcid": "32872" 00:21:46.673 }, 00:21:46.673 "auth": { 00:21:46.673 "state": "completed", 00:21:46.673 "digest": "sha512", 00:21:46.673 "dhgroup": "ffdhe6144" 00:21:46.673 } 00:21:46.673 } 00:21:46.673 ]' 00:21:46.673 11:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:46.673 11:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:46.673 11:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:46.673 11:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:46.673 11:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:46.931 11:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:46.931 11:50:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:46.931 11:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:47.190 11:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGM3NWRhODVjNzhlZGM4ZGNmZTM5Y2VmZDc5NGQ5YmE5NDAwZTUzMzQxNzZkMzQ0OWY4NDE1NDhkNzU4ZjJkOWsmC2o=: 00:21:47.190 11:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NGM3NWRhODVjNzhlZGM4ZGNmZTM5Y2VmZDc5NGQ5YmE5NDAwZTUzMzQxNzZkMzQ0OWY4NDE1NDhkNzU4ZjJkOWsmC2o=: 00:21:48.124 11:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:48.124 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:48.124 11:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:48.124 11:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.124 11:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.124 11:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.124 11:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:48.124 11:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:48.124 11:50:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:48.124 11:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:48.382 11:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:21:48.382 11:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:48.382 11:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:48.382 11:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:48.382 11:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:48.382 11:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:48.382 11:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:48.382 11:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.382 11:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.382 11:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.382 11:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:48.382 11:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:48.382 11:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:49.316 00:21:49.316 11:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:49.316 11:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:49.316 11:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:49.573 11:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:49.573 11:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:49.573 11:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.573 11:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.573 11:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.573 11:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:49.573 { 00:21:49.573 "cntlid": 137, 00:21:49.573 "qid": 0, 00:21:49.573 "state": "enabled", 00:21:49.573 "thread": "nvmf_tgt_poll_group_000", 00:21:49.573 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:49.573 "listen_address": { 00:21:49.573 "trtype": "TCP", 00:21:49.573 "adrfam": "IPv4", 00:21:49.573 "traddr": "10.0.0.2", 00:21:49.573 "trsvcid": "4420" 00:21:49.573 }, 00:21:49.573 "peer_address": { 00:21:49.573 "trtype": "TCP", 00:21:49.573 "adrfam": "IPv4", 00:21:49.573 "traddr": "10.0.0.1", 00:21:49.573 "trsvcid": "32906" 00:21:49.573 }, 00:21:49.573 "auth": { 00:21:49.573 "state": "completed", 00:21:49.573 "digest": "sha512", 00:21:49.573 "dhgroup": "ffdhe8192" 00:21:49.573 } 00:21:49.573 } 00:21:49.573 ]' 00:21:49.573 11:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:49.573 11:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:49.573 11:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:49.573 11:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:49.573 11:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:49.573 11:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:49.573 11:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:49.573 11:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:50.139 11:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGYyODAwNzM1ZWMwMDIwMjI2MDcyMTBiNDQxN2M3MTlhZTFiM2E2YTMzZThjNzExegqYIw==: --dhchap-ctrl-secret 
DHHC-1:03:ZDM0ZWZiNjYxZjk1ZmZmZTUzMjI5NjBiYzU1ZDFkMzcwMWI4ZGQyNGU2M2E5YTVhNTFlMDhhZjAwNmEzMGYwMnqVypg=: 00:21:50.139 11:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MGYyODAwNzM1ZWMwMDIwMjI2MDcyMTBiNDQxN2M3MTlhZTFiM2E2YTMzZThjNzExegqYIw==: --dhchap-ctrl-secret DHHC-1:03:ZDM0ZWZiNjYxZjk1ZmZmZTUzMjI5NjBiYzU1ZDFkMzcwMWI4ZGQyNGU2M2E5YTVhNTFlMDhhZjAwNmEzMGYwMnqVypg=: 00:21:51.073 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:51.073 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:51.073 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:51.073 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.073 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.073 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.073 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:51.073 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:51.073 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:51.331 11:50:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:21:51.331 11:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:51.331 11:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:51.331 11:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:51.331 11:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:51.331 11:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:51.331 11:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:51.331 11:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.331 11:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.331 11:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.331 11:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:51.331 11:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:51.331 11:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:52.265 00:21:52.265 11:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:52.265 11:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:52.265 11:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:52.523 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:52.523 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:52.523 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.523 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.523 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.523 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:52.523 { 00:21:52.523 "cntlid": 139, 00:21:52.523 "qid": 0, 00:21:52.523 "state": "enabled", 00:21:52.523 "thread": "nvmf_tgt_poll_group_000", 00:21:52.523 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:52.523 "listen_address": { 00:21:52.523 "trtype": "TCP", 00:21:52.523 "adrfam": "IPv4", 00:21:52.523 "traddr": "10.0.0.2", 00:21:52.523 "trsvcid": "4420" 00:21:52.523 }, 00:21:52.523 "peer_address": { 00:21:52.523 "trtype": "TCP", 00:21:52.523 "adrfam": "IPv4", 00:21:52.523 "traddr": "10.0.0.1", 00:21:52.523 "trsvcid": "32942" 00:21:52.523 }, 00:21:52.523 "auth": { 00:21:52.523 "state": 
"completed", 00:21:52.523 "digest": "sha512", 00:21:52.523 "dhgroup": "ffdhe8192" 00:21:52.523 } 00:21:52.523 } 00:21:52.523 ]' 00:21:52.523 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:52.523 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:52.523 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:52.523 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:52.523 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:52.523 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:52.523 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:52.523 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:52.781 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmRhOGZjOTc5OTczYWY4NjBkZmQ3MWZmZjE1MzhiNTZf9teL: --dhchap-ctrl-secret DHHC-1:02:YWZiYzQwZThiZjZhNGQ0NDJkMjMyNzUxMzMyMmM1NTcyOGQ4OGJjZDgxZWM1NzBmtzwEkQ==: 00:21:52.781 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YmRhOGZjOTc5OTczYWY4NjBkZmQ3MWZmZjE1MzhiNTZf9teL: --dhchap-ctrl-secret DHHC-1:02:YWZiYzQwZThiZjZhNGQ0NDJkMjMyNzUxMzMyMmM1NTcyOGQ4OGJjZDgxZWM1NzBmtzwEkQ==: 00:21:53.713 11:50:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:53.713 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:53.713 11:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:53.713 11:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.713 11:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.713 11:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.713 11:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:53.713 11:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:53.713 11:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:54.278 11:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:21:54.278 11:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:54.278 11:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:54.278 11:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:54.278 11:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:54.278 11:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:54.278 11:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:54.278 11:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.278 11:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.278 11:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.278 11:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:54.278 11:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:54.278 11:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:55.212 00:21:55.212 11:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:55.212 11:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:55.212 11:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:55.470 
11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:55.470 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:55.470 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.470 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.470 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.470 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:55.470 { 00:21:55.470 "cntlid": 141, 00:21:55.470 "qid": 0, 00:21:55.470 "state": "enabled", 00:21:55.470 "thread": "nvmf_tgt_poll_group_000", 00:21:55.470 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:55.470 "listen_address": { 00:21:55.470 "trtype": "TCP", 00:21:55.470 "adrfam": "IPv4", 00:21:55.470 "traddr": "10.0.0.2", 00:21:55.470 "trsvcid": "4420" 00:21:55.470 }, 00:21:55.470 "peer_address": { 00:21:55.470 "trtype": "TCP", 00:21:55.470 "adrfam": "IPv4", 00:21:55.470 "traddr": "10.0.0.1", 00:21:55.470 "trsvcid": "48400" 00:21:55.470 }, 00:21:55.470 "auth": { 00:21:55.470 "state": "completed", 00:21:55.470 "digest": "sha512", 00:21:55.470 "dhgroup": "ffdhe8192" 00:21:55.470 } 00:21:55.470 } 00:21:55.470 ]' 00:21:55.470 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:55.470 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:55.470 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:55.470 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:55.470 11:50:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:55.470 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:55.470 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:55.470 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:55.728 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yjc2OWRkNzE0NmUyOTQzYWFjNDBhZmIyYTg2YjNiMmViNmE5NTk0MTY2MmUwNmU1WKPGtw==: --dhchap-ctrl-secret DHHC-1:01:NDM3MjAyMTY5ZGZmOTQ4ZmZjYjQyMDc5NGU2NGRiNWO9PzHW: 00:21:55.728 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:Yjc2OWRkNzE0NmUyOTQzYWFjNDBhZmIyYTg2YjNiMmViNmE5NTk0MTY2MmUwNmU1WKPGtw==: --dhchap-ctrl-secret DHHC-1:01:NDM3MjAyMTY5ZGZmOTQ4ZmZjYjQyMDc5NGU2NGRiNWO9PzHW: 00:21:57.100 11:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:57.100 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:57.100 11:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:57.100 11:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.100 11:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.100 
11:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.100 11:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:57.100 11:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:57.100 11:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:57.100 11:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:21:57.100 11:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:57.100 11:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:57.100 11:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:57.100 11:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:57.100 11:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:57.100 11:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:57.100 11:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.100 11:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.100 11:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.100 11:50:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:57.100 11:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:57.100 11:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:58.031 00:21:58.031 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:58.031 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:58.031 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:58.289 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:58.289 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:58.289 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.289 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.289 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.289 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:58.289 { 00:21:58.289 "cntlid": 143, 
00:21:58.289 "qid": 0, 00:21:58.289 "state": "enabled", 00:21:58.289 "thread": "nvmf_tgt_poll_group_000", 00:21:58.289 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:58.289 "listen_address": { 00:21:58.289 "trtype": "TCP", 00:21:58.289 "adrfam": "IPv4", 00:21:58.289 "traddr": "10.0.0.2", 00:21:58.289 "trsvcid": "4420" 00:21:58.289 }, 00:21:58.289 "peer_address": { 00:21:58.289 "trtype": "TCP", 00:21:58.289 "adrfam": "IPv4", 00:21:58.289 "traddr": "10.0.0.1", 00:21:58.289 "trsvcid": "48426" 00:21:58.289 }, 00:21:58.289 "auth": { 00:21:58.289 "state": "completed", 00:21:58.289 "digest": "sha512", 00:21:58.289 "dhgroup": "ffdhe8192" 00:21:58.289 } 00:21:58.289 } 00:21:58.289 ]' 00:21:58.289 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:58.289 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:58.289 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:58.547 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:58.547 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:58.547 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:58.547 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:58.547 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:58.805 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NGM3NWRhODVjNzhlZGM4ZGNmZTM5Y2VmZDc5NGQ5YmE5NDAwZTUzMzQxNzZkMzQ0OWY4NDE1NDhkNzU4ZjJkOWsmC2o=: 00:21:58.805 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NGM3NWRhODVjNzhlZGM4ZGNmZTM5Y2VmZDc5NGQ5YmE5NDAwZTUzMzQxNzZkMzQ0OWY4NDE1NDhkNzU4ZjJkOWsmC2o=: 00:21:59.739 11:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:59.739 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:59.739 11:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:59.739 11:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.739 11:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.739 11:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.739 11:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:21:59.739 11:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:21:59.739 11:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:21:59.739 11:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:59.739 11:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 
00:21:59.739 11:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:59.997 11:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:21:59.997 11:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:59.997 11:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:59.997 11:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:59.997 11:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:59.997 11:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:59.997 11:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:59.997 11:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.997 11:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.997 11:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.997 11:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:59.997 11:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:59.997 11:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:00.930 00:22:00.930 11:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:00.930 11:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:00.930 11:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:01.188 11:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:01.188 11:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:01.188 11:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.188 11:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.188 11:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.188 11:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:01.188 { 00:22:01.188 "cntlid": 145, 00:22:01.188 "qid": 0, 00:22:01.188 "state": "enabled", 00:22:01.188 "thread": "nvmf_tgt_poll_group_000", 00:22:01.188 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:01.188 "listen_address": { 
00:22:01.188 "trtype": "TCP", 00:22:01.188 "adrfam": "IPv4", 00:22:01.188 "traddr": "10.0.0.2", 00:22:01.188 "trsvcid": "4420" 00:22:01.188 }, 00:22:01.188 "peer_address": { 00:22:01.188 "trtype": "TCP", 00:22:01.188 "adrfam": "IPv4", 00:22:01.188 "traddr": "10.0.0.1", 00:22:01.188 "trsvcid": "48454" 00:22:01.188 }, 00:22:01.188 "auth": { 00:22:01.188 "state": "completed", 00:22:01.188 "digest": "sha512", 00:22:01.188 "dhgroup": "ffdhe8192" 00:22:01.188 } 00:22:01.188 } 00:22:01.188 ]' 00:22:01.188 11:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:01.188 11:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:01.188 11:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:01.446 11:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:01.446 11:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:01.446 11:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:01.446 11:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:01.446 11:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:01.704 11:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGYyODAwNzM1ZWMwMDIwMjI2MDcyMTBiNDQxN2M3MTlhZTFiM2E2YTMzZThjNzExegqYIw==: --dhchap-ctrl-secret DHHC-1:03:ZDM0ZWZiNjYxZjk1ZmZmZTUzMjI5NjBiYzU1ZDFkMzcwMWI4ZGQyNGU2M2E5YTVhNTFlMDhhZjAwNmEzMGYwMnqVypg=: 00:22:01.704 11:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MGYyODAwNzM1ZWMwMDIwMjI2MDcyMTBiNDQxN2M3MTlhZTFiM2E2YTMzZThjNzExegqYIw==: --dhchap-ctrl-secret DHHC-1:03:ZDM0ZWZiNjYxZjk1ZmZmZTUzMjI5NjBiYzU1ZDFkMzcwMWI4ZGQyNGU2M2E5YTVhNTFlMDhhZjAwNmEzMGYwMnqVypg=: 00:22:02.637 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:02.637 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:02.637 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:02.637 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.637 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.637 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.637 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:22:02.637 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.637 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.637 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.637 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:22:02.637 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@652 -- # local es=0 00:22:02.637 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:22:02.637 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:02.637 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:02.637 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:02.637 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:02.637 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:22:02.637 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:22:02.638 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:22:03.573 request: 00:22:03.573 { 00:22:03.573 "name": "nvme0", 00:22:03.573 "trtype": "tcp", 00:22:03.573 "traddr": "10.0.0.2", 00:22:03.573 "adrfam": "ipv4", 00:22:03.573 "trsvcid": "4420", 00:22:03.573 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:03.573 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:03.573 "prchk_reftag": false, 00:22:03.573 "prchk_guard": false, 00:22:03.573 "hdgst": false, 00:22:03.573 "ddgst": 
false, 00:22:03.573 "dhchap_key": "key2", 00:22:03.573 "allow_unrecognized_csi": false, 00:22:03.573 "method": "bdev_nvme_attach_controller", 00:22:03.573 "req_id": 1 00:22:03.573 } 00:22:03.573 Got JSON-RPC error response 00:22:03.573 response: 00:22:03.573 { 00:22:03.573 "code": -5, 00:22:03.573 "message": "Input/output error" 00:22:03.573 } 00:22:03.573 11:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:03.573 11:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:03.573 11:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:03.573 11:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:03.573 11:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:03.573 11:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.573 11:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.573 11:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.573 11:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:03.573 11:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.573 11:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.573 11:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:22:03.573 11:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:03.573 11:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:03.573 11:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:03.573 11:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:03.573 11:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:03.573 11:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:03.573 11:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:03.573 11:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:03.573 11:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:03.573 11:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:04.505 request: 00:22:04.505 { 00:22:04.505 "name": "nvme0", 00:22:04.505 "trtype": "tcp", 00:22:04.505 "traddr": "10.0.0.2", 
00:22:04.505 "adrfam": "ipv4", 00:22:04.505 "trsvcid": "4420", 00:22:04.505 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:04.505 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:04.505 "prchk_reftag": false, 00:22:04.505 "prchk_guard": false, 00:22:04.505 "hdgst": false, 00:22:04.505 "ddgst": false, 00:22:04.505 "dhchap_key": "key1", 00:22:04.505 "dhchap_ctrlr_key": "ckey2", 00:22:04.505 "allow_unrecognized_csi": false, 00:22:04.505 "method": "bdev_nvme_attach_controller", 00:22:04.505 "req_id": 1 00:22:04.505 } 00:22:04.505 Got JSON-RPC error response 00:22:04.505 response: 00:22:04.505 { 00:22:04.505 "code": -5, 00:22:04.505 "message": "Input/output error" 00:22:04.505 } 00:22:04.505 11:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:04.505 11:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:04.505 11:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:04.505 11:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:04.505 11:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:04.505 11:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.505 11:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.505 11:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.505 11:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 
00:22:04.505 11:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.505 11:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.505 11:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.505 11:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:04.505 11:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:04.506 11:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:04.506 11:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:04.506 11:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:04.506 11:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:04.506 11:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:04.506 11:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:04.506 11:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:04.506 11:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:05.438 request: 00:22:05.438 { 00:22:05.438 "name": "nvme0", 00:22:05.438 "trtype": "tcp", 00:22:05.438 "traddr": "10.0.0.2", 00:22:05.438 "adrfam": "ipv4", 00:22:05.438 "trsvcid": "4420", 00:22:05.438 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:05.438 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:05.438 "prchk_reftag": false, 00:22:05.438 "prchk_guard": false, 00:22:05.438 "hdgst": false, 00:22:05.438 "ddgst": false, 00:22:05.438 "dhchap_key": "key1", 00:22:05.438 "dhchap_ctrlr_key": "ckey1", 00:22:05.438 "allow_unrecognized_csi": false, 00:22:05.438 "method": "bdev_nvme_attach_controller", 00:22:05.438 "req_id": 1 00:22:05.438 } 00:22:05.438 Got JSON-RPC error response 00:22:05.438 response: 00:22:05.438 { 00:22:05.438 "code": -5, 00:22:05.438 "message": "Input/output error" 00:22:05.438 } 00:22:05.438 11:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:05.438 11:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:05.438 11:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:05.438 11:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:05.438 11:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:05.438 11:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.438 11:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.438 
11:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.439 11:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 2962770 00:22:05.439 11:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2962770 ']' 00:22:05.439 11:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2962770 00:22:05.439 11:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:22:05.439 11:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:05.439 11:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2962770 00:22:05.439 11:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:05.439 11:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:05.439 11:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2962770' 00:22:05.439 killing process with pid 2962770 00:22:05.439 11:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2962770 00:22:05.439 11:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2962770 00:22:06.371 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:22:06.371 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:06.371 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:06.371 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:22:06.371 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2986308 00:22:06.371 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:22:06.372 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2986308 00:22:06.372 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2986308 ']' 00:22:06.372 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:06.372 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:06.372 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:22:06.372 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:06.372 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.745 11:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:07.745 11:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:22:07.745 11:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:07.745 11:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:07.745 11:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.745 11:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:07.745 11:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:22:07.745 11:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 2986308 00:22:07.745 11:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2986308 ']' 00:22:07.745 11:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:07.745 11:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:07.745 11:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:07.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:07.745 11:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:07.745 11:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.003 11:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:08.003 11:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:22:08.003 11:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:22:08.003 11:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.003 11:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.261 null0 00:22:08.261 11:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.261 11:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:08.261 11:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.IcZ 00:22:08.261 11:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.261 11:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.261 11:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.261 11:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.wwx ]] 00:22:08.261 11:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.wwx 00:22:08.261 11:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.261 11:50:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.261 11:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.261 11:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:08.261 11:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.IJ9 00:22:08.261 11:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.261 11:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.261 11:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.261 11:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.lvp ]] 00:22:08.261 11:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.lvp 00:22:08.261 11:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.261 11:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.261 11:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.261 11:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:08.261 11:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.aWH 00:22:08.261 11:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.261 11:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.261 11:50:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.261 11:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.snA ]] 00:22:08.261 11:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.snA 00:22:08.261 11:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.261 11:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.519 11:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.519 11:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:08.519 11:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Cy2 00:22:08.519 11:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.519 11:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.519 11:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.519 11:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:22:08.519 11:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:22:08.519 11:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:08.519 11:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:08.519 11:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:08.519 11:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # key=key3 00:22:08.519 11:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:08.519 11:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:08.519 11:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.519 11:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.519 11:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.519 11:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:08.519 11:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:08.519 11:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:09.893 nvme0n1 00:22:09.893 11:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:09.893 11:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:09.893 11:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:22:10.459 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:10.459 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:10.459 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.459 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.459 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.459 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:10.459 { 00:22:10.459 "cntlid": 1, 00:22:10.459 "qid": 0, 00:22:10.459 "state": "enabled", 00:22:10.459 "thread": "nvmf_tgt_poll_group_000", 00:22:10.459 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:10.459 "listen_address": { 00:22:10.459 "trtype": "TCP", 00:22:10.459 "adrfam": "IPv4", 00:22:10.459 "traddr": "10.0.0.2", 00:22:10.459 "trsvcid": "4420" 00:22:10.459 }, 00:22:10.459 "peer_address": { 00:22:10.459 "trtype": "TCP", 00:22:10.459 "adrfam": "IPv4", 00:22:10.459 "traddr": "10.0.0.1", 00:22:10.459 "trsvcid": "38914" 00:22:10.459 }, 00:22:10.459 "auth": { 00:22:10.459 "state": "completed", 00:22:10.459 "digest": "sha512", 00:22:10.459 "dhgroup": "ffdhe8192" 00:22:10.459 } 00:22:10.459 } 00:22:10.459 ]' 00:22:10.459 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:10.459 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:10.459 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:10.459 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 
00:22:10.459 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:10.459 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:10.459 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:10.459 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:10.717 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGM3NWRhODVjNzhlZGM4ZGNmZTM5Y2VmZDc5NGQ5YmE5NDAwZTUzMzQxNzZkMzQ0OWY4NDE1NDhkNzU4ZjJkOWsmC2o=: 00:22:10.717 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NGM3NWRhODVjNzhlZGM4ZGNmZTM5Y2VmZDc5NGQ5YmE5NDAwZTUzMzQxNzZkMzQ0OWY4NDE1NDhkNzU4ZjJkOWsmC2o=: 00:22:11.650 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:11.650 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:11.650 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:11.650 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.650 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.650 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:22:11.650 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:11.650 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.650 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.650 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.650 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:22:11.650 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:22:11.907 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:22:11.907 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:11.907 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:22:11.907 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:11.907 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:11.907 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:11.907 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:11.907 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # 
bdev_connect -b nvme0 --dhchap-key key3 00:22:11.908 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:11.908 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:12.165 request: 00:22:12.165 { 00:22:12.165 "name": "nvme0", 00:22:12.165 "trtype": "tcp", 00:22:12.165 "traddr": "10.0.0.2", 00:22:12.165 "adrfam": "ipv4", 00:22:12.165 "trsvcid": "4420", 00:22:12.165 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:12.165 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:12.165 "prchk_reftag": false, 00:22:12.165 "prchk_guard": false, 00:22:12.165 "hdgst": false, 00:22:12.165 "ddgst": false, 00:22:12.165 "dhchap_key": "key3", 00:22:12.165 "allow_unrecognized_csi": false, 00:22:12.165 "method": "bdev_nvme_attach_controller", 00:22:12.165 "req_id": 1 00:22:12.165 } 00:22:12.165 Got JSON-RPC error response 00:22:12.165 response: 00:22:12.165 { 00:22:12.165 "code": -5, 00:22:12.165 "message": "Input/output error" 00:22:12.165 } 00:22:12.165 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:12.165 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:12.165 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:12.165 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:12.165 11:50:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:22:12.165 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:22:12.165 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:12.165 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:12.423 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:22:12.423 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:12.423 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:22:12.423 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:12.423 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:12.423 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:12.423 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:12.423 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:12.424 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:22:12.424 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:12.682 request: 00:22:12.682 { 00:22:12.682 "name": "nvme0", 00:22:12.682 "trtype": "tcp", 00:22:12.682 "traddr": "10.0.0.2", 00:22:12.682 "adrfam": "ipv4", 00:22:12.682 "trsvcid": "4420", 00:22:12.682 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:12.682 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:12.682 "prchk_reftag": false, 00:22:12.682 "prchk_guard": false, 00:22:12.682 "hdgst": false, 00:22:12.682 "ddgst": false, 00:22:12.682 "dhchap_key": "key3", 00:22:12.682 "allow_unrecognized_csi": false, 00:22:12.682 "method": "bdev_nvme_attach_controller", 00:22:12.682 "req_id": 1 00:22:12.682 } 00:22:12.682 Got JSON-RPC error response 00:22:12.682 response: 00:22:12.682 { 00:22:12.682 "code": -5, 00:22:12.682 "message": "Input/output error" 00:22:12.682 } 00:22:12.682 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:12.682 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:12.682 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:12.682 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:12.940 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:22:12.940 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:22:12.940 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 
00:22:12.940 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:12.940 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:12.940 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:13.198 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:13.198 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.198 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.198 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.198 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:13.198 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.198 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.198 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.198 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key 
key1 00:22:13.198 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:13.198 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:13.198 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:13.198 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:13.198 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:13.198 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:13.198 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:13.198 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:13.198 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:13.763 request: 00:22:13.763 { 00:22:13.763 "name": "nvme0", 00:22:13.763 "trtype": "tcp", 00:22:13.763 "traddr": "10.0.0.2", 00:22:13.763 "adrfam": "ipv4", 00:22:13.763 "trsvcid": "4420", 00:22:13.763 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:13.763 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:13.763 "prchk_reftag": false, 00:22:13.763 "prchk_guard": false, 00:22:13.763 "hdgst": false, 00:22:13.763 "ddgst": false, 00:22:13.763 "dhchap_key": "key0", 00:22:13.763 "dhchap_ctrlr_key": "key1", 00:22:13.763 "allow_unrecognized_csi": false, 00:22:13.763 "method": "bdev_nvme_attach_controller", 00:22:13.763 "req_id": 1 00:22:13.763 } 00:22:13.763 Got JSON-RPC error response 00:22:13.763 response: 00:22:13.763 { 00:22:13.763 "code": -5, 00:22:13.763 "message": "Input/output error" 00:22:13.763 } 00:22:13.763 11:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:13.763 11:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:13.763 11:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:13.763 11:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:13.763 11:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:22:13.763 11:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:22:13.763 11:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:22:14.020 nvme0n1 00:22:14.020 11:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 
00:22:14.020 11:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:22:14.020 11:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:14.278 11:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:14.278 11:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:14.278 11:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:14.535 11:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:22:14.535 11:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.535 11:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.535 11:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.535 11:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:22:14.535 11:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:14.535 11:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:16.431 nvme0n1 00:22:16.431 11:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:22:16.431 11:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:16.431 11:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:22:16.431 11:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:16.431 11:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:16.431 11:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.431 11:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.431 11:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.431 11:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:22:16.431 11:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:22:16.431 11:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:16.689 11:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:16.689 11:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:Yjc2OWRkNzE0NmUyOTQzYWFjNDBhZmIyYTg2YjNiMmViNmE5NTk0MTY2MmUwNmU1WKPGtw==: --dhchap-ctrl-secret DHHC-1:03:NGM3NWRhODVjNzhlZGM4ZGNmZTM5Y2VmZDc5NGQ5YmE5NDAwZTUzMzQxNzZkMzQ0OWY4NDE1NDhkNzU4ZjJkOWsmC2o=: 00:22:16.689 11:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:Yjc2OWRkNzE0NmUyOTQzYWFjNDBhZmIyYTg2YjNiMmViNmE5NTk0MTY2MmUwNmU1WKPGtw==: --dhchap-ctrl-secret DHHC-1:03:NGM3NWRhODVjNzhlZGM4ZGNmZTM5Y2VmZDc5NGQ5YmE5NDAwZTUzMzQxNzZkMzQ0OWY4NDE1NDhkNzU4ZjJkOWsmC2o=: 00:22:17.622 11:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:22:17.622 11:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:22:17.622 11:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:22:17.622 11:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:22:17.622 11:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:22:17.622 11:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:22:17.622 11:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:22:17.622 11:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:17.622 11:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:17.885 11:50:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:22:17.885 11:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:17.885 11:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:22:17.885 11:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:17.885 11:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:17.885 11:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:17.885 11:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:17.885 11:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:22:17.885 11:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:17.885 11:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:18.899 request: 00:22:18.899 { 00:22:18.899 "name": "nvme0", 00:22:18.899 "trtype": "tcp", 00:22:18.899 "traddr": "10.0.0.2", 00:22:18.899 "adrfam": "ipv4", 00:22:18.899 "trsvcid": "4420", 00:22:18.899 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:18.899 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:18.899 "prchk_reftag": false, 00:22:18.899 "prchk_guard": false, 00:22:18.899 "hdgst": false, 00:22:18.899 "ddgst": false, 00:22:18.899 "dhchap_key": "key1", 00:22:18.899 "allow_unrecognized_csi": false, 00:22:18.899 "method": "bdev_nvme_attach_controller", 00:22:18.899 "req_id": 1 00:22:18.899 } 00:22:18.899 Got JSON-RPC error response 00:22:18.899 response: 00:22:18.899 { 00:22:18.899 "code": -5, 00:22:18.899 "message": "Input/output error" 00:22:18.899 } 00:22:18.899 11:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:18.899 11:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:18.899 11:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:18.899 11:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:18.899 11:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:18.899 11:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:18.900 11:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:20.273 nvme0n1 00:22:20.273 11:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc 
bdev_nvme_get_controllers 00:22:20.273 11:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:22:20.273 11:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:20.531 11:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:20.531 11:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:20.531 11:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:20.789 11:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:20.789 11:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.789 11:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.789 11:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.789 11:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:22:20.789 11:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:22:20.789 11:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:22:21.047 nvme0n1 00:22:21.047 11:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:22:21.047 11:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:22:21.047 11:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:21.305 11:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:21.305 11:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:21.305 11:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:21.563 11:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:21.563 11:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.563 11:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.563 11:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.563 11:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:YmRhOGZjOTc5OTczYWY4NjBkZmQ3MWZmZjE1MzhiNTZf9teL: '' 2s 00:22:21.563 11:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:22:21.563 11:50:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:22:21.563 11:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:YmRhOGZjOTc5OTczYWY4NjBkZmQ3MWZmZjE1MzhiNTZf9teL: 00:22:21.563 11:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:22:21.563 11:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:22:21.563 11:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:22:21.563 11:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:YmRhOGZjOTc5OTczYWY4NjBkZmQ3MWZmZjE1MzhiNTZf9teL: ]] 00:22:21.563 11:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:YmRhOGZjOTc5OTczYWY4NjBkZmQ3MWZmZjE1MzhiNTZf9teL: 00:22:21.821 11:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:22:21.821 11:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:22:21.821 11:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:22:23.719 11:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:22:23.719 11:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:22:23.719 11:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:23.719 11:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:22:23.719 11:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:23.719 11:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:22:23.719 11:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@1250 -- # return 0 00:22:23.719 11:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key key2 00:22:23.719 11:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.719 11:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.719 11:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.719 11:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:Yjc2OWRkNzE0NmUyOTQzYWFjNDBhZmIyYTg2YjNiMmViNmE5NTk0MTY2MmUwNmU1WKPGtw==: 2s 00:22:23.719 11:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:22:23.719 11:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:22:23.719 11:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:22:23.719 11:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:Yjc2OWRkNzE0NmUyOTQzYWFjNDBhZmIyYTg2YjNiMmViNmE5NTk0MTY2MmUwNmU1WKPGtw==: 00:22:23.719 11:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:22:23.719 11:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:22:23.719 11:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:22:23.719 11:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:Yjc2OWRkNzE0NmUyOTQzYWFjNDBhZmIyYTg2YjNiMmViNmE5NTk0MTY2MmUwNmU1WKPGtw==: ]] 00:22:23.719 11:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo 
DHHC-1:02:Yjc2OWRkNzE0NmUyOTQzYWFjNDBhZmIyYTg2YjNiMmViNmE5NTk0MTY2MmUwNmU1WKPGtw==: 00:22:23.719 11:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:22:23.719 11:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:22:25.618 11:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:22:25.618 11:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:22:25.618 11:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:25.618 11:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:22:25.618 11:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:25.618 11:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:22:25.876 11:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:22:25.876 11:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:25.876 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:25.876 11:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:25.876 11:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.876 11:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.876 11:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.876 11:50:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:25.876 11:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:25.876 11:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:27.249 nvme0n1 00:22:27.249 11:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:27.249 11:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.249 11:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.249 11:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.249 11:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:27.249 11:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys 
nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:28.182 11:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:22:28.182 11:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:22:28.182 11:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:28.440 11:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:28.440 11:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:28.440 11:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.440 11:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.440 11:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.440 11:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:22:28.440 11:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:22:28.698 11:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:22:28.698 11:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:28.698 11:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:22:28.956 11:50:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:28.956 11:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:28.956 11:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.956 11:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.956 11:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.956 11:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:28.956 11:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:28.956 11:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:28.956 11:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:22:28.956 11:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:28.956 11:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:22:28.956 11:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:28.956 11:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:28.956 11:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:29.889 request: 00:22:29.889 { 00:22:29.889 "name": "nvme0", 00:22:29.889 "dhchap_key": "key1", 00:22:29.889 "dhchap_ctrlr_key": "key3", 00:22:29.889 "method": "bdev_nvme_set_keys", 00:22:29.889 "req_id": 1 00:22:29.889 } 00:22:29.889 Got JSON-RPC error response 00:22:29.889 response: 00:22:29.889 { 00:22:29.889 "code": -13, 00:22:29.889 "message": "Permission denied" 00:22:29.889 } 00:22:29.889 11:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:29.889 11:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:29.889 11:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:29.889 11:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:29.889 11:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:22:29.889 11:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:29.889 11:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:22:30.147 11:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:22:30.147 11:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:22:31.519 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:22:31.519 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:22:31.519 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:31.519 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:22:31.519 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:31.519 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.519 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.519 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.519 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:31.519 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:31.519 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:32.891 nvme0n1 00:22:32.891 11:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:32.891 11:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.891 11:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.891 11:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.891 11:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:32.891 11:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:32.891 11:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:32.891 11:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:22:32.891 11:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:32.891 11:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:22:32.891 11:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:32.891 11:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:32.891 11:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:33.824 request: 00:22:33.824 { 00:22:33.824 "name": "nvme0", 00:22:33.824 "dhchap_key": "key2", 
00:22:33.824 "dhchap_ctrlr_key": "key0", 00:22:33.824 "method": "bdev_nvme_set_keys", 00:22:33.824 "req_id": 1 00:22:33.824 } 00:22:33.824 Got JSON-RPC error response 00:22:33.824 response: 00:22:33.824 { 00:22:33.824 "code": -13, 00:22:33.824 "message": "Permission denied" 00:22:33.824 } 00:22:33.824 11:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:33.824 11:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:33.824 11:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:33.824 11:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:33.824 11:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:22:33.824 11:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:22:33.824 11:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:34.082 11:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:22:34.082 11:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:22:35.455 11:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:22:35.455 11:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:22:35.455 11:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:35.455 11:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:22:35.455 11:51:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:22:35.455 11:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:22:35.455 11:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2962920 00:22:35.455 11:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2962920 ']' 00:22:35.455 11:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2962920 00:22:35.455 11:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:22:35.455 11:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:35.455 11:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2962920 00:22:35.455 11:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:35.455 11:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:35.455 11:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2962920' 00:22:35.455 killing process with pid 2962920 00:22:35.455 11:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2962920 00:22:35.455 11:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2962920 00:22:37.984 11:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:22:37.984 11:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:37.984 11:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:22:37.984 11:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:37.984 11:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:22:37.984 11:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:37.984 11:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:37.984 rmmod nvme_tcp 00:22:37.984 rmmod nvme_fabrics 00:22:37.984 rmmod nvme_keyring 00:22:37.984 11:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:37.984 11:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:22:37.984 11:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:22:37.984 11:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 2986308 ']' 00:22:37.984 11:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 2986308 00:22:37.984 11:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2986308 ']' 00:22:37.984 11:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2986308 00:22:37.984 11:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:22:37.984 11:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:37.984 11:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2986308 00:22:37.984 11:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:37.984 11:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:37.984 11:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 2986308' 00:22:37.984 killing process with pid 2986308 00:22:37.984 11:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2986308 00:22:37.984 11:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2986308 00:22:39.359 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:39.359 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:39.359 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:39.359 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:22:39.359 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:22:39.359 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:39.359 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:22:39.359 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:39.359 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:39.359 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:39.359 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:39.359 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:41.263 11:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:41.263 11:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.IcZ /tmp/spdk.key-sha256.IJ9 
/tmp/spdk.key-sha384.aWH /tmp/spdk.key-sha512.Cy2 /tmp/spdk.key-sha512.wwx /tmp/spdk.key-sha384.lvp /tmp/spdk.key-sha256.snA '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:22:41.263 00:22:41.263 real 3m46.547s 00:22:41.263 user 8m45.556s 00:22:41.263 sys 0m27.470s 00:22:41.263 11:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:41.263 11:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.263 ************************************ 00:22:41.263 END TEST nvmf_auth_target 00:22:41.263 ************************************ 00:22:41.263 11:51:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:22:41.263 11:51:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:41.263 11:51:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:22:41.263 11:51:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:41.263 11:51:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:41.263 ************************************ 00:22:41.263 START TEST nvmf_bdevio_no_huge 00:22:41.263 ************************************ 00:22:41.263 11:51:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:41.263 * Looking for test storage... 
00:22:41.263 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:41.263 11:51:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:41.263 11:51:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:22:41.263 11:51:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:41.263 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:41.263 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:41.263 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:41.263 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:41.263 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:22:41.263 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:22:41.263 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:22:41.263 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:22:41.264 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:22:41.264 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:22:41.264 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:22:41.264 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:41.264 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:22:41.264 11:51:07 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:22:41.264 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:41.264 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:41.264 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:22:41.264 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:22:41.264 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:41.264 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:22:41.264 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:22:41.264 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:22:41.264 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:22:41.264 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:41.264 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:22:41.264 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:22:41.264 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:41.264 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:41.264 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:22:41.264 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:41.264 11:51:07 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:41.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:41.264 --rc genhtml_branch_coverage=1 00:22:41.264 --rc genhtml_function_coverage=1 00:22:41.264 --rc genhtml_legend=1 00:22:41.264 --rc geninfo_all_blocks=1 00:22:41.264 --rc geninfo_unexecuted_blocks=1 00:22:41.264 00:22:41.264 ' 00:22:41.264 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:41.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:41.264 --rc genhtml_branch_coverage=1 00:22:41.264 --rc genhtml_function_coverage=1 00:22:41.264 --rc genhtml_legend=1 00:22:41.264 --rc geninfo_all_blocks=1 00:22:41.264 --rc geninfo_unexecuted_blocks=1 00:22:41.264 00:22:41.264 ' 00:22:41.264 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:41.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:41.264 --rc genhtml_branch_coverage=1 00:22:41.264 --rc genhtml_function_coverage=1 00:22:41.264 --rc genhtml_legend=1 00:22:41.264 --rc geninfo_all_blocks=1 00:22:41.264 --rc geninfo_unexecuted_blocks=1 00:22:41.264 00:22:41.264 ' 00:22:41.264 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:41.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:41.264 --rc genhtml_branch_coverage=1 00:22:41.264 --rc genhtml_function_coverage=1 00:22:41.264 --rc genhtml_legend=1 00:22:41.264 --rc geninfo_all_blocks=1 00:22:41.264 --rc geninfo_unexecuted_blocks=1 00:22:41.264 00:22:41.264 ' 00:22:41.264 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:41.264 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:22:41.264 
11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:41.264 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:41.264 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:41.264 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:41.264 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:41.264 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:41.264 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:41.264 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:41.264 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:41.264 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:41.264 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:41.264 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:41.264 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:41.264 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:41.264 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:41.264 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:41.264 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:41.264 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:22:41.264 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:41.264 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:41.264 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:41.264 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:41.264 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:41.264 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:41.264 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:22:41.264 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:41.264 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:22:41.264 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:41.264 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:41.264 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:41.264 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:41.264 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:41.264 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:41.264 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:41.264 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:41.264 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:41.264 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:41.264 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:22:41.264 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:41.264 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:22:41.264 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:41.264 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:41.264 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:41.264 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:41.264 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:41.264 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:41.265 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:41.265 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:41.265 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:41.265 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:41.265 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:22:41.265 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:43.794 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:43.794 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:22:43.794 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:22:43.794 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:43.794 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:43.794 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:43.794 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:43.794 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:22:43.794 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:43.794 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:22:43.794 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:22:43.794 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:22:43.794 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:22:43.794 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:22:43.794 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:22:43.794 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:43.794 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:43.794 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:43.794 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:43.794 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:43.794 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:43.794 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:43.794 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:43.794 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:43.794 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:43.794 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:43.794 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:43.794 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:43.794 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:43.794 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:43.794 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:43.794 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:43.794 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:43.794 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:43.794 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 
0x159b)' 00:22:43.794 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:43.794 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:43.794 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:43.794 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:43.794 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:43.794 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:43.794 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:43.794 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:43.794 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:43.794 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:43.794 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:43.794 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:43.794 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:43.794 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:43.794 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:43.794 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:43.794 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:43.794 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- 
# for pci in "${pci_devs[@]}" 00:22:43.794 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:43.794 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:43.794 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:43.795 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:43.795 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:43.795 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:43.795 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:43.795 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:43.795 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:43.795 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:43.795 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:43.795 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:43.795 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:43.795 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:43.795 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:43.795 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:43.795 
11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:43.795 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:43.795 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:43.795 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:43.795 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:22:43.795 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:43.795 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:43.795 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:43.795 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:43.795 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:43.795 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:43.795 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:43.795 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:43.795 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:43.795 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:43.795 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:43.795 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:22:43.795 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:43.795 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:43.795 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:43.795 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:43.795 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:43.795 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:43.795 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:43.795 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:43.795 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:43.795 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:43.795 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:43.795 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:43.795 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:43.795 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:22:43.795 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:43.795 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.267 ms 00:22:43.795 00:22:43.795 --- 10.0.0.2 ping statistics --- 00:22:43.795 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:43.795 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:22:43.795 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:43.795 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:43.795 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.092 ms 00:22:43.795 00:22:43.795 --- 10.0.0.1 ping statistics --- 00:22:43.795 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:43.795 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:22:43.795 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:43.795 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:22:43.795 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:43.795 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:43.795 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:43.795 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:43.795 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:43.795 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:43.795 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:43.795 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart 
-m 0x78 00:22:43.795 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:43.795 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:43.795 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:43.795 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=2992703 00:22:43.795 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:22:43.795 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 2992703 00:22:43.795 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 2992703 ']' 00:22:43.795 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:43.795 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:43.795 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:43.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:43.795 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:43.795 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:43.795 [2024-11-18 11:51:09.377589] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:22:43.795 [2024-11-18 11:51:09.377728] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:22:43.795 [2024-11-18 11:51:09.553758] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:44.053 [2024-11-18 11:51:09.705133] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:44.053 [2024-11-18 11:51:09.705215] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:44.053 [2024-11-18 11:51:09.705241] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:44.053 [2024-11-18 11:51:09.705266] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:44.053 [2024-11-18 11:51:09.705286] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:44.053 [2024-11-18 11:51:09.707435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:44.053 [2024-11-18 11:51:09.707500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:22:44.053 [2024-11-18 11:51:09.707543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:44.053 [2024-11-18 11:51:09.707548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:22:44.619 11:51:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:44.619 11:51:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:22:44.619 11:51:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:44.619 11:51:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:44.619 11:51:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:44.619 11:51:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:44.619 11:51:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:44.619 11:51:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.619 11:51:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:44.619 [2024-11-18 11:51:10.364395] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:44.619 11:51:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.619 11:51:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:44.619 11:51:10 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.619 11:51:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:44.619 Malloc0 00:22:44.619 11:51:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.619 11:51:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:44.619 11:51:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.619 11:51:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:44.619 11:51:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.619 11:51:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:44.619 11:51:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.619 11:51:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:44.619 11:51:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.619 11:51:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:44.619 11:51:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.619 11:51:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:44.619 [2024-11-18 11:51:10.455270] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:44.619 11:51:10 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.619 11:51:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:22:44.619 11:51:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:22:44.619 11:51:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:22:44.619 11:51:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:22:44.619 11:51:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:44.619 11:51:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:44.619 { 00:22:44.619 "params": { 00:22:44.619 "name": "Nvme$subsystem", 00:22:44.619 "trtype": "$TEST_TRANSPORT", 00:22:44.619 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:44.619 "adrfam": "ipv4", 00:22:44.619 "trsvcid": "$NVMF_PORT", 00:22:44.619 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:44.619 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:44.619 "hdgst": ${hdgst:-false}, 00:22:44.619 "ddgst": ${ddgst:-false} 00:22:44.619 }, 00:22:44.619 "method": "bdev_nvme_attach_controller" 00:22:44.619 } 00:22:44.619 EOF 00:22:44.619 )") 00:22:44.619 11:51:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:22:44.619 11:51:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
00:22:44.619 11:51:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:22:44.619 11:51:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:44.619 "params": { 00:22:44.619 "name": "Nvme1", 00:22:44.619 "trtype": "tcp", 00:22:44.619 "traddr": "10.0.0.2", 00:22:44.619 "adrfam": "ipv4", 00:22:44.619 "trsvcid": "4420", 00:22:44.619 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:44.619 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:44.619 "hdgst": false, 00:22:44.619 "ddgst": false 00:22:44.619 }, 00:22:44.619 "method": "bdev_nvme_attach_controller" 00:22:44.619 }' 00:22:44.878 [2024-11-18 11:51:10.542149] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:22:44.878 [2024-11-18 11:51:10.542284] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2992859 ] 00:22:44.878 [2024-11-18 11:51:10.696403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:45.135 [2024-11-18 11:51:10.841043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:45.136 [2024-11-18 11:51:10.841083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:45.136 [2024-11-18 11:51:10.841092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:45.702 I/O targets: 00:22:45.702 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:22:45.702 00:22:45.702 00:22:45.702 CUnit - A unit testing framework for C - Version 2.1-3 00:22:45.702 http://cunit.sourceforge.net/ 00:22:45.702 00:22:45.702 00:22:45.702 Suite: bdevio tests on: Nvme1n1 00:22:45.702 Test: blockdev write read block ...passed 00:22:45.702 Test: blockdev write zeroes read block ...passed 00:22:45.702 Test: blockdev write zeroes read no split ...passed 00:22:45.702 Test: blockdev write zeroes 
read split ...passed 00:22:45.702 Test: blockdev write zeroes read split partial ...passed 00:22:45.702 Test: blockdev reset ...[2024-11-18 11:51:11.501544] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:45.702 [2024-11-18 11:51:11.501720] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f1100 (9): Bad file descriptor 00:22:45.702 [2024-11-18 11:51:11.559388] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:22:45.702 passed 00:22:45.702 Test: blockdev write read 8 blocks ...passed 00:22:45.702 Test: blockdev write read size > 128k ...passed 00:22:45.702 Test: blockdev write read invalid size ...passed 00:22:45.960 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:45.960 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:45.960 Test: blockdev write read max offset ...passed 00:22:45.960 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:45.960 Test: blockdev writev readv 8 blocks ...passed 00:22:45.960 Test: blockdev writev readv 30 x 1block ...passed 00:22:45.960 Test: blockdev writev readv block ...passed 00:22:45.960 Test: blockdev writev readv size > 128k ...passed 00:22:46.218 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:46.218 Test: blockdev comparev and writev ...[2024-11-18 11:51:11.859621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:46.218 [2024-11-18 11:51:11.859698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:46.218 [2024-11-18 11:51:11.859748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:46.218 
[2024-11-18 11:51:11.859776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:46.218 [2024-11-18 11:51:11.860234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:46.218 [2024-11-18 11:51:11.860267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:46.218 [2024-11-18 11:51:11.860301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:46.218 [2024-11-18 11:51:11.860326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:46.218 [2024-11-18 11:51:11.860805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:46.218 [2024-11-18 11:51:11.860838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:46.218 [2024-11-18 11:51:11.860878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:46.218 [2024-11-18 11:51:11.860905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:46.218 [2024-11-18 11:51:11.861370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:46.218 [2024-11-18 11:51:11.861402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:46.218 [2024-11-18 11:51:11.861440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:46.218 [2024-11-18 11:51:11.861466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:46.218 passed 00:22:46.218 Test: blockdev nvme passthru rw ...passed 00:22:46.218 Test: blockdev nvme passthru vendor specific ...[2024-11-18 11:51:11.945903] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:46.218 [2024-11-18 11:51:11.945966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:46.218 [2024-11-18 11:51:11.946201] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:46.218 [2024-11-18 11:51:11.946233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:46.218 [2024-11-18 11:51:11.946416] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:46.218 [2024-11-18 11:51:11.946465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:46.218 [2024-11-18 11:51:11.946679] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:46.219 [2024-11-18 11:51:11.946711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:46.219 passed 00:22:46.219 Test: blockdev nvme admin passthru ...passed 00:22:46.219 Test: blockdev copy ...passed 00:22:46.219 00:22:46.219 Run Summary: Type Total Ran Passed Failed Inactive 00:22:46.219 suites 1 1 n/a 0 0 00:22:46.219 tests 23 23 23 0 0 00:22:46.219 asserts 152 152 152 0 n/a 00:22:46.219 00:22:46.219 Elapsed time = 1.398 
seconds 00:22:47.153 11:51:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:47.153 11:51:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.153 11:51:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:47.153 11:51:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.153 11:51:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:22:47.153 11:51:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:22:47.153 11:51:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:47.153 11:51:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:22:47.153 11:51:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:47.153 11:51:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:22:47.153 11:51:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:47.153 11:51:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:47.153 rmmod nvme_tcp 00:22:47.153 rmmod nvme_fabrics 00:22:47.153 rmmod nvme_keyring 00:22:47.153 11:51:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:47.153 11:51:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:22:47.153 11:51:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:22:47.153 11:51:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 2992703 ']' 00:22:47.153 11:51:12 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 2992703 00:22:47.153 11:51:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 2992703 ']' 00:22:47.153 11:51:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 2992703 00:22:47.153 11:51:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:22:47.153 11:51:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:47.153 11:51:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2992703 00:22:47.153 11:51:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:22:47.153 11:51:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:22:47.153 11:51:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2992703' 00:22:47.153 killing process with pid 2992703 00:22:47.153 11:51:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 2992703 00:22:47.153 11:51:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 2992703 00:22:48.089 11:51:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:48.089 11:51:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:48.089 11:51:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:48.089 11:51:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:22:48.089 11:51:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:22:48.089 11:51:13 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:48.089 11:51:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:22:48.089 11:51:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:48.089 11:51:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:48.089 11:51:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:48.089 11:51:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:48.089 11:51:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:49.992 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:49.992 00:22:49.992 real 0m8.723s 00:22:49.992 user 0m19.924s 00:22:49.992 sys 0m2.940s 00:22:49.992 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:49.992 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:49.992 ************************************ 00:22:49.992 END TEST nvmf_bdevio_no_huge 00:22:49.992 ************************************ 00:22:49.992 11:51:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:49.992 11:51:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:49.992 11:51:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:49.992 11:51:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:49.992 
************************************ 00:22:49.992 START TEST nvmf_tls 00:22:49.992 ************************************ 00:22:49.992 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:49.992 * Looking for test storage... 00:22:49.992 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:49.992 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:49.992 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:22:49.992 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:49.992 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:49.992 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:49.992 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:49.992 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:49.992 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:22:49.992 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:22:49.992 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:22:49.992 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:22:49.992 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:22:49.992 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:22:49.992 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:22:49.992 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- 
# local lt=0 gt=0 eq=0 v 00:22:49.992 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:22:49.992 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:22:49.992 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:49.992 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:49.992 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:22:49.992 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:22:49.992 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:49.992 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:22:49.992 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:22:49.992 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:22:49.992 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:22:49.992 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:49.992 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:22:49.992 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:22:49.992 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:49.992 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:49.992 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:22:49.992 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:49.992 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:49.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:49.992 --rc genhtml_branch_coverage=1 00:22:49.992 --rc genhtml_function_coverage=1 00:22:49.992 --rc genhtml_legend=1 00:22:49.992 --rc geninfo_all_blocks=1 00:22:49.992 --rc geninfo_unexecuted_blocks=1 00:22:49.992 00:22:49.992 ' 00:22:49.992 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:49.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:49.992 --rc genhtml_branch_coverage=1 00:22:49.992 --rc genhtml_function_coverage=1 00:22:49.992 --rc genhtml_legend=1 00:22:49.992 --rc geninfo_all_blocks=1 00:22:49.992 --rc geninfo_unexecuted_blocks=1 00:22:49.992 00:22:49.992 ' 00:22:49.992 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:49.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:49.992 --rc genhtml_branch_coverage=1 00:22:49.992 --rc genhtml_function_coverage=1 00:22:49.992 --rc genhtml_legend=1 00:22:49.992 --rc geninfo_all_blocks=1 00:22:49.992 --rc geninfo_unexecuted_blocks=1 00:22:49.992 00:22:49.992 ' 00:22:49.992 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:49.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:49.992 --rc genhtml_branch_coverage=1 00:22:49.992 --rc genhtml_function_coverage=1 00:22:49.992 --rc genhtml_legend=1 00:22:49.992 --rc geninfo_all_blocks=1 00:22:49.992 --rc geninfo_unexecuted_blocks=1 00:22:49.992 00:22:49.992 ' 00:22:49.992 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:49.992 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:22:49.992 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:49.992 
11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:49.992 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:49.992 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:49.992 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:49.992 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:49.992 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:49.992 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:49.992 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:49.992 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:49.992 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:49.992 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:49.993 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:49.993 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:49.993 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:49.993 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:49.993 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:49.993 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 
00:22:49.993 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:49.993 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:49.993 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:49.993 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.993 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.993 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.993 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:22:49.993 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.993 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:22:49.993 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:49.993 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:49.993 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:49.993 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:49.993 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:49.993 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:49.993 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:49.993 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:49.993 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:49.993 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:50.251 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:50.251 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:22:50.251 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:50.251 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:50.251 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:50.251 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:50.251 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:50.251 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:50.251 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:50.251 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:50.251 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:50.251 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:50.251 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@309 -- # xtrace_disable 00:22:50.251 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:52.154 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:52.154 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:22:52.154 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:52.154 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:52.154 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:52.154 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:52.154 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:52.154 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:22:52.154 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:52.154 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:22:52.154 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:22:52.154 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:22:52.154 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:22:52.154 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:22:52.154 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:22:52.154 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:52.154 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:52.154 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:52.154 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:52.154 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:52.154 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:52.154 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:52.154 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:52.154 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:52.154 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:52.154 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:52.154 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:52.154 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:52.154 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:52.154 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:52.154 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:52.154 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:52.154 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:52.154 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:52.154 11:51:17 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:52.154 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:52.154 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:52.154 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:52.154 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:52.154 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:52.154 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:52.154 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:52.154 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:52.154 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:52.154 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:52.154 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:52.154 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:52.154 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:52.154 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:52.154 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:52.154 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:52.154 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:52.154 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:52.154 11:51:17 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:52.154 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:52.154 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:52.154 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:52.154 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:52.154 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:52.154 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:52.154 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:52.154 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:52.154 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:52.154 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:52.154 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:52.154 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:52.154 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:52.154 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:52.154 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:52.154 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:52.154 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:52.154 11:51:17 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:52.154 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:52.154 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:22:52.154 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:52.154 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:52.154 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:52.154 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:52.154 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:52.154 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:52.154 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:52.154 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:52.154 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:52.154 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:52.155 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:52.155 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:52.155 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:52.155 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:52.155 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:52.155 
11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:52.155 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:52.155 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:52.155 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:52.155 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:52.155 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:52.155 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:52.155 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:52.155 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:52.155 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:52.155 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:52.155 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:52.155 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.328 ms 00:22:52.155 00:22:52.155 --- 10.0.0.2 ping statistics --- 00:22:52.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:52.155 rtt min/avg/max/mdev = 0.328/0.328/0.328/0.000 ms 00:22:52.155 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:52.155 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:52.155 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.095 ms 00:22:52.155 00:22:52.155 --- 10.0.0.1 ping statistics --- 00:22:52.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:52.155 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:22:52.155 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:52.155 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:22:52.155 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:52.155 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:52.155 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:52.155 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:52.155 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:52.155 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:52.155 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:52.155 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:22:52.155 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:52.155 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:52.155 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:52.155 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2995142 00:22:52.155 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x2 --wait-for-rpc 00:22:52.155 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2995142 00:22:52.155 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2995142 ']' 00:22:52.155 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:52.155 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:52.155 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:52.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:52.155 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:52.155 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:52.155 [2024-11-18 11:51:18.031515] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:22:52.155 [2024-11-18 11:51:18.031655] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:52.414 [2024-11-18 11:51:18.186704] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:52.671 [2024-11-18 11:51:18.323671] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:52.671 [2024-11-18 11:51:18.323751] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:52.671 [2024-11-18 11:51:18.323776] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:52.671 [2024-11-18 11:51:18.323800] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:52.671 [2024-11-18 11:51:18.323820] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:52.672 [2024-11-18 11:51:18.325400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:53.238 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:53.238 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:53.238 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:53.238 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:53.238 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:53.238 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:53.238 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:22:53.238 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:22:53.507 true 00:22:53.507 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:53.507 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:22:53.772 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:22:53.772 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:22:53.772 
11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:54.068 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:54.068 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:22:54.351 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:22:54.351 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:22:54.351 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:22:54.659 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:54.659 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:22:54.917 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:22:54.917 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:22:54.917 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:54.917 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:22:55.176 11:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:22:55.176 11:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:22:55.176 11:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 
00:22:55.744 11:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:55.744 11:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:22:56.004 11:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:22:56.004 11:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:22:56.004 11:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:22:56.263 11:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:56.263 11:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:22:56.522 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:22:56.522 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:22:56.522 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:22:56.522 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:22:56.522 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:22:56.522 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:56.522 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:22:56.522 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:22:56.522 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:22:56.522 11:51:22 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:56.522 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:22:56.522 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:22:56.522 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:22:56.522 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:56.522 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:22:56.522 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:22:56.522 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:22:56.522 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:56.522 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:22:56.522 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.3XSUKJSDjw 00:22:56.522 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:22:56.522 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.jhs9fwUc5R 00:22:56.522 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:56.522 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:56.522 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.3XSUKJSDjw 00:22:56.522 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@129 -- # chmod 0600 /tmp/tmp.jhs9fwUc5R 00:22:56.522 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:56.780 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:22:57.348 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.3XSUKJSDjw 00:22:57.348 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.3XSUKJSDjw 00:22:57.348 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:57.608 [2024-11-18 11:51:23.484434] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:57.867 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:58.124 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:58.383 [2024-11-18 11:51:24.033973] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:58.383 [2024-11-18 11:51:24.034317] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:58.383 11:51:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:58.641 malloc0 00:22:58.641 11:51:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:58.899 11:51:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.3XSUKJSDjw 00:22:59.156 11:51:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:59.417 11:51:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.3XSUKJSDjw 00:23:11.633 Initializing NVMe Controllers 00:23:11.633 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:11.633 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:11.633 Initialization complete. Launching workers. 
00:23:11.633 ======================================================== 00:23:11.633 Latency(us) 00:23:11.633 Device Information : IOPS MiB/s Average min max 00:23:11.633 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5551.60 21.69 11532.53 2232.40 13256.56 00:23:11.633 ======================================================== 00:23:11.633 Total : 5551.60 21.69 11532.53 2232.40 13256.56 00:23:11.633 00:23:11.633 11:51:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.3XSUKJSDjw 00:23:11.633 11:51:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:11.633 11:51:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:11.633 11:51:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:11.633 11:51:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.3XSUKJSDjw 00:23:11.633 11:51:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:11.633 11:51:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2997225 00:23:11.633 11:51:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:11.633 11:51:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:11.633 11:51:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2997225 /var/tmp/bdevperf.sock 00:23:11.633 11:51:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2997225 ']' 00:23:11.633 11:51:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:23:11.633 11:51:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:11.633 11:51:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:11.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:11.633 11:51:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:11.633 11:51:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:11.633 [2024-11-18 11:51:35.459195] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:23:11.633 [2024-11-18 11:51:35.459337] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2997225 ] 00:23:11.633 [2024-11-18 11:51:35.589415] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:11.633 [2024-11-18 11:51:35.709383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:11.633 11:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:11.633 11:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:11.633 11:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.3XSUKJSDjw 00:23:11.633 11:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:11.633 [2024-11-18 11:51:36.997713] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:11.633 TLSTESTn1 00:23:11.633 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:11.633 Running I/O for 10 seconds... 00:23:13.518 2517.00 IOPS, 9.83 MiB/s [2024-11-18T10:51:40.351Z] 2574.00 IOPS, 10.05 MiB/s [2024-11-18T10:51:41.290Z] 2590.67 IOPS, 10.12 MiB/s [2024-11-18T10:51:42.229Z] 2604.50 IOPS, 10.17 MiB/s [2024-11-18T10:51:43.611Z] 2611.20 IOPS, 10.20 MiB/s [2024-11-18T10:51:44.550Z] 2617.83 IOPS, 10.23 MiB/s [2024-11-18T10:51:45.489Z] 2622.14 IOPS, 10.24 MiB/s [2024-11-18T10:51:46.428Z] 2625.12 IOPS, 10.25 MiB/s [2024-11-18T10:51:47.368Z] 2627.11 IOPS, 10.26 MiB/s [2024-11-18T10:51:47.368Z] 2630.70 IOPS, 10.28 MiB/s 00:23:21.483 Latency(us) 00:23:21.483 [2024-11-18T10:51:47.368Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:21.483 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:21.483 Verification LBA range: start 0x0 length 0x2000 00:23:21.483 TLSTESTn1 : 10.03 2636.49 10.30 0.00 0.00 48460.65 9466.31 56700.78 00:23:21.483 [2024-11-18T10:51:47.368Z] =================================================================================================================== 00:23:21.483 [2024-11-18T10:51:47.368Z] Total : 2636.49 10.30 0.00 0.00 48460.65 9466.31 56700.78 00:23:21.483 { 00:23:21.483 "results": [ 00:23:21.483 { 00:23:21.483 "job": "TLSTESTn1", 00:23:21.483 "core_mask": "0x4", 00:23:21.483 "workload": "verify", 00:23:21.483 "status": "finished", 00:23:21.483 "verify_range": { 00:23:21.483 "start": 0, 00:23:21.483 "length": 8192 00:23:21.483 }, 00:23:21.483 "queue_depth": 128, 00:23:21.483 "io_size": 4096, 00:23:21.483 
"runtime": 10.026588, 00:23:21.483 "iops": 2636.4901001217963, 00:23:21.483 "mibps": 10.298789453600767, 00:23:21.483 "io_failed": 0, 00:23:21.483 "io_timeout": 0, 00:23:21.483 "avg_latency_us": 48460.64659619332, 00:23:21.483 "min_latency_us": 9466.31111111111, 00:23:21.483 "max_latency_us": 56700.776296296295 00:23:21.483 } 00:23:21.483 ], 00:23:21.483 "core_count": 1 00:23:21.483 } 00:23:21.483 11:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:21.483 11:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2997225 00:23:21.483 11:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2997225 ']' 00:23:21.483 11:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2997225 00:23:21.483 11:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:21.483 11:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:21.483 11:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2997225 00:23:21.483 11:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:21.483 11:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:21.483 11:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2997225' 00:23:21.483 killing process with pid 2997225 00:23:21.483 11:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2997225 00:23:21.483 Received shutdown signal, test time was about 10.000000 seconds 00:23:21.483 00:23:21.483 Latency(us) 00:23:21.483 [2024-11-18T10:51:47.368Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:21.483 [2024-11-18T10:51:47.368Z] 
=================================================================================================================== 00:23:21.483 [2024-11-18T10:51:47.368Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:21.483 11:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2997225 00:23:22.419 11:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.jhs9fwUc5R 00:23:22.419 11:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:22.419 11:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.jhs9fwUc5R 00:23:22.419 11:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:22.419 11:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:22.419 11:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:22.419 11:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:22.419 11:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.jhs9fwUc5R 00:23:22.419 11:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:22.419 11:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:22.419 11:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:22.419 11:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.jhs9fwUc5R 00:23:22.419 11:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:22.419 11:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2998681 00:23:22.419 11:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:22.419 11:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:22.419 11:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2998681 /var/tmp/bdevperf.sock 00:23:22.419 11:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2998681 ']' 00:23:22.419 11:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:22.419 11:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:22.419 11:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:22.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:22.419 11:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:22.419 11:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:22.419 [2024-11-18 11:51:48.222347] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:23:22.419 [2024-11-18 11:51:48.222496] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2998681 ] 00:23:22.677 [2024-11-18 11:51:48.357413] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:22.677 [2024-11-18 11:51:48.478099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:23.609 11:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:23.609 11:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:23.609 11:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.jhs9fwUc5R 00:23:23.868 11:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:24.127 [2024-11-18 11:51:49.850334] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:24.127 [2024-11-18 11:51:49.860419] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:24.127 [2024-11-18 11:51:49.861224] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (107): Transport endpoint is not connected 00:23:24.127 [2024-11-18 11:51:49.862204] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:23:24.127 
[2024-11-18 11:51:49.863195] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:23:24.127 [2024-11-18 11:51:49.863242] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:24.127 [2024-11-18 11:51:49.863264] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:23:24.127 [2024-11-18 11:51:49.863294] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:23:24.127 request: 00:23:24.127 { 00:23:24.127 "name": "TLSTEST", 00:23:24.127 "trtype": "tcp", 00:23:24.127 "traddr": "10.0.0.2", 00:23:24.127 "adrfam": "ipv4", 00:23:24.127 "trsvcid": "4420", 00:23:24.127 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:24.127 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:24.127 "prchk_reftag": false, 00:23:24.127 "prchk_guard": false, 00:23:24.127 "hdgst": false, 00:23:24.127 "ddgst": false, 00:23:24.127 "psk": "key0", 00:23:24.127 "allow_unrecognized_csi": false, 00:23:24.127 "method": "bdev_nvme_attach_controller", 00:23:24.127 "req_id": 1 00:23:24.127 } 00:23:24.127 Got JSON-RPC error response 00:23:24.127 response: 00:23:24.127 { 00:23:24.127 "code": -5, 00:23:24.127 "message": "Input/output error" 00:23:24.127 } 00:23:24.127 11:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2998681 00:23:24.127 11:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2998681 ']' 00:23:24.127 11:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2998681 00:23:24.127 11:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:24.127 11:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:24.127 11:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2998681 00:23:24.127 11:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:24.127 11:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:24.127 11:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2998681' 00:23:24.127 killing process with pid 2998681 00:23:24.127 11:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2998681 00:23:24.127 Received shutdown signal, test time was about 10.000000 seconds 00:23:24.127 00:23:24.127 Latency(us) 00:23:24.127 [2024-11-18T10:51:50.012Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:24.127 [2024-11-18T10:51:50.012Z] =================================================================================================================== 00:23:24.127 [2024-11-18T10:51:50.012Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:24.127 11:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2998681 00:23:25.064 11:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:25.064 11:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:25.064 11:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:25.064 11:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:25.064 11:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:25.064 11:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.3XSUKJSDjw 00:23:25.064 11:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 
00:23:25.064 11:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.3XSUKJSDjw 00:23:25.064 11:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:25.064 11:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:25.064 11:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:25.064 11:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:25.064 11:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.3XSUKJSDjw 00:23:25.064 11:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:25.064 11:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:25.064 11:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:23:25.064 11:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.3XSUKJSDjw 00:23:25.064 11:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:25.064 11:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2998955 00:23:25.064 11:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:25.064 11:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:25.064 11:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2998955 
/var/tmp/bdevperf.sock 00:23:25.064 11:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2998955 ']' 00:23:25.064 11:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:25.064 11:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:25.064 11:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:25.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:25.064 11:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:25.064 11:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:25.064 [2024-11-18 11:51:50.825026] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:23:25.064 [2024-11-18 11:51:50.825155] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2998955 ] 00:23:25.324 [2024-11-18 11:51:50.963202] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:25.324 [2024-11-18 11:51:51.083993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:26.263 11:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:26.263 11:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:26.263 11:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.3XSUKJSDjw 00:23:26.550 11:51:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:23:26.550 [2024-11-18 11:51:52.418461] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:26.833 [2024-11-18 11:51:52.428484] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:26.833 [2024-11-18 11:51:52.428545] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:26.833 [2024-11-18 11:51:52.428639] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:23:26.833 [2024-11-18 11:51:52.428690] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (107): Transport endpoint is not connected 00:23:26.833 [2024-11-18 11:51:52.429657] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:23:26.833 [2024-11-18 11:51:52.430660] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:23:26.833 [2024-11-18 11:51:52.430695] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:26.833 [2024-11-18 11:51:52.430721] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:23:26.833 [2024-11-18 11:51:52.430749] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:23:26.833 request: 00:23:26.833 { 00:23:26.833 "name": "TLSTEST", 00:23:26.833 "trtype": "tcp", 00:23:26.833 "traddr": "10.0.0.2", 00:23:26.833 "adrfam": "ipv4", 00:23:26.833 "trsvcid": "4420", 00:23:26.833 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:26.833 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:26.833 "prchk_reftag": false, 00:23:26.833 "prchk_guard": false, 00:23:26.833 "hdgst": false, 00:23:26.833 "ddgst": false, 00:23:26.834 "psk": "key0", 00:23:26.834 "allow_unrecognized_csi": false, 00:23:26.834 "method": "bdev_nvme_attach_controller", 00:23:26.834 "req_id": 1 00:23:26.834 } 00:23:26.834 Got JSON-RPC error response 00:23:26.834 response: 00:23:26.834 { 00:23:26.834 "code": -5, 00:23:26.834 "message": "Input/output error" 00:23:26.834 } 00:23:26.834 11:51:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2998955 00:23:26.834 11:51:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2998955 ']' 00:23:26.834 
11:51:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2998955 00:23:26.834 11:51:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:26.834 11:51:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:26.834 11:51:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2998955 00:23:26.834 11:51:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:26.834 11:51:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:26.834 11:51:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2998955' 00:23:26.834 killing process with pid 2998955 00:23:26.834 11:51:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2998955 00:23:26.834 Received shutdown signal, test time was about 10.000000 seconds 00:23:26.834 00:23:26.834 Latency(us) 00:23:26.834 [2024-11-18T10:51:52.719Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:26.834 [2024-11-18T10:51:52.719Z] =================================================================================================================== 00:23:26.834 [2024-11-18T10:51:52.719Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:26.834 11:51:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2998955 00:23:27.403 11:51:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:27.403 11:51:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:27.403 11:51:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:27.403 11:51:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:27.403 
11:51:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:27.403 11:51:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.3XSUKJSDjw 00:23:27.403 11:51:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:27.403 11:51:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.3XSUKJSDjw 00:23:27.403 11:51:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:27.403 11:51:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:27.403 11:51:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:27.403 11:51:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:27.403 11:51:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.3XSUKJSDjw 00:23:27.403 11:51:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:27.403 11:51:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:23:27.403 11:51:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:27.403 11:51:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.3XSUKJSDjw 00:23:27.403 11:51:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:27.403 11:51:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2999246 00:23:27.403 11:51:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:27.403 11:51:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:27.403 11:51:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2999246 /var/tmp/bdevperf.sock 00:23:27.403 11:51:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2999246 ']' 00:23:27.403 11:51:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:27.403 11:51:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:27.664 11:51:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:27.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:27.664 11:51:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:27.664 11:51:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:27.664 [2024-11-18 11:51:53.380937] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:23:27.664 [2024-11-18 11:51:53.381090] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2999246 ] 00:23:27.664 [2024-11-18 11:51:53.533664] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:27.924 [2024-11-18 11:51:53.665465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:28.862 11:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:28.862 11:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:28.862 11:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.3XSUKJSDjw 00:23:28.862 11:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:29.122 [2024-11-18 11:51:54.985068] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:29.122 [2024-11-18 11:51:54.994833] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:29.122 [2024-11-18 11:51:54.994877] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:29.122 [2024-11-18 11:51:54.994966] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:23:29.122 [2024-11-18 11:51:54.995960] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (107): Transport endpoint is not connected 00:23:29.122 [2024-11-18 11:51:54.996937] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:23:29.122 [2024-11-18 11:51:54.997929] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:23:29.122 [2024-11-18 11:51:54.997976] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:29.122 [2024-11-18 11:51:54.998001] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:23:29.122 [2024-11-18 11:51:54.998032] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 00:23:29.122 request: 00:23:29.122 { 00:23:29.122 "name": "TLSTEST", 00:23:29.122 "trtype": "tcp", 00:23:29.122 "traddr": "10.0.0.2", 00:23:29.122 "adrfam": "ipv4", 00:23:29.122 "trsvcid": "4420", 00:23:29.122 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:29.122 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:29.122 "prchk_reftag": false, 00:23:29.122 "prchk_guard": false, 00:23:29.122 "hdgst": false, 00:23:29.122 "ddgst": false, 00:23:29.122 "psk": "key0", 00:23:29.122 "allow_unrecognized_csi": false, 00:23:29.122 "method": "bdev_nvme_attach_controller", 00:23:29.122 "req_id": 1 00:23:29.122 } 00:23:29.122 Got JSON-RPC error response 00:23:29.122 response: 00:23:29.122 { 00:23:29.123 "code": -5, 00:23:29.123 "message": "Input/output error" 00:23:29.123 } 00:23:29.383 11:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2999246 00:23:29.383 11:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2999246 ']' 00:23:29.383 
11:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2999246 00:23:29.383 11:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:29.383 11:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:29.383 11:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2999246 00:23:29.383 11:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:29.383 11:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:29.383 11:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2999246' 00:23:29.383 killing process with pid 2999246 00:23:29.383 11:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2999246 00:23:29.383 Received shutdown signal, test time was about 10.000000 seconds 00:23:29.383 00:23:29.383 Latency(us) 00:23:29.383 [2024-11-18T10:51:55.268Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:29.383 [2024-11-18T10:51:55.268Z] =================================================================================================================== 00:23:29.383 [2024-11-18T10:51:55.268Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:29.383 11:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2999246 00:23:30.326 11:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:30.326 11:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:30.326 11:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:30.326 11:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:30.326 
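The errno values in the failed attach above are 107 and then 9: the target finds no PSK for the host's identity, aborts the TLS handshake, and the initiator's subsequent flushes first see a disconnected socket (ENOTCONN) and then a closed descriptor (EBADF). A quick check of those mappings (a side note, not part of the test script):

```shell
#!/bin/sh
# The two errno values from the failed bdev_nvme_attach_controller above:
# 107 is ENOTCONN ("Transport endpoint is not connected"), 9 is EBADF
# ("Bad file descriptor") on Linux.
python3 - <<'EOF'
import errno, os
print(errno.ENOTCONN, os.strerror(errno.ENOTCONN))
print(errno.EBADF, os.strerror(errno.EBADF))
EOF
```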
11:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:30.326 11:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:30.326 11:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:30.326 11:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:30.326 11:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:30.326 11:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:30.326 11:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:30.326 11:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:30.326 11:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:30.326 11:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:30.326 11:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:30.326 11:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:30.326 11:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:23:30.326 11:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:30.326 11:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2999636 00:23:30.326 11:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:30.326 11:51:55 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:30.326 11:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2999636 /var/tmp/bdevperf.sock 00:23:30.326 11:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2999636 ']' 00:23:30.326 11:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:30.326 11:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:30.326 11:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:30.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:30.326 11:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:30.326 11:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:30.326 [2024-11-18 11:51:55.972090] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:23:30.326 [2024-11-18 11:51:55.972221] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2999636 ] 00:23:30.326 [2024-11-18 11:51:56.103724] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:30.587 [2024-11-18 11:51:56.224529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:31.154 11:51:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:31.154 11:51:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:31.154 11:51:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:23:31.413 [2024-11-18 11:51:57.191636] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:23:31.413 [2024-11-18 11:51:57.191689] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:31.413 request: 00:23:31.413 { 00:23:31.413 "name": "key0", 00:23:31.413 "path": "", 00:23:31.413 "method": "keyring_file_add_key", 00:23:31.413 "req_id": 1 00:23:31.413 } 00:23:31.413 Got JSON-RPC error response 00:23:31.413 response: 00:23:31.413 { 00:23:31.413 "code": -1, 00:23:31.413 "message": "Operation not permitted" 00:23:31.413 } 00:23:31.413 11:51:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:31.672 [2024-11-18 11:51:57.456469] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:23:31.672 [2024-11-18 11:51:57.456553] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:23:31.672 request: 00:23:31.672 { 00:23:31.672 "name": "TLSTEST", 00:23:31.672 "trtype": "tcp", 00:23:31.672 "traddr": "10.0.0.2", 00:23:31.672 "adrfam": "ipv4", 00:23:31.672 "trsvcid": "4420", 00:23:31.672 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:31.672 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:31.672 "prchk_reftag": false, 00:23:31.672 "prchk_guard": false, 00:23:31.672 "hdgst": false, 00:23:31.672 "ddgst": false, 00:23:31.672 "psk": "key0", 00:23:31.672 "allow_unrecognized_csi": false, 00:23:31.672 "method": "bdev_nvme_attach_controller", 00:23:31.672 "req_id": 1 00:23:31.672 } 00:23:31.672 Got JSON-RPC error response 00:23:31.672 response: 00:23:31.672 { 00:23:31.672 "code": -126, 00:23:31.672 "message": "Required key not available" 00:23:31.672 } 00:23:31.672 11:51:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2999636 00:23:31.672 11:51:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2999636 ']' 00:23:31.672 11:51:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2999636 00:23:31.672 11:51:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:31.672 11:51:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:31.672 11:51:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2999636 00:23:31.672 11:51:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:31.672 11:51:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:31.672 11:51:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2999636' 00:23:31.672 killing process with pid 2999636 
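The "Non-absolute paths are not allowed" failure above comes from the file keyring's path validation: `keyring_file_add_key key0 ''` fails because an empty string is not an absolute path, so the key is never registered and the later attach reports "Required key not available". A minimal sketch of that check (the function name `check_key_path` is ours, not SPDK's):

```shell
#!/bin/sh
# Sketch of the path validation the file keyring performs before adding a key.
# An empty path fails the absolute-path test, which is why
# `keyring_file_add_key key0 ''` returns "Operation not permitted" above.
check_key_path() {
    case "$1" in
        /*) echo "ok: $1" ;;
        *)  echo "error: non-absolute paths are not allowed: '$1'" >&2
            return 1 ;;
    esac
}

check_key_path /tmp/tmp.3XSUKJSDjw   # absolute path: accepted
check_key_path '' || true            # empty path: rejected, as in the log
```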
00:23:31.672 11:51:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2999636 00:23:31.672 Received shutdown signal, test time was about 10.000000 seconds 00:23:31.672 00:23:31.672 Latency(us) 00:23:31.672 [2024-11-18T10:51:57.557Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:31.672 [2024-11-18T10:51:57.557Z] =================================================================================================================== 00:23:31.672 [2024-11-18T10:51:57.557Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:31.672 11:51:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2999636 00:23:32.610 11:51:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:32.610 11:51:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:32.610 11:51:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:32.610 11:51:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:32.610 11:51:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:32.610 11:51:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 2995142 00:23:32.610 11:51:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2995142 ']' 00:23:32.610 11:51:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2995142 00:23:32.610 11:51:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:32.610 11:51:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:32.610 11:51:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2995142 00:23:32.610 11:51:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- 
# process_name=reactor_1 00:23:32.610 11:51:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:32.610 11:51:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2995142' 00:23:32.610 killing process with pid 2995142 00:23:32.610 11:51:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2995142 00:23:32.610 11:51:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2995142 00:23:33.988 11:51:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:23:33.988 11:51:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:23:33.988 11:51:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:23:33.988 11:51:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:23:33.988 11:51:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:23:33.988 11:51:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:23:33.988 11:51:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:23:33.988 11:51:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:33.988 11:51:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:23:33.988 11:51:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.PXOrMVQVpQ 00:23:33.988 11:51:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:33.988 11:51:59 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.PXOrMVQVpQ 00:23:33.988 11:51:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:23:33.988 11:51:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:33.988 11:51:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:33.988 11:51:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:33.988 11:51:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3000055 00:23:33.988 11:51:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:33.988 11:51:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3000055 00:23:33.988 11:51:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3000055 ']' 00:23:33.988 11:51:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:33.988 11:51:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:33.988 11:51:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:33.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:33.988 11:51:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:33.988 11:51:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:33.988 [2024-11-18 11:51:59.712684] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:23:33.988 [2024-11-18 11:51:59.712822] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:33.988 [2024-11-18 11:51:59.865201] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:34.247 [2024-11-18 11:52:00.002575] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:34.247 [2024-11-18 11:52:00.002664] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:34.247 [2024-11-18 11:52:00.002689] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:34.247 [2024-11-18 11:52:00.002714] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:34.247 [2024-11-18 11:52:00.002733] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
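The `format_interchange_psk`/`format_key` helpers invoked above (via inline `python -`) appear to serialize a configured PSK into the TP-8018 interchange form: an `NVMeTLSkey-1:<hh>:` prefix, then base64 of the key bytes with a little-endian CRC32 appended, then a trailing `:`. A sketch under that assumption (the helper below is ours, modeled on the log, not SPDK's actual `nvmf/common.sh`):

```shell
#!/bin/sh
# Sketch: build an NVMe TLS PSK interchange string the way the log's
# format_key helper appears to: base64(key || CRC32(key) little-endian),
# wrapped in an NVMeTLSkey-1:<digest>: prefix and a ':' suffix.
format_key() {
    prefix=$1 key=$2 digest=$3
    python3 - "$prefix" "$key" "$digest" <<'EOF'
import base64, struct, sys, zlib
prefix, key, digest = sys.argv[1], sys.argv[2].encode(), int(sys.argv[3])
payload = key + struct.pack("<I", zlib.crc32(key))
print(f"{prefix}:{digest:02}:{base64.b64encode(payload).decode()}:")
EOF
}

# Same inputs as the log: 48-char hex key, digest id 2 (SHA-384).
format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2
```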
00:23:34.247 [2024-11-18 11:52:00.004259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:35.185 11:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:35.185 11:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:35.185 11:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:35.185 11:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:35.185 11:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:35.185 11:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:35.185 11:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.PXOrMVQVpQ 00:23:35.185 11:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.PXOrMVQVpQ 00:23:35.185 11:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:35.185 [2024-11-18 11:52:01.040266] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:35.185 11:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:35.752 11:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:35.752 [2024-11-18 11:52:01.617866] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:35.752 [2024-11-18 11:52:01.618235] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:23:35.752 11:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:36.318 malloc0 00:23:36.318 11:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:36.635 11:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.PXOrMVQVpQ 00:23:36.893 11:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:37.151 11:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.PXOrMVQVpQ 00:23:37.151 11:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:37.151 11:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:37.151 11:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:37.151 11:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.PXOrMVQVpQ 00:23:37.151 11:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:37.151 11:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3000472 00:23:37.151 11:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:37.151 11:52:02 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:37.151 11:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3000472 /var/tmp/bdevperf.sock 00:23:37.151 11:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3000472 ']' 00:23:37.151 11:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:37.151 11:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:37.151 11:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:37.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:37.151 11:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:37.151 11:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:37.151 [2024-11-18 11:52:02.969225] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:23:37.151 [2024-11-18 11:52:02.969363] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3000472 ] 00:23:37.408 [2024-11-18 11:52:03.105311] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:37.409 [2024-11-18 11:52:03.227879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:38.344 11:52:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:38.344 11:52:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:38.344 11:52:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.PXOrMVQVpQ 00:23:38.602 11:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:38.602 [2024-11-18 11:52:04.482414] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:38.861 TLSTESTn1 00:23:38.861 11:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:38.861 Running I/O for 10 seconds... 
00:23:40.812 2563.00 IOPS, 10.01 MiB/s [2024-11-18T10:52:08.076Z] 2612.50 IOPS, 10.21 MiB/s [2024-11-18T10:52:09.015Z] 2615.33 IOPS, 10.22 MiB/s [2024-11-18T10:52:09.954Z] 2626.00 IOPS, 10.26 MiB/s [2024-11-18T10:52:10.892Z] 2626.60 IOPS, 10.26 MiB/s [2024-11-18T10:52:11.830Z] 2626.00 IOPS, 10.26 MiB/s [2024-11-18T10:52:12.766Z] 2627.29 IOPS, 10.26 MiB/s [2024-11-18T10:52:14.149Z] 2631.50 IOPS, 10.28 MiB/s [2024-11-18T10:52:14.719Z] 2633.33 IOPS, 10.29 MiB/s [2024-11-18T10:52:14.979Z] 2635.70 IOPS, 10.30 MiB/s 00:23:49.094 Latency(us) 00:23:49.094 [2024-11-18T10:52:14.979Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:49.094 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:49.094 Verification LBA range: start 0x0 length 0x2000 00:23:49.094 TLSTESTn1 : 10.03 2640.95 10.32 0.00 0.00 48377.39 9077.95 38059.43 00:23:49.094 [2024-11-18T10:52:14.979Z] =================================================================================================================== 00:23:49.094 [2024-11-18T10:52:14.979Z] Total : 2640.95 10.32 0.00 0.00 48377.39 9077.95 38059.43 00:23:49.094 { 00:23:49.094 "results": [ 00:23:49.094 { 00:23:49.094 "job": "TLSTESTn1", 00:23:49.094 "core_mask": "0x4", 00:23:49.094 "workload": "verify", 00:23:49.094 "status": "finished", 00:23:49.094 "verify_range": { 00:23:49.094 "start": 0, 00:23:49.094 "length": 8192 00:23:49.094 }, 00:23:49.094 "queue_depth": 128, 00:23:49.094 "io_size": 4096, 00:23:49.094 "runtime": 10.028591, 00:23:49.094 "iops": 2640.9492619651155, 00:23:49.094 "mibps": 10.316208054551232, 00:23:49.094 "io_failed": 0, 00:23:49.094 "io_timeout": 0, 00:23:49.094 "avg_latency_us": 48377.39132327872, 00:23:49.094 "min_latency_us": 9077.94962962963, 00:23:49.094 "max_latency_us": 38059.42518518519 00:23:49.094 } 00:23:49.094 ], 00:23:49.094 "core_count": 1 00:23:49.094 } 00:23:49.095 11:52:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 
1' SIGINT SIGTERM EXIT 00:23:49.095 11:52:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3000472 00:23:49.095 11:52:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3000472 ']' 00:23:49.095 11:52:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3000472 00:23:49.095 11:52:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:49.095 11:52:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:49.095 11:52:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3000472 00:23:49.095 11:52:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:49.095 11:52:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:49.095 11:52:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3000472' 00:23:49.095 killing process with pid 3000472 00:23:49.095 11:52:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3000472 00:23:49.095 Received shutdown signal, test time was about 10.000000 seconds 00:23:49.095 00:23:49.095 Latency(us) 00:23:49.095 [2024-11-18T10:52:14.980Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:49.095 [2024-11-18T10:52:14.980Z] =================================================================================================================== 00:23:49.095 [2024-11-18T10:52:14.980Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:49.095 11:52:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3000472 00:23:50.035 11:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.PXOrMVQVpQ 00:23:50.035 11:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.PXOrMVQVpQ 00:23:50.035 11:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:50.035 11:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.PXOrMVQVpQ 00:23:50.035 11:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:50.035 11:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:50.035 11:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:50.035 11:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:50.035 11:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.PXOrMVQVpQ 00:23:50.035 11:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:50.035 11:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:50.035 11:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:50.035 11:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.PXOrMVQVpQ 00:23:50.035 11:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:50.035 11:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3001946 00:23:50.035 11:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:50.035 
11:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:50.035 11:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3001946 /var/tmp/bdevperf.sock 00:23:50.035 11:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3001946 ']' 00:23:50.035 11:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:50.035 11:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:50.035 11:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:50.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:50.035 11:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:50.035 11:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:50.035 [2024-11-18 11:52:15.720386] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
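The `chmod 0666` applied to /tmp/tmp.PXOrMVQVpQ above sets up this negative test: the file keyring only accepts key files whose group/other permission bits are clear, so the mode-0666 file is refused ("Invalid permissions for key file ... 0100666" in the error that follows) even though the same file loaded fine at mode 0600 earlier. A small runnable sketch of that mode check (helper name is ours; uses GNU `stat`):

```shell
#!/bin/sh
# Sketch: the keyring accepts a key file only when group/other bits are
# clear, i.e. mode 0600 passes and mode 0666 is rejected, as in the log.
key_mode_ok() {
    mode=$(stat -c '%a' "$1")   # GNU stat: octal permission bits
    case "$mode" in
        *00) echo "accepted: mode $mode" ;;
        *)   echo "rejected: invalid permissions 0100$mode" >&2
             return 1 ;;
    esac
}

keyfile=$(mktemp)
chmod 0600 "$keyfile"
key_mode_ok "$keyfile"          # 600: accepted
chmod 0666 "$keyfile"
key_mode_ok "$keyfile" || true  # 666: rejected, mirroring the failure here
rm -f "$keyfile"
```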
00:23:50.035 [2024-11-18 11:52:15.720539] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3001946 ] 00:23:50.035 [2024-11-18 11:52:15.855425] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:50.294 [2024-11-18 11:52:15.982160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:50.860 11:52:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:50.860 11:52:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:50.860 11:52:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.PXOrMVQVpQ 00:23:51.426 [2024-11-18 11:52:17.031270] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.PXOrMVQVpQ': 0100666 00:23:51.426 [2024-11-18 11:52:17.031337] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:51.426 request: 00:23:51.426 { 00:23:51.426 "name": "key0", 00:23:51.426 "path": "/tmp/tmp.PXOrMVQVpQ", 00:23:51.426 "method": "keyring_file_add_key", 00:23:51.426 "req_id": 1 00:23:51.426 } 00:23:51.426 Got JSON-RPC error response 00:23:51.426 response: 00:23:51.426 { 00:23:51.426 "code": -1, 00:23:51.426 "message": "Operation not permitted" 00:23:51.426 } 00:23:51.426 11:52:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:51.684 [2024-11-18 11:52:17.320184] bdev_nvme_rpc.c: 
514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:51.684 [2024-11-18 11:52:17.320269] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:23:51.684 request: 00:23:51.684 { 00:23:51.684 "name": "TLSTEST", 00:23:51.684 "trtype": "tcp", 00:23:51.684 "traddr": "10.0.0.2", 00:23:51.684 "adrfam": "ipv4", 00:23:51.684 "trsvcid": "4420", 00:23:51.684 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:51.684 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:51.684 "prchk_reftag": false, 00:23:51.684 "prchk_guard": false, 00:23:51.684 "hdgst": false, 00:23:51.684 "ddgst": false, 00:23:51.684 "psk": "key0", 00:23:51.684 "allow_unrecognized_csi": false, 00:23:51.684 "method": "bdev_nvme_attach_controller", 00:23:51.684 "req_id": 1 00:23:51.684 } 00:23:51.684 Got JSON-RPC error response 00:23:51.684 response: 00:23:51.684 { 00:23:51.684 "code": -126, 00:23:51.684 "message": "Required key not available" 00:23:51.684 } 00:23:51.684 11:52:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3001946 00:23:51.684 11:52:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3001946 ']' 00:23:51.684 11:52:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3001946 00:23:51.684 11:52:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:51.684 11:52:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:51.684 11:52:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3001946 00:23:51.684 11:52:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:51.684 11:52:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:51.684 11:52:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 3001946' 00:23:51.684 killing process with pid 3001946 00:23:51.684 11:52:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3001946 00:23:51.684 11:52:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3001946 00:23:51.684 Received shutdown signal, test time was about 10.000000 seconds 00:23:51.684 00:23:51.684 Latency(us) 00:23:51.684 [2024-11-18T10:52:17.569Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:51.684 [2024-11-18T10:52:17.569Z] =================================================================================================================== 00:23:51.684 [2024-11-18T10:52:17.569Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:52.621 11:52:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:52.621 11:52:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:52.621 11:52:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:52.621 11:52:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:52.621 11:52:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:52.621 11:52:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 3000055 00:23:52.621 11:52:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3000055 ']' 00:23:52.621 11:52:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3000055 00:23:52.621 11:52:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:52.621 11:52:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:52.621 11:52:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3000055 00:23:52.621 
11:52:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:52.621 11:52:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:52.621 11:52:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3000055' 00:23:52.621 killing process with pid 3000055 00:23:52.621 11:52:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3000055 00:23:52.621 11:52:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3000055 00:23:53.599 11:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:23:53.599 11:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:53.599 11:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:53.599 11:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:53.599 11:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3002363 00:23:53.599 11:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:53.599 11:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3002363 00:23:53.599 11:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3002363 ']' 00:23:53.599 11:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:53.599 11:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:53.599 11:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:23:53.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:53.599 11:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:53.599 11:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:53.856 [2024-11-18 11:52:19.544796] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:23:53.856 [2024-11-18 11:52:19.544941] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:53.856 [2024-11-18 11:52:19.689067] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:54.115 [2024-11-18 11:52:19.817362] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:54.115 [2024-11-18 11:52:19.817460] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:54.115 [2024-11-18 11:52:19.817486] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:54.115 [2024-11-18 11:52:19.817523] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:54.115 [2024-11-18 11:52:19.817553] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:54.115 [2024-11-18 11:52:19.819206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:54.682 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:54.682 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:54.682 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:54.682 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:54.682 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:54.940 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:54.940 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.PXOrMVQVpQ 00:23:54.940 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:54.940 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.PXOrMVQVpQ 00:23:54.940 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:23:54.940 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:54.940 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:23:54.941 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:54.941 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.PXOrMVQVpQ 00:23:54.941 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.PXOrMVQVpQ 00:23:54.941 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:55.199 [2024-11-18 11:52:20.870269] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:55.199 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:55.457 11:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:55.715 [2024-11-18 11:52:21.463936] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:55.715 [2024-11-18 11:52:21.464295] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:55.715 11:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:55.972 malloc0 00:23:55.972 11:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:56.542 11:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.PXOrMVQVpQ 00:23:56.542 [2024-11-18 11:52:22.380523] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.PXOrMVQVpQ': 0100666 00:23:56.542 [2024-11-18 11:52:22.380611] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:56.542 request: 00:23:56.542 { 00:23:56.542 "name": "key0", 00:23:56.542 "path": "/tmp/tmp.PXOrMVQVpQ", 00:23:56.542 "method": "keyring_file_add_key", 00:23:56.542 "req_id": 1 
00:23:56.542 } 00:23:56.542 Got JSON-RPC error response 00:23:56.542 response: 00:23:56.542 { 00:23:56.542 "code": -1, 00:23:56.542 "message": "Operation not permitted" 00:23:56.542 } 00:23:56.542 11:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:56.801 [2024-11-18 11:52:22.653310] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:23:56.801 [2024-11-18 11:52:22.653421] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:23:56.801 request: 00:23:56.801 { 00:23:56.801 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:56.801 "host": "nqn.2016-06.io.spdk:host1", 00:23:56.801 "psk": "key0", 00:23:56.801 "method": "nvmf_subsystem_add_host", 00:23:56.801 "req_id": 1 00:23:56.801 } 00:23:56.801 Got JSON-RPC error response 00:23:56.801 response: 00:23:56.801 { 00:23:56.801 "code": -32603, 00:23:56.801 "message": "Internal error" 00:23:56.801 } 00:23:56.801 11:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:56.801 11:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:56.801 11:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:56.801 11:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:56.801 11:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 3002363 00:23:56.801 11:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3002363 ']' 00:23:56.801 11:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3002363 00:23:56.801 11:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:56.801 11:52:22 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:56.801 11:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3002363 00:23:57.061 11:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:57.061 11:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:57.061 11:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3002363' 00:23:57.061 killing process with pid 3002363 00:23:57.061 11:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3002363 00:23:57.061 11:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3002363 00:23:57.998 11:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.PXOrMVQVpQ 00:23:57.998 11:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:23:57.998 11:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:57.998 11:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:57.998 11:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:58.256 11:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3002921 00:23:58.256 11:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:58.256 11:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3002921 00:23:58.256 11:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3002921 ']' 00:23:58.256 11:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:58.256 11:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:58.256 11:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:58.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:58.256 11:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:58.256 11:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:58.256 [2024-11-18 11:52:23.988197] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:23:58.256 [2024-11-18 11:52:23.988352] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:58.515 [2024-11-18 11:52:24.156045] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:58.515 [2024-11-18 11:52:24.293938] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:58.515 [2024-11-18 11:52:24.294025] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:58.515 [2024-11-18 11:52:24.294050] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:58.515 [2024-11-18 11:52:24.294075] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:58.515 [2024-11-18 11:52:24.294094] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:58.515 [2024-11-18 11:52:24.295757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:59.448 11:52:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:59.448 11:52:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:59.448 11:52:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:59.448 11:52:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:59.448 11:52:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:59.448 11:52:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:59.448 11:52:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.PXOrMVQVpQ 00:23:59.448 11:52:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.PXOrMVQVpQ 00:23:59.448 11:52:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:59.448 [2024-11-18 11:52:25.242814] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:59.448 11:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:59.705 11:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:59.963 [2024-11-18 11:52:25.784329] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:59.963 [2024-11-18 11:52:25.784690] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:23:59.963 11:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:00.221 malloc0 00:24:00.221 11:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:00.788 11:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.PXOrMVQVpQ 00:24:00.788 11:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:01.046 11:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=3003338 00:24:01.046 11:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:01.046 11:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:01.046 11:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 3003338 /var/tmp/bdevperf.sock 00:24:01.046 11:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3003338 ']' 00:24:01.046 11:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:01.046 11:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:01.046 11:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/bdevperf.sock...' 00:24:01.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:01.046 11:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:01.046 11:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:01.305 [2024-11-18 11:52:26.998655] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:24:01.305 [2024-11-18 11:52:26.998785] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3003338 ] 00:24:01.305 [2024-11-18 11:52:27.133780] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:01.564 [2024-11-18 11:52:27.252749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:02.130 11:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:02.130 11:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:02.130 11:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.PXOrMVQVpQ 00:24:02.387 11:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:02.644 [2024-11-18 11:52:28.468565] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:02.902 TLSTESTn1 00:24:02.902 11:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:24:03.159 11:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:24:03.159 "subsystems": [ 00:24:03.159 { 00:24:03.159 "subsystem": "keyring", 00:24:03.159 "config": [ 00:24:03.159 { 00:24:03.159 "method": "keyring_file_add_key", 00:24:03.159 "params": { 00:24:03.160 "name": "key0", 00:24:03.160 "path": "/tmp/tmp.PXOrMVQVpQ" 00:24:03.160 } 00:24:03.160 } 00:24:03.160 ] 00:24:03.160 }, 00:24:03.160 { 00:24:03.160 "subsystem": "iobuf", 00:24:03.160 "config": [ 00:24:03.160 { 00:24:03.160 "method": "iobuf_set_options", 00:24:03.160 "params": { 00:24:03.160 "small_pool_count": 8192, 00:24:03.160 "large_pool_count": 1024, 00:24:03.160 "small_bufsize": 8192, 00:24:03.160 "large_bufsize": 135168, 00:24:03.160 "enable_numa": false 00:24:03.160 } 00:24:03.160 } 00:24:03.160 ] 00:24:03.160 }, 00:24:03.160 { 00:24:03.160 "subsystem": "sock", 00:24:03.160 "config": [ 00:24:03.160 { 00:24:03.160 "method": "sock_set_default_impl", 00:24:03.160 "params": { 00:24:03.160 "impl_name": "posix" 00:24:03.160 } 00:24:03.160 }, 00:24:03.160 { 00:24:03.160 "method": "sock_impl_set_options", 00:24:03.160 "params": { 00:24:03.160 "impl_name": "ssl", 00:24:03.160 "recv_buf_size": 4096, 00:24:03.160 "send_buf_size": 4096, 00:24:03.160 "enable_recv_pipe": true, 00:24:03.160 "enable_quickack": false, 00:24:03.160 "enable_placement_id": 0, 00:24:03.160 "enable_zerocopy_send_server": true, 00:24:03.160 "enable_zerocopy_send_client": false, 00:24:03.160 "zerocopy_threshold": 0, 00:24:03.160 "tls_version": 0, 00:24:03.160 "enable_ktls": false 00:24:03.160 } 00:24:03.160 }, 00:24:03.160 { 00:24:03.160 "method": "sock_impl_set_options", 00:24:03.160 "params": { 00:24:03.160 "impl_name": "posix", 00:24:03.160 "recv_buf_size": 2097152, 00:24:03.160 "send_buf_size": 2097152, 00:24:03.160 "enable_recv_pipe": true, 00:24:03.160 "enable_quickack": false, 00:24:03.160 "enable_placement_id": 0, 
00:24:03.160 "enable_zerocopy_send_server": true, 00:24:03.160 "enable_zerocopy_send_client": false, 00:24:03.160 "zerocopy_threshold": 0, 00:24:03.160 "tls_version": 0, 00:24:03.160 "enable_ktls": false 00:24:03.160 } 00:24:03.160 } 00:24:03.160 ] 00:24:03.160 }, 00:24:03.160 { 00:24:03.160 "subsystem": "vmd", 00:24:03.160 "config": [] 00:24:03.160 }, 00:24:03.160 { 00:24:03.160 "subsystem": "accel", 00:24:03.160 "config": [ 00:24:03.160 { 00:24:03.160 "method": "accel_set_options", 00:24:03.160 "params": { 00:24:03.160 "small_cache_size": 128, 00:24:03.160 "large_cache_size": 16, 00:24:03.160 "task_count": 2048, 00:24:03.160 "sequence_count": 2048, 00:24:03.160 "buf_count": 2048 00:24:03.160 } 00:24:03.160 } 00:24:03.160 ] 00:24:03.160 }, 00:24:03.160 { 00:24:03.160 "subsystem": "bdev", 00:24:03.160 "config": [ 00:24:03.160 { 00:24:03.160 "method": "bdev_set_options", 00:24:03.160 "params": { 00:24:03.160 "bdev_io_pool_size": 65535, 00:24:03.160 "bdev_io_cache_size": 256, 00:24:03.160 "bdev_auto_examine": true, 00:24:03.160 "iobuf_small_cache_size": 128, 00:24:03.160 "iobuf_large_cache_size": 16 00:24:03.160 } 00:24:03.160 }, 00:24:03.160 { 00:24:03.160 "method": "bdev_raid_set_options", 00:24:03.160 "params": { 00:24:03.160 "process_window_size_kb": 1024, 00:24:03.160 "process_max_bandwidth_mb_sec": 0 00:24:03.160 } 00:24:03.160 }, 00:24:03.160 { 00:24:03.160 "method": "bdev_iscsi_set_options", 00:24:03.160 "params": { 00:24:03.160 "timeout_sec": 30 00:24:03.160 } 00:24:03.160 }, 00:24:03.160 { 00:24:03.160 "method": "bdev_nvme_set_options", 00:24:03.160 "params": { 00:24:03.160 "action_on_timeout": "none", 00:24:03.160 "timeout_us": 0, 00:24:03.160 "timeout_admin_us": 0, 00:24:03.160 "keep_alive_timeout_ms": 10000, 00:24:03.160 "arbitration_burst": 0, 00:24:03.160 "low_priority_weight": 0, 00:24:03.160 "medium_priority_weight": 0, 00:24:03.160 "high_priority_weight": 0, 00:24:03.160 "nvme_adminq_poll_period_us": 10000, 00:24:03.160 "nvme_ioq_poll_period_us": 0, 
00:24:03.160 "io_queue_requests": 0, 00:24:03.160 "delay_cmd_submit": true, 00:24:03.160 "transport_retry_count": 4, 00:24:03.160 "bdev_retry_count": 3, 00:24:03.160 "transport_ack_timeout": 0, 00:24:03.160 "ctrlr_loss_timeout_sec": 0, 00:24:03.160 "reconnect_delay_sec": 0, 00:24:03.160 "fast_io_fail_timeout_sec": 0, 00:24:03.160 "disable_auto_failback": false, 00:24:03.160 "generate_uuids": false, 00:24:03.160 "transport_tos": 0, 00:24:03.160 "nvme_error_stat": false, 00:24:03.160 "rdma_srq_size": 0, 00:24:03.160 "io_path_stat": false, 00:24:03.160 "allow_accel_sequence": false, 00:24:03.160 "rdma_max_cq_size": 0, 00:24:03.160 "rdma_cm_event_timeout_ms": 0, 00:24:03.160 "dhchap_digests": [ 00:24:03.160 "sha256", 00:24:03.160 "sha384", 00:24:03.160 "sha512" 00:24:03.160 ], 00:24:03.160 "dhchap_dhgroups": [ 00:24:03.160 "null", 00:24:03.160 "ffdhe2048", 00:24:03.160 "ffdhe3072", 00:24:03.160 "ffdhe4096", 00:24:03.160 "ffdhe6144", 00:24:03.160 "ffdhe8192" 00:24:03.160 ] 00:24:03.160 } 00:24:03.160 }, 00:24:03.160 { 00:24:03.160 "method": "bdev_nvme_set_hotplug", 00:24:03.160 "params": { 00:24:03.160 "period_us": 100000, 00:24:03.160 "enable": false 00:24:03.160 } 00:24:03.160 }, 00:24:03.160 { 00:24:03.160 "method": "bdev_malloc_create", 00:24:03.160 "params": { 00:24:03.160 "name": "malloc0", 00:24:03.160 "num_blocks": 8192, 00:24:03.160 "block_size": 4096, 00:24:03.160 "physical_block_size": 4096, 00:24:03.160 "uuid": "bb9ccc98-f0dc-47d4-81fe-862f7ee8e3be", 00:24:03.160 "optimal_io_boundary": 0, 00:24:03.160 "md_size": 0, 00:24:03.160 "dif_type": 0, 00:24:03.160 "dif_is_head_of_md": false, 00:24:03.160 "dif_pi_format": 0 00:24:03.160 } 00:24:03.160 }, 00:24:03.160 { 00:24:03.160 "method": "bdev_wait_for_examine" 00:24:03.160 } 00:24:03.160 ] 00:24:03.160 }, 00:24:03.160 { 00:24:03.160 "subsystem": "nbd", 00:24:03.160 "config": [] 00:24:03.160 }, 00:24:03.160 { 00:24:03.160 "subsystem": "scheduler", 00:24:03.160 "config": [ 00:24:03.160 { 00:24:03.160 "method": 
"framework_set_scheduler", 00:24:03.160 "params": { 00:24:03.160 "name": "static" 00:24:03.160 } 00:24:03.160 } 00:24:03.160 ] 00:24:03.160 }, 00:24:03.160 { 00:24:03.160 "subsystem": "nvmf", 00:24:03.160 "config": [ 00:24:03.160 { 00:24:03.160 "method": "nvmf_set_config", 00:24:03.160 "params": { 00:24:03.160 "discovery_filter": "match_any", 00:24:03.160 "admin_cmd_passthru": { 00:24:03.160 "identify_ctrlr": false 00:24:03.160 }, 00:24:03.160 "dhchap_digests": [ 00:24:03.160 "sha256", 00:24:03.160 "sha384", 00:24:03.160 "sha512" 00:24:03.160 ], 00:24:03.160 "dhchap_dhgroups": [ 00:24:03.160 "null", 00:24:03.160 "ffdhe2048", 00:24:03.160 "ffdhe3072", 00:24:03.160 "ffdhe4096", 00:24:03.160 "ffdhe6144", 00:24:03.160 "ffdhe8192" 00:24:03.160 ] 00:24:03.160 } 00:24:03.160 }, 00:24:03.160 { 00:24:03.160 "method": "nvmf_set_max_subsystems", 00:24:03.160 "params": { 00:24:03.160 "max_subsystems": 1024 00:24:03.160 } 00:24:03.160 }, 00:24:03.160 { 00:24:03.160 "method": "nvmf_set_crdt", 00:24:03.160 "params": { 00:24:03.160 "crdt1": 0, 00:24:03.160 "crdt2": 0, 00:24:03.160 "crdt3": 0 00:24:03.160 } 00:24:03.160 }, 00:24:03.160 { 00:24:03.160 "method": "nvmf_create_transport", 00:24:03.161 "params": { 00:24:03.161 "trtype": "TCP", 00:24:03.161 "max_queue_depth": 128, 00:24:03.161 "max_io_qpairs_per_ctrlr": 127, 00:24:03.161 "in_capsule_data_size": 4096, 00:24:03.161 "max_io_size": 131072, 00:24:03.161 "io_unit_size": 131072, 00:24:03.161 "max_aq_depth": 128, 00:24:03.161 "num_shared_buffers": 511, 00:24:03.161 "buf_cache_size": 4294967295, 00:24:03.161 "dif_insert_or_strip": false, 00:24:03.161 "zcopy": false, 00:24:03.161 "c2h_success": false, 00:24:03.161 "sock_priority": 0, 00:24:03.161 "abort_timeout_sec": 1, 00:24:03.161 "ack_timeout": 0, 00:24:03.161 "data_wr_pool_size": 0 00:24:03.161 } 00:24:03.161 }, 00:24:03.161 { 00:24:03.161 "method": "nvmf_create_subsystem", 00:24:03.161 "params": { 00:24:03.161 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:03.161 
"allow_any_host": false, 00:24:03.161 "serial_number": "SPDK00000000000001", 00:24:03.161 "model_number": "SPDK bdev Controller", 00:24:03.161 "max_namespaces": 10, 00:24:03.161 "min_cntlid": 1, 00:24:03.161 "max_cntlid": 65519, 00:24:03.161 "ana_reporting": false 00:24:03.161 } 00:24:03.161 }, 00:24:03.161 { 00:24:03.161 "method": "nvmf_subsystem_add_host", 00:24:03.161 "params": { 00:24:03.161 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:03.161 "host": "nqn.2016-06.io.spdk:host1", 00:24:03.161 "psk": "key0" 00:24:03.161 } 00:24:03.161 }, 00:24:03.161 { 00:24:03.161 "method": "nvmf_subsystem_add_ns", 00:24:03.161 "params": { 00:24:03.161 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:03.161 "namespace": { 00:24:03.161 "nsid": 1, 00:24:03.161 "bdev_name": "malloc0", 00:24:03.161 "nguid": "BB9CCC98F0DC47D481FE862F7EE8E3BE", 00:24:03.161 "uuid": "bb9ccc98-f0dc-47d4-81fe-862f7ee8e3be", 00:24:03.161 "no_auto_visible": false 00:24:03.161 } 00:24:03.161 } 00:24:03.161 }, 00:24:03.161 { 00:24:03.161 "method": "nvmf_subsystem_add_listener", 00:24:03.161 "params": { 00:24:03.161 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:03.161 "listen_address": { 00:24:03.161 "trtype": "TCP", 00:24:03.161 "adrfam": "IPv4", 00:24:03.161 "traddr": "10.0.0.2", 00:24:03.161 "trsvcid": "4420" 00:24:03.161 }, 00:24:03.161 "secure_channel": true 00:24:03.161 } 00:24:03.161 } 00:24:03.161 ] 00:24:03.161 } 00:24:03.161 ] 00:24:03.161 }' 00:24:03.161 11:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:03.419 11:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:24:03.419 "subsystems": [ 00:24:03.419 { 00:24:03.419 "subsystem": "keyring", 00:24:03.419 "config": [ 00:24:03.419 { 00:24:03.419 "method": "keyring_file_add_key", 00:24:03.419 "params": { 00:24:03.419 "name": "key0", 00:24:03.419 "path": "/tmp/tmp.PXOrMVQVpQ" 00:24:03.419 } 
00:24:03.419 } 00:24:03.419 ] 00:24:03.419 }, 00:24:03.419 { 00:24:03.419 "subsystem": "iobuf", 00:24:03.419 "config": [ 00:24:03.419 { 00:24:03.419 "method": "iobuf_set_options", 00:24:03.420 "params": { 00:24:03.420 "small_pool_count": 8192, 00:24:03.420 "large_pool_count": 1024, 00:24:03.420 "small_bufsize": 8192, 00:24:03.420 "large_bufsize": 135168, 00:24:03.420 "enable_numa": false 00:24:03.420 } 00:24:03.420 } 00:24:03.420 ] 00:24:03.420 }, 00:24:03.420 { 00:24:03.420 "subsystem": "sock", 00:24:03.420 "config": [ 00:24:03.420 { 00:24:03.420 "method": "sock_set_default_impl", 00:24:03.420 "params": { 00:24:03.420 "impl_name": "posix" 00:24:03.420 } 00:24:03.420 }, 00:24:03.420 { 00:24:03.420 "method": "sock_impl_set_options", 00:24:03.420 "params": { 00:24:03.420 "impl_name": "ssl", 00:24:03.420 "recv_buf_size": 4096, 00:24:03.420 "send_buf_size": 4096, 00:24:03.420 "enable_recv_pipe": true, 00:24:03.420 "enable_quickack": false, 00:24:03.420 "enable_placement_id": 0, 00:24:03.420 "enable_zerocopy_send_server": true, 00:24:03.420 "enable_zerocopy_send_client": false, 00:24:03.420 "zerocopy_threshold": 0, 00:24:03.420 "tls_version": 0, 00:24:03.420 "enable_ktls": false 00:24:03.420 } 00:24:03.420 }, 00:24:03.420 { 00:24:03.420 "method": "sock_impl_set_options", 00:24:03.420 "params": { 00:24:03.420 "impl_name": "posix", 00:24:03.420 "recv_buf_size": 2097152, 00:24:03.420 "send_buf_size": 2097152, 00:24:03.420 "enable_recv_pipe": true, 00:24:03.420 "enable_quickack": false, 00:24:03.420 "enable_placement_id": 0, 00:24:03.420 "enable_zerocopy_send_server": true, 00:24:03.420 "enable_zerocopy_send_client": false, 00:24:03.420 "zerocopy_threshold": 0, 00:24:03.420 "tls_version": 0, 00:24:03.420 "enable_ktls": false 00:24:03.420 } 00:24:03.420 } 00:24:03.420 ] 00:24:03.420 }, 00:24:03.420 { 00:24:03.420 "subsystem": "vmd", 00:24:03.420 "config": [] 00:24:03.420 }, 00:24:03.420 { 00:24:03.420 "subsystem": "accel", 00:24:03.420 "config": [ 00:24:03.420 { 00:24:03.420 
"method": "accel_set_options", 00:24:03.420 "params": { 00:24:03.420 "small_cache_size": 128, 00:24:03.420 "large_cache_size": 16, 00:24:03.420 "task_count": 2048, 00:24:03.420 "sequence_count": 2048, 00:24:03.420 "buf_count": 2048 00:24:03.420 } 00:24:03.420 } 00:24:03.420 ] 00:24:03.420 }, 00:24:03.420 { 00:24:03.421 "subsystem": "bdev", 00:24:03.421 "config": [ 00:24:03.421 { 00:24:03.421 "method": "bdev_set_options", 00:24:03.421 "params": { 00:24:03.421 "bdev_io_pool_size": 65535, 00:24:03.421 "bdev_io_cache_size": 256, 00:24:03.421 "bdev_auto_examine": true, 00:24:03.421 "iobuf_small_cache_size": 128, 00:24:03.421 "iobuf_large_cache_size": 16 00:24:03.421 } 00:24:03.421 }, 00:24:03.421 { 00:24:03.421 "method": "bdev_raid_set_options", 00:24:03.421 "params": { 00:24:03.421 "process_window_size_kb": 1024, 00:24:03.421 "process_max_bandwidth_mb_sec": 0 00:24:03.421 } 00:24:03.421 }, 00:24:03.421 { 00:24:03.421 "method": "bdev_iscsi_set_options", 00:24:03.421 "params": { 00:24:03.421 "timeout_sec": 30 00:24:03.421 } 00:24:03.421 }, 00:24:03.421 { 00:24:03.421 "method": "bdev_nvme_set_options", 00:24:03.421 "params": { 00:24:03.421 "action_on_timeout": "none", 00:24:03.421 "timeout_us": 0, 00:24:03.421 "timeout_admin_us": 0, 00:24:03.421 "keep_alive_timeout_ms": 10000, 00:24:03.421 "arbitration_burst": 0, 00:24:03.421 "low_priority_weight": 0, 00:24:03.421 "medium_priority_weight": 0, 00:24:03.421 "high_priority_weight": 0, 00:24:03.421 "nvme_adminq_poll_period_us": 10000, 00:24:03.421 "nvme_ioq_poll_period_us": 0, 00:24:03.421 "io_queue_requests": 512, 00:24:03.421 "delay_cmd_submit": true, 00:24:03.421 "transport_retry_count": 4, 00:24:03.421 "bdev_retry_count": 3, 00:24:03.421 "transport_ack_timeout": 0, 00:24:03.421 "ctrlr_loss_timeout_sec": 0, 00:24:03.421 "reconnect_delay_sec": 0, 00:24:03.421 "fast_io_fail_timeout_sec": 0, 00:24:03.421 "disable_auto_failback": false, 00:24:03.421 "generate_uuids": false, 00:24:03.421 "transport_tos": 0, 00:24:03.421 
"nvme_error_stat": false, 00:24:03.421 "rdma_srq_size": 0, 00:24:03.421 "io_path_stat": false, 00:24:03.421 "allow_accel_sequence": false, 00:24:03.421 "rdma_max_cq_size": 0, 00:24:03.421 "rdma_cm_event_timeout_ms": 0, 00:24:03.421 "dhchap_digests": [ 00:24:03.421 "sha256", 00:24:03.421 "sha384", 00:24:03.421 "sha512" 00:24:03.421 ], 00:24:03.421 "dhchap_dhgroups": [ 00:24:03.421 "null", 00:24:03.421 "ffdhe2048", 00:24:03.421 "ffdhe3072", 00:24:03.421 "ffdhe4096", 00:24:03.421 "ffdhe6144", 00:24:03.421 "ffdhe8192" 00:24:03.421 ] 00:24:03.421 } 00:24:03.421 }, 00:24:03.421 { 00:24:03.421 "method": "bdev_nvme_attach_controller", 00:24:03.421 "params": { 00:24:03.421 "name": "TLSTEST", 00:24:03.421 "trtype": "TCP", 00:24:03.421 "adrfam": "IPv4", 00:24:03.422 "traddr": "10.0.0.2", 00:24:03.422 "trsvcid": "4420", 00:24:03.422 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:03.422 "prchk_reftag": false, 00:24:03.422 "prchk_guard": false, 00:24:03.422 "ctrlr_loss_timeout_sec": 0, 00:24:03.422 "reconnect_delay_sec": 0, 00:24:03.422 "fast_io_fail_timeout_sec": 0, 00:24:03.422 "psk": "key0", 00:24:03.422 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:03.422 "hdgst": false, 00:24:03.422 "ddgst": false, 00:24:03.422 "multipath": "multipath" 00:24:03.422 } 00:24:03.422 }, 00:24:03.422 { 00:24:03.422 "method": "bdev_nvme_set_hotplug", 00:24:03.422 "params": { 00:24:03.422 "period_us": 100000, 00:24:03.422 "enable": false 00:24:03.422 } 00:24:03.422 }, 00:24:03.422 { 00:24:03.422 "method": "bdev_wait_for_examine" 00:24:03.422 } 00:24:03.422 ] 00:24:03.422 }, 00:24:03.422 { 00:24:03.422 "subsystem": "nbd", 00:24:03.422 "config": [] 00:24:03.422 } 00:24:03.422 ] 00:24:03.422 }' 00:24:03.422 11:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 3003338 00:24:03.422 11:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3003338 ']' 00:24:03.422 11:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- 
# kill -0 3003338 00:24:03.422 11:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:03.422 11:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:03.422 11:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3003338 00:24:03.682 11:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:03.682 11:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:03.682 11:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3003338' 00:24:03.682 killing process with pid 3003338 00:24:03.682 11:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3003338 00:24:03.682 Received shutdown signal, test time was about 10.000000 seconds 00:24:03.682 00:24:03.682 Latency(us) 00:24:03.682 [2024-11-18T10:52:29.567Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:03.682 [2024-11-18T10:52:29.567Z] =================================================================================================================== 00:24:03.682 [2024-11-18T10:52:29.567Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:03.682 11:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3003338 00:24:04.247 11:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 3002921 00:24:04.247 11:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3002921 ']' 00:24:04.247 11:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3002921 00:24:04.247 11:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:04.247 11:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:04.247 11:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3002921 00:24:04.504 11:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:04.504 11:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:04.504 11:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3002921' 00:24:04.504 killing process with pid 3002921 00:24:04.504 11:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3002921 00:24:04.504 11:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3002921 00:24:05.883 11:52:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:24:05.883 11:52:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:05.883 11:52:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:24:05.883 "subsystems": [ 00:24:05.883 { 00:24:05.883 "subsystem": "keyring", 00:24:05.883 "config": [ 00:24:05.883 { 00:24:05.883 "method": "keyring_file_add_key", 00:24:05.883 "params": { 00:24:05.883 "name": "key0", 00:24:05.883 "path": "/tmp/tmp.PXOrMVQVpQ" 00:24:05.883 } 00:24:05.883 } 00:24:05.883 ] 00:24:05.883 }, 00:24:05.883 { 00:24:05.883 "subsystem": "iobuf", 00:24:05.883 "config": [ 00:24:05.883 { 00:24:05.883 "method": "iobuf_set_options", 00:24:05.883 "params": { 00:24:05.883 "small_pool_count": 8192, 00:24:05.883 "large_pool_count": 1024, 00:24:05.883 "small_bufsize": 8192, 00:24:05.883 "large_bufsize": 135168, 00:24:05.883 "enable_numa": false 00:24:05.883 } 00:24:05.883 } 00:24:05.883 ] 00:24:05.883 }, 00:24:05.883 { 00:24:05.883 "subsystem": "sock", 00:24:05.883 "config": [ 00:24:05.883 { 00:24:05.883 "method": 
"sock_set_default_impl", 00:24:05.883 "params": { 00:24:05.884 "impl_name": "posix" 00:24:05.884 } 00:24:05.884 }, 00:24:05.884 { 00:24:05.884 "method": "sock_impl_set_options", 00:24:05.884 "params": { 00:24:05.884 "impl_name": "ssl", 00:24:05.884 "recv_buf_size": 4096, 00:24:05.884 "send_buf_size": 4096, 00:24:05.884 "enable_recv_pipe": true, 00:24:05.884 "enable_quickack": false, 00:24:05.884 "enable_placement_id": 0, 00:24:05.884 "enable_zerocopy_send_server": true, 00:24:05.884 "enable_zerocopy_send_client": false, 00:24:05.884 "zerocopy_threshold": 0, 00:24:05.884 "tls_version": 0, 00:24:05.884 "enable_ktls": false 00:24:05.884 } 00:24:05.884 }, 00:24:05.884 { 00:24:05.884 "method": "sock_impl_set_options", 00:24:05.884 "params": { 00:24:05.884 "impl_name": "posix", 00:24:05.884 "recv_buf_size": 2097152, 00:24:05.884 "send_buf_size": 2097152, 00:24:05.884 "enable_recv_pipe": true, 00:24:05.884 "enable_quickack": false, 00:24:05.884 "enable_placement_id": 0, 00:24:05.884 "enable_zerocopy_send_server": true, 00:24:05.884 "enable_zerocopy_send_client": false, 00:24:05.884 "zerocopy_threshold": 0, 00:24:05.884 "tls_version": 0, 00:24:05.884 "enable_ktls": false 00:24:05.884 } 00:24:05.884 } 00:24:05.884 ] 00:24:05.884 }, 00:24:05.884 { 00:24:05.884 "subsystem": "vmd", 00:24:05.884 "config": [] 00:24:05.884 }, 00:24:05.884 { 00:24:05.884 "subsystem": "accel", 00:24:05.884 "config": [ 00:24:05.884 { 00:24:05.884 "method": "accel_set_options", 00:24:05.884 "params": { 00:24:05.884 "small_cache_size": 128, 00:24:05.884 "large_cache_size": 16, 00:24:05.884 "task_count": 2048, 00:24:05.884 "sequence_count": 2048, 00:24:05.884 "buf_count": 2048 00:24:05.884 } 00:24:05.884 } 00:24:05.884 ] 00:24:05.884 }, 00:24:05.884 { 00:24:05.884 "subsystem": "bdev", 00:24:05.884 "config": [ 00:24:05.884 { 00:24:05.884 "method": "bdev_set_options", 00:24:05.884 "params": { 00:24:05.884 "bdev_io_pool_size": 65535, 00:24:05.884 "bdev_io_cache_size": 256, 00:24:05.884 
"bdev_auto_examine": true, 00:24:05.884 "iobuf_small_cache_size": 128, 00:24:05.884 "iobuf_large_cache_size": 16 00:24:05.884 } 00:24:05.884 }, 00:24:05.884 { 00:24:05.884 "method": "bdev_raid_set_options", 00:24:05.884 "params": { 00:24:05.884 "process_window_size_kb": 1024, 00:24:05.884 "process_max_bandwidth_mb_sec": 0 00:24:05.884 } 00:24:05.884 }, 00:24:05.884 { 00:24:05.884 "method": "bdev_iscsi_set_options", 00:24:05.884 "params": { 00:24:05.884 "timeout_sec": 30 00:24:05.884 } 00:24:05.884 }, 00:24:05.884 { 00:24:05.884 "method": "bdev_nvme_set_options", 00:24:05.884 "params": { 00:24:05.884 "action_on_timeout": "none", 00:24:05.884 "timeout_us": 0, 00:24:05.884 "timeout_admin_us": 0, 00:24:05.884 "keep_alive_timeout_ms": 10000, 00:24:05.884 "arbitration_burst": 0, 00:24:05.884 "low_priority_weight": 0, 00:24:05.884 "medium_priority_weight": 0, 00:24:05.884 "high_priority_weight": 0, 00:24:05.884 "nvme_adminq_poll_period_us": 10000, 00:24:05.884 "nvme_ioq_poll_period_us": 0, 00:24:05.884 "io_queue_requests": 0, 00:24:05.884 "delay_cmd_submit": true, 00:24:05.884 "transport_retry_count": 4, 00:24:05.884 "bdev_retry_count": 3, 00:24:05.884 "transport_ack_timeout": 0, 00:24:05.884 "ctrlr_loss_timeout_sec": 0, 00:24:05.884 "reconnect_delay_sec": 0, 00:24:05.884 "fast_io_fail_timeout_sec": 0, 00:24:05.884 "disable_auto_failback": false, 00:24:05.884 "generate_uuids": false, 00:24:05.884 "transport_tos": 0, 00:24:05.884 "nvme_error_stat": false, 00:24:05.884 "rdma_srq_size": 0, 00:24:05.884 "io_path_stat": false, 00:24:05.884 "allow_accel_sequence": false, 00:24:05.884 "rdma_max_cq_size": 0, 00:24:05.884 "rdma_cm_event_timeout_ms": 0, 00:24:05.884 11:52:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:05.884 "dhchap_digests": [ 00:24:05.884 "sha256", 00:24:05.884 "sha384", 00:24:05.884 "sha512" 00:24:05.884 ], 00:24:05.884 "dhchap_dhgroups": [ 00:24:05.884 "null", 00:24:05.884 "ffdhe2048", 00:24:05.884 "ffdhe3072", 
00:24:05.884 "ffdhe4096", 00:24:05.884 "ffdhe6144", 00:24:05.884 "ffdhe8192" 00:24:05.884 ] 00:24:05.884 } 00:24:05.884 }, 00:24:05.884 { 00:24:05.884 "method": "bdev_nvme_set_hotplug", 00:24:05.884 "params": { 00:24:05.884 "period_us": 100000, 00:24:05.884 "enable": false 00:24:05.884 } 00:24:05.884 }, 00:24:05.884 { 00:24:05.884 "method": "bdev_malloc_create", 00:24:05.884 "params": { 00:24:05.884 "name": "malloc0", 00:24:05.884 "num_blocks": 8192, 00:24:05.884 "block_size": 4096, 00:24:05.884 "physical_block_size": 4096, 00:24:05.884 "uuid": "bb9ccc98-f0dc-47d4-81fe-862f7ee8e3be", 00:24:05.884 "optimal_io_boundary": 0, 00:24:05.884 "md_size": 0, 00:24:05.884 "dif_type": 0, 00:24:05.884 "dif_is_head_of_md": false, 00:24:05.884 "dif_pi_format": 0 00:24:05.884 } 00:24:05.884 }, 00:24:05.884 { 00:24:05.884 "method": "bdev_wait_for_examine" 00:24:05.884 } 00:24:05.884 ] 00:24:05.884 }, 00:24:05.884 { 00:24:05.884 "subsystem": "nbd", 00:24:05.884 "config": [] 00:24:05.884 }, 00:24:05.884 { 00:24:05.884 "subsystem": "scheduler", 00:24:05.884 "config": [ 00:24:05.884 { 00:24:05.884 "method": "framework_set_scheduler", 00:24:05.884 "params": { 00:24:05.884 "name": "static" 00:24:05.884 } 00:24:05.884 } 00:24:05.884 ] 00:24:05.884 }, 00:24:05.884 { 00:24:05.884 "subsystem": "nvmf", 00:24:05.884 "config": [ 00:24:05.884 { 00:24:05.884 "method": "nvmf_set_config", 00:24:05.884 "params": { 00:24:05.884 "discovery_filter": "match_any", 00:24:05.884 "admin_cmd_passthru": { 00:24:05.884 "identify_ctrlr": false 00:24:05.884 }, 00:24:05.884 "dhchap_digests": [ 00:24:05.884 "sha256", 00:24:05.884 "sha384", 00:24:05.884 "sha512" 00:24:05.884 ], 00:24:05.884 "dhchap_dhgroups": [ 00:24:05.884 "null", 00:24:05.884 "ffdhe2048", 00:24:05.885 "ffdhe3072", 00:24:05.885 "ffdhe4096", 00:24:05.885 "ffdhe6144", 00:24:05.885 "ffdhe8192" 00:24:05.885 ] 00:24:05.885 } 00:24:05.885 }, 00:24:05.885 { 00:24:05.885 "method": "nvmf_set_max_subsystems", 00:24:05.885 "params": { 00:24:05.885 
"max_subsystems": 1024 00:24:05.885 } 00:24:05.885 }, 00:24:05.885 { 00:24:05.885 "method": "nvmf_set_crdt", 00:24:05.885 "params": { 00:24:05.885 "crdt1": 0, 00:24:05.885 "crdt2": 0, 00:24:05.885 "crdt3": 0 00:24:05.885 } 00:24:05.885 }, 00:24:05.885 { 00:24:05.885 "method": "nvmf_create_transport", 00:24:05.885 "params": { 00:24:05.885 "trtype": "TCP", 00:24:05.885 "max_queue_depth": 128, 00:24:05.885 "max_io_qpairs_per_ctrlr": 127, 00:24:05.885 "in_capsule_data_size": 4096, 00:24:05.885 "max_io_size": 131072, 00:24:05.885 "io_unit_size": 131072, 00:24:05.885 "max_aq_depth": 128, 00:24:05.885 "num_shared_buffers": 511, 00:24:05.885 "buf_cache_size": 4294967295, 00:24:05.885 "dif_insert_or_strip": false, 00:24:05.885 "zcopy": false, 00:24:05.885 "c2h_success": false, 00:24:05.885 "sock_priority": 0, 00:24:05.885 "abort_timeout_sec": 1, 00:24:05.885 "ack_timeout": 0, 00:24:05.885 "data_wr_pool_size": 0 00:24:05.885 } 00:24:05.885 }, 00:24:05.885 { 00:24:05.885 "method": "nvmf_create_subsystem", 00:24:05.885 "params": { 00:24:05.885 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:05.885 "allow_any_host": false, 00:24:05.885 "serial_number": "SPDK00000000000001", 00:24:05.885 "model_number": "SPDK bdev Controller", 00:24:05.885 "max_namespaces": 10, 00:24:05.885 "min_cntlid": 1, 00:24:05.885 "max_cntlid": 65519, 00:24:05.885 "ana_reporting": false 00:24:05.885 } 00:24:05.885 }, 00:24:05.885 { 00:24:05.885 "method": "nvmf_subsystem_add_host", 00:24:05.885 "params": { 00:24:05.885 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:05.885 "host": "nqn.2016-06.io.spdk:host1", 00:24:05.885 "psk": "key0" 00:24:05.885 } 00:24:05.885 }, 00:24:05.885 { 00:24:05.885 "method": "nvmf_subsystem_add_ns", 00:24:05.885 "params": { 00:24:05.885 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:05.885 "namespace": { 00:24:05.885 "nsid": 1, 00:24:05.885 "bdev_name": "malloc0", 00:24:05.885 "nguid": "BB9CCC98F0DC47D481FE862F7EE8E3BE", 00:24:05.885 "uuid": "bb9ccc98-f0dc-47d4-81fe-862f7ee8e3be", 
00:24:05.885 "no_auto_visible": false 00:24:05.885 } 00:24:05.885 } 00:24:05.885 }, 00:24:05.885 { 00:24:05.885 "method": "nvmf_subsystem_add_listener", 00:24:05.885 "params": { 00:24:05.885 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:05.885 "listen_address": { 00:24:05.885 "trtype": "TCP", 00:24:05.885 "adrfam": "IPv4", 00:24:05.885 "traddr": "10.0.0.2", 00:24:05.885 "trsvcid": "4420" 00:24:05.885 }, 00:24:05.885 "secure_channel": true 00:24:05.885 } 00:24:05.885 } 00:24:05.885 ] 00:24:05.885 } 00:24:05.885 ] 00:24:05.885 }' 00:24:05.885 11:52:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:05.885 11:52:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3003879 00:24:05.885 11:52:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:24:05.885 11:52:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3003879 00:24:05.885 11:52:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3003879 ']' 00:24:05.885 11:52:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:05.885 11:52:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:05.885 11:52:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:05.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:05.885 11:52:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:05.885 11:52:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:05.885 [2024-11-18 11:52:31.505549] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:24:05.885 [2024-11-18 11:52:31.505714] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:05.885 [2024-11-18 11:52:31.672473] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:06.145 [2024-11-18 11:52:31.806918] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:06.145 [2024-11-18 11:52:31.807019] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:06.145 [2024-11-18 11:52:31.807042] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:06.145 [2024-11-18 11:52:31.807062] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:06.145 [2024-11-18 11:52:31.807079] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:06.145 [2024-11-18 11:52:31.808668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:06.710 [2024-11-18 11:52:32.310660] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:06.710 [2024-11-18 11:52:32.342711] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:06.710 [2024-11-18 11:52:32.343087] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:06.710 11:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:06.710 11:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:06.710 11:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:06.710 11:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:06.710 11:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:06.711 11:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:06.711 11:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=3004040 00:24:06.711 11:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 3004040 /var/tmp/bdevperf.sock 00:24:06.711 11:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3004040 ']' 00:24:06.711 11:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:06.711 11:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:24:06.711 11:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:24:06.711 11:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:24:06.711 "subsystems": [ 00:24:06.711 { 00:24:06.711 "subsystem": "keyring", 00:24:06.711 "config": [ 00:24:06.711 { 00:24:06.711 "method": "keyring_file_add_key", 00:24:06.711 "params": { 00:24:06.711 "name": "key0", 00:24:06.711 "path": "/tmp/tmp.PXOrMVQVpQ" 00:24:06.711 } 00:24:06.711 } 00:24:06.711 ] 00:24:06.711 }, 00:24:06.711 { 00:24:06.711 "subsystem": "iobuf", 00:24:06.711 "config": [ 00:24:06.711 { 00:24:06.711 "method": "iobuf_set_options", 00:24:06.711 "params": { 00:24:06.711 "small_pool_count": 8192, 00:24:06.711 "large_pool_count": 1024, 00:24:06.711 "small_bufsize": 8192, 00:24:06.711 "large_bufsize": 135168, 00:24:06.711 "enable_numa": false 00:24:06.711 } 00:24:06.711 } 00:24:06.711 ] 00:24:06.711 }, 00:24:06.711 { 00:24:06.711 "subsystem": "sock", 00:24:06.711 "config": [ 00:24:06.711 { 00:24:06.711 "method": "sock_set_default_impl", 00:24:06.711 "params": { 00:24:06.711 "impl_name": "posix" 00:24:06.711 } 00:24:06.711 }, 00:24:06.711 { 00:24:06.711 "method": "sock_impl_set_options", 00:24:06.711 "params": { 00:24:06.711 "impl_name": "ssl", 00:24:06.711 "recv_buf_size": 4096, 00:24:06.711 "send_buf_size": 4096, 00:24:06.711 "enable_recv_pipe": true, 00:24:06.711 "enable_quickack": false, 00:24:06.711 "enable_placement_id": 0, 00:24:06.711 "enable_zerocopy_send_server": true, 00:24:06.711 "enable_zerocopy_send_client": false, 00:24:06.711 "zerocopy_threshold": 0, 00:24:06.711 "tls_version": 0, 00:24:06.711 "enable_ktls": false 00:24:06.711 } 00:24:06.711 }, 00:24:06.711 { 00:24:06.711 "method": "sock_impl_set_options", 00:24:06.711 "params": { 00:24:06.711 "impl_name": "posix", 00:24:06.711 "recv_buf_size": 2097152, 00:24:06.711 "send_buf_size": 2097152, 00:24:06.711 "enable_recv_pipe": true, 00:24:06.711 "enable_quickack": false, 00:24:06.711 "enable_placement_id": 0, 00:24:06.711 "enable_zerocopy_send_server": true, 00:24:06.711 
"enable_zerocopy_send_client": false, 00:24:06.711 "zerocopy_threshold": 0, 00:24:06.711 "tls_version": 0, 00:24:06.711 "enable_ktls": false 00:24:06.711 } 00:24:06.711 } 00:24:06.711 ] 00:24:06.711 }, 00:24:06.711 { 00:24:06.711 "subsystem": "vmd", 00:24:06.711 "config": [] 00:24:06.711 }, 00:24:06.711 { 00:24:06.711 "subsystem": "accel", 00:24:06.711 "config": [ 00:24:06.711 { 00:24:06.711 "method": "accel_set_options", 00:24:06.711 "params": { 00:24:06.711 "small_cache_size": 128, 00:24:06.711 "large_cache_size": 16, 00:24:06.711 "task_count": 2048, 00:24:06.711 "sequence_count": 2048, 00:24:06.711 "buf_count": 2048 00:24:06.711 } 00:24:06.711 } 00:24:06.711 ] 00:24:06.711 }, 00:24:06.711 { 00:24:06.711 "subsystem": "bdev", 00:24:06.711 "config": [ 00:24:06.711 { 00:24:06.711 "method": "bdev_set_options", 00:24:06.711 "params": { 00:24:06.711 "bdev_io_pool_size": 65535, 00:24:06.711 "bdev_io_cache_size": 256, 00:24:06.711 "bdev_auto_examine": true, 00:24:06.711 "iobuf_small_cache_size": 128, 00:24:06.711 "iobuf_large_cache_size": 16 00:24:06.711 } 00:24:06.711 }, 00:24:06.711 { 00:24:06.711 "method": "bdev_raid_set_options", 00:24:06.711 "params": { 00:24:06.711 "process_window_size_kb": 1024, 00:24:06.711 "process_max_bandwidth_mb_sec": 0 00:24:06.711 } 00:24:06.711 }, 00:24:06.711 { 00:24:06.711 "method": "bdev_iscsi_set_options", 00:24:06.711 "params": { 00:24:06.711 "timeout_sec": 30 00:24:06.711 } 00:24:06.711 }, 00:24:06.711 { 00:24:06.711 "method": "bdev_nvme_set_options", 00:24:06.711 "params": { 00:24:06.711 "action_on_timeout": "none", 00:24:06.711 "timeout_us": 0, 00:24:06.711 "timeout_admin_us": 0, 00:24:06.711 "keep_alive_timeout_ms": 10000, 00:24:06.711 "arbitration_burst": 0, 00:24:06.711 "low_priority_weight": 0, 00:24:06.711 "medium_priority_weight": 0, 00:24:06.711 "high_priority_weight": 0, 00:24:06.711 "nvme_adminq_poll_period_us": 10000, 00:24:06.711 "nvme_ioq_poll_period_us": 0, 00:24:06.711 "io_queue_requests": 512, 00:24:06.711 
"delay_cmd_submit": true, 00:24:06.711 "transport_retry_count": 4, 00:24:06.711 "bdev_retry_count": 3, 00:24:06.711 "transport_ack_timeout": 0, 00:24:06.711 "ctrlr_loss_timeout_sec": 0, 00:24:06.711 "reconnect_delay_sec": 0, 00:24:06.711 "fast_io_fail_timeout_sec": 0, 00:24:06.711 "disable_auto_failback": false, 00:24:06.711 "generate_uuids": false, 00:24:06.711 "transport_tos": 0, 00:24:06.711 "nvme_error_stat": false, 00:24:06.711 "rdma_srq_size": 0, 00:24:06.711 "io_path_stat": false, 00:24:06.711 "allow_accel_sequence": false, 00:24:06.711 "rdma_max_cq_size": 0, 00:24:06.711 "rdma_cm_event_timeout_ms": 0 11:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:06.711 , 00:24:06.711 "dhchap_digests": [ 00:24:06.711 "sha256", 00:24:06.711 "sha384", 00:24:06.711 "sha512" 00:24:06.711 ], 00:24:06.711 "dhchap_dhgroups": [ 00:24:06.711 "null", 00:24:06.711 "ffdhe2048", 00:24:06.711 "ffdhe3072", 00:24:06.711 "ffdhe4096", 00:24:06.711 "ffdhe6144", 00:24:06.711 "ffdhe8192" 00:24:06.711 ] 00:24:06.711 } 00:24:06.711 }, 00:24:06.711 { 00:24:06.711 "method": "bdev_nvme_attach_controller", 00:24:06.711 "params": { 00:24:06.711 "name": "TLSTEST", 00:24:06.711 "trtype": "TCP", 00:24:06.711 "adrfam": "IPv4", 00:24:06.711 "traddr": "10.0.0.2", 00:24:06.711 "trsvcid": "4420", 00:24:06.711 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:06.711 "prchk_reftag": false, 00:24:06.711 "prchk_guard": false, 00:24:06.711 "ctrlr_loss_timeout_sec": 0, 00:24:06.711 "reconnect_delay_sec": 0, 00:24:06.711 "fast_io_fail_timeout_sec": 0, 00:24:06.711 "psk": "key0", 00:24:06.711 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:06.711 "hdgst": false, 00:24:06.711 "ddgst": false, 00:24:06.711 "multipath": "multipath" 00:24:06.711 } 00:24:06.711 }, 00:24:06.711 { 00:24:06.711 "method": "bdev_nvme_set_hotplug", 00:24:06.711 "params": { 00:24:06.711 "period_us": 100000, 
00:24:06.711 "enable": false 00:24:06.711 } 00:24:06.711 }, 00:24:06.711 { 00:24:06.711 "method": "bdev_wait_for_examine" 00:24:06.711 } 00:24:06.711 ] 00:24:06.711 }, 00:24:06.711 { 00:24:06.711 "subsystem": "nbd", 00:24:06.711 "config": [] 00:24:06.711 } 00:24:06.711 ] 00:24:06.711 }' 00:24:06.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:06.711 11:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:06.711 11:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:06.971 [2024-11-18 11:52:32.604288] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:24:06.971 [2024-11-18 11:52:32.604411] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3004040 ] 00:24:06.971 [2024-11-18 11:52:32.735213] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:06.971 [2024-11-18 11:52:32.853213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:07.540 [2024-11-18 11:52:33.255026] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:07.798 11:52:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:07.798 11:52:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:07.798 11:52:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:08.055 Running I/O for 10 seconds... 
00:24:09.929 2673.00 IOPS, 10.44 MiB/s [2024-11-18T10:52:36.751Z] 2691.00 IOPS, 10.51 MiB/s [2024-11-18T10:52:38.122Z] 2691.33 IOPS, 10.51 MiB/s [2024-11-18T10:52:39.058Z] 2715.25 IOPS, 10.61 MiB/s [2024-11-18T10:52:39.993Z] 2728.00 IOPS, 10.66 MiB/s [2024-11-18T10:52:40.927Z] 2724.83 IOPS, 10.64 MiB/s [2024-11-18T10:52:41.864Z] 2731.43 IOPS, 10.67 MiB/s [2024-11-18T10:52:42.803Z] 2733.12 IOPS, 10.68 MiB/s [2024-11-18T10:52:43.736Z] 2734.33 IOPS, 10.68 MiB/s [2024-11-18T10:52:43.994Z] 2737.70 IOPS, 10.69 MiB/s 00:24:18.109 Latency(us) 00:24:18.109 [2024-11-18T10:52:43.994Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:18.109 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:18.109 Verification LBA range: start 0x0 length 0x2000 00:24:18.109 TLSTESTn1 : 10.03 2742.99 10.71 0.00 0.00 46579.17 7912.87 36117.62 00:24:18.109 [2024-11-18T10:52:43.994Z] =================================================================================================================== 00:24:18.109 [2024-11-18T10:52:43.994Z] Total : 2742.99 10.71 0.00 0.00 46579.17 7912.87 36117.62 00:24:18.109 { 00:24:18.109 "results": [ 00:24:18.109 { 00:24:18.109 "job": "TLSTESTn1", 00:24:18.109 "core_mask": "0x4", 00:24:18.109 "workload": "verify", 00:24:18.109 "status": "finished", 00:24:18.109 "verify_range": { 00:24:18.109 "start": 0, 00:24:18.109 "length": 8192 00:24:18.109 }, 00:24:18.109 "queue_depth": 128, 00:24:18.109 "io_size": 4096, 00:24:18.109 "runtime": 10.026283, 00:24:18.109 "iops": 2742.9905978117713, 00:24:18.109 "mibps": 10.714807022702232, 00:24:18.109 "io_failed": 0, 00:24:18.109 "io_timeout": 0, 00:24:18.109 "avg_latency_us": 46579.16608289768, 00:24:18.109 "min_latency_us": 7912.8651851851855, 00:24:18.109 "max_latency_us": 36117.61777777778 00:24:18.109 } 00:24:18.109 ], 00:24:18.109 "core_count": 1 00:24:18.109 } 00:24:18.109 11:52:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:24:18.109 11:52:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 3004040 00:24:18.109 11:52:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3004040 ']' 00:24:18.109 11:52:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3004040 00:24:18.109 11:52:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:18.109 11:52:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:18.109 11:52:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3004040 00:24:18.109 11:52:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:18.109 11:52:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:18.109 11:52:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3004040' 00:24:18.109 killing process with pid 3004040 00:24:18.109 11:52:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3004040 00:24:18.109 Received shutdown signal, test time was about 10.000000 seconds 00:24:18.109 00:24:18.109 Latency(us) 00:24:18.109 [2024-11-18T10:52:43.994Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:18.109 [2024-11-18T10:52:43.994Z] =================================================================================================================== 00:24:18.109 [2024-11-18T10:52:43.994Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:18.109 11:52:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3004040 00:24:19.074 11:52:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 3003879 00:24:19.074 11:52:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@954 -- # '[' -z 3003879 ']' 00:24:19.074 11:52:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3003879 00:24:19.074 11:52:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:19.074 11:52:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:19.074 11:52:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3003879 00:24:19.074 11:52:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:19.074 11:52:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:19.074 11:52:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3003879' 00:24:19.074 killing process with pid 3003879 00:24:19.074 11:52:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3003879 00:24:19.074 11:52:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3003879 00:24:20.447 11:52:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:24:20.447 11:52:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:20.447 11:52:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:20.447 11:52:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:20.447 11:52:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3005566 00:24:20.447 11:52:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:20.447 11:52:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3005566 00:24:20.447 
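The `waitforlisten` helper invoked above blocks until the freshly started app accepts connections on its UNIX-domain RPC socket (`/var/tmp/spdk.sock` here), retrying up to `max_retries` times. A rough Python rendering of that idea (this is our sketch of the pattern, not the harness's actual shell implementation):

```python
# Sketch of the waitforlisten idea: poll until something is accepting
# connections on the RPC UNIX socket, or give up after a timeout.
# Our illustration only -- the real helper lives in autotest_common.sh.
import socket
import time

def wait_for_unix_socket(path: str, timeout: float = 5.0,
                         interval: float = 0.1) -> bool:
    """Return True once connect() to the UNIX socket succeeds, False on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(path)
            return True  # something is listening
        except OSError:
            time.sleep(interval)  # not up yet; retry
        finally:
            s.close()
    return False
```

The shell version additionally checks that the target PID is still alive between retries, so a crashed daemon fails fast instead of burning the whole timeout.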
11:52:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3005566 ']' 00:24:20.447 11:52:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:20.447 11:52:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:20.447 11:52:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:20.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:20.447 11:52:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:20.447 11:52:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:20.447 [2024-11-18 11:52:46.060564] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:24:20.447 [2024-11-18 11:52:46.060703] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:20.447 [2024-11-18 11:52:46.207646] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:20.705 [2024-11-18 11:52:46.343954] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:20.705 [2024-11-18 11:52:46.344055] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:20.705 [2024-11-18 11:52:46.344081] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:20.705 [2024-11-18 11:52:46.344105] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:24:20.705 [2024-11-18 11:52:46.344125] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:20.705 [2024-11-18 11:52:46.345762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:21.271 11:52:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:21.271 11:52:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:21.271 11:52:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:21.271 11:52:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:21.271 11:52:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:21.271 11:52:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:21.271 11:52:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.PXOrMVQVpQ 00:24:21.271 11:52:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.PXOrMVQVpQ 00:24:21.271 11:52:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:21.533 [2024-11-18 11:52:47.356209] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:21.533 11:52:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:21.791 11:52:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:22.050 [2024-11-18 11:52:47.901744] tcp.c:1031:nvmf_tcp_listen: 
*NOTICE*: TLS support is considered experimental 00:24:22.050 [2024-11-18 11:52:47.902133] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:22.050 11:52:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:22.616 malloc0 00:24:22.616 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:22.616 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.PXOrMVQVpQ 00:24:23.183 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:23.183 11:52:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=3005922 00:24:23.183 11:52:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:24:23.183 11:52:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:23.183 11:52:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 3005922 /var/tmp/bdevperf.sock 00:24:23.183 11:52:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3005922 ']' 00:24:23.183 11:52:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:23.183 11:52:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:23.183 
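The `bdev_malloc_create 32 4096` call above requests a 32 MiB RAM-backed bdev with a 4096-byte block size; the target config saved later in this log reports it as `"num_blocks": 8192, "block_size": 4096`. The arithmetic, as a one-liner (the helper name is ours, for illustration only):

```python
# bdev_malloc_create SIZE_MIB BLOCK_SIZE: total bytes divided by block size
# gives the block count the saved config reports. Helper name is ours.

def malloc_num_blocks(size_mib: int, block_size: int) -> int:
    """Number of blocks in a malloc bdev of size_mib MiB."""
    return size_mib * 1024 * 1024 // block_size

print(malloc_num_blocks(32, 4096))  # 8192, as in the saved target config
```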
11:52:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:23.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:23.183 11:52:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:23.183 11:52:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:23.448 [2024-11-18 11:52:49.119177] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:24:23.449 [2024-11-18 11:52:49.119323] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3005922 ] 00:24:23.449 [2024-11-18 11:52:49.261928] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:23.710 [2024-11-18 11:52:49.398645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:24.274 11:52:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:24.274 11:52:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:24.274 11:52:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.PXOrMVQVpQ 00:24:24.532 11:52:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:24.789 [2024-11-18 11:52:50.654332] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is 
considered experimental 00:24:25.047 nvme0n1 00:24:25.047 11:52:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:25.047 Running I/O for 1 seconds... 00:24:26.422 1931.00 IOPS, 7.54 MiB/s 00:24:26.422 Latency(us) 00:24:26.422 [2024-11-18T10:52:52.307Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:26.422 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:26.422 Verification LBA range: start 0x0 length 0x2000 00:24:26.422 nvme0n1 : 1.03 1999.31 7.81 0.00 0.00 63181.15 13883.92 57865.86 00:24:26.422 [2024-11-18T10:52:52.307Z] =================================================================================================================== 00:24:26.422 [2024-11-18T10:52:52.307Z] Total : 1999.31 7.81 0.00 0.00 63181.15 13883.92 57865.86 00:24:26.422 { 00:24:26.422 "results": [ 00:24:26.422 { 00:24:26.422 "job": "nvme0n1", 00:24:26.422 "core_mask": "0x2", 00:24:26.422 "workload": "verify", 00:24:26.422 "status": "finished", 00:24:26.422 "verify_range": { 00:24:26.422 "start": 0, 00:24:26.422 "length": 8192 00:24:26.422 }, 00:24:26.422 "queue_depth": 128, 00:24:26.422 "io_size": 4096, 00:24:26.422 "runtime": 1.029856, 00:24:26.422 "iops": 1999.3086412080913, 00:24:26.422 "mibps": 7.809799379719107, 00:24:26.422 "io_failed": 0, 00:24:26.422 "io_timeout": 0, 00:24:26.422 "avg_latency_us": 63181.14591981005, 00:24:26.422 "min_latency_us": 13883.922962962963, 00:24:26.422 "max_latency_us": 57865.86074074074 00:24:26.422 } 00:24:26.422 ], 00:24:26.422 "core_count": 1 00:24:26.422 } 00:24:26.422 11:52:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 3005922 00:24:26.422 11:52:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3005922 ']' 00:24:26.422 11:52:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # kill -0 3005922 00:24:26.422 11:52:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:26.422 11:52:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:26.422 11:52:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3005922 00:24:26.422 11:52:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:26.422 11:52:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:26.422 11:52:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3005922' 00:24:26.422 killing process with pid 3005922 00:24:26.422 11:52:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3005922 00:24:26.422 Received shutdown signal, test time was about 1.000000 seconds 00:24:26.422 00:24:26.422 Latency(us) 00:24:26.422 [2024-11-18T10:52:52.307Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:26.422 [2024-11-18T10:52:52.307Z] =================================================================================================================== 00:24:26.422 [2024-11-18T10:52:52.307Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:26.422 11:52:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3005922 00:24:26.989 11:52:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 3005566 00:24:26.989 11:52:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3005566 ']' 00:24:26.989 11:52:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3005566 00:24:26.989 11:52:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:26.989 11:52:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:26.989 11:52:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3005566 00:24:27.246 11:52:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:27.246 11:52:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:27.246 11:52:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3005566' 00:24:27.246 killing process with pid 3005566 00:24:27.246 11:52:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3005566 00:24:27.247 11:52:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3005566 00:24:28.618 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:24:28.618 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:28.618 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:28.618 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:28.618 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3006586 00:24:28.618 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:28.618 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3006586 00:24:28.618 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3006586 ']' 00:24:28.618 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:28.618 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:24:28.618 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:28.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:28.618 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:28.619 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:28.619 [2024-11-18 11:52:54.188556] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:24:28.619 [2024-11-18 11:52:54.188709] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:28.619 [2024-11-18 11:52:54.329891] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:28.619 [2024-11-18 11:52:54.452387] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:28.619 [2024-11-18 11:52:54.452486] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:28.619 [2024-11-18 11:52:54.452517] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:28.619 [2024-11-18 11:52:54.452539] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:28.619 [2024-11-18 11:52:54.452555] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:28.619 [2024-11-18 11:52:54.454051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:29.553 11:52:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:29.553 11:52:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:29.553 11:52:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:29.553 11:52:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:29.553 11:52:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:29.553 11:52:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:29.553 11:52:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:24:29.553 11:52:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.553 11:52:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:29.553 [2024-11-18 11:52:55.220642] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:29.553 malloc0 00:24:29.553 [2024-11-18 11:52:55.282210] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:29.553 [2024-11-18 11:52:55.282611] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:29.553 11:52:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.553 11:52:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=3006737 00:24:29.553 11:52:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 3006737 /var/tmp/bdevperf.sock 00:24:29.553 11:52:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3006737 ']' 00:24:29.553 11:52:55 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:29.553 11:52:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:29.553 11:52:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:24:29.553 11:52:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:29.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:29.553 11:52:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:29.553 11:52:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:29.553 [2024-11-18 11:52:55.395176] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:24:29.553 [2024-11-18 11:52:55.395305] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3006737 ] 00:24:29.811 [2024-11-18 11:52:55.529345] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:29.811 [2024-11-18 11:52:55.658132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:30.744 11:52:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:30.744 11:52:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:30.744 11:52:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.PXOrMVQVpQ 00:24:31.001 11:52:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:31.259 [2024-11-18 11:52:56.894863] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:31.259 nvme0n1 00:24:31.259 11:52:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:31.259 Running I/O for 1 seconds... 
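Each `perform_tests` run ends with a JSON results block like the one that follows: a `"results"` array of per-job entries plus a `"core_count"`. A minimal validator for the fields these excerpts rely on (the field list is taken from the log; the function itself is our sketch, not SPDK code):

```python
# Minimal validator (our sketch) for the bdevperf results JSON shape
# seen in this log. Field names are copied from the log excerpts.
import json

REQUIRED = ("job", "core_mask", "workload", "status",
            "iops", "mibps", "io_failed", "avg_latency_us")

def check_results(doc: dict) -> bool:
    """True when every job entry carries the fields the excerpts show."""
    jobs = doc.get("results", [])
    return bool(jobs) and all(all(k in job for k in REQUIRED) for job in jobs)

sample = json.loads("""{
  "results": [
    {"job": "nvme0n1", "core_mask": "0x2", "workload": "verify",
     "status": "finished", "iops": 2261.78, "mibps": 8.835,
     "io_failed": 0, "avg_latency_us": 55559.26}
  ],
  "core_count": 1
}""")
print(check_results(sample))  # True
```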
00:24:32.630 2229.00 IOPS, 8.71 MiB/s 00:24:32.630 Latency(us) 00:24:32.630 [2024-11-18T10:52:58.515Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:32.630 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:32.630 Verification LBA range: start 0x0 length 0x2000 00:24:32.630 nvme0n1 : 1.04 2261.78 8.84 0.00 0.00 55559.26 8835.22 50098.63 00:24:32.630 [2024-11-18T10:52:58.515Z] =================================================================================================================== 00:24:32.630 [2024-11-18T10:52:58.515Z] Total : 2261.78 8.84 0.00 0.00 55559.26 8835.22 50098.63 00:24:32.630 { 00:24:32.630 "results": [ 00:24:32.630 { 00:24:32.630 "job": "nvme0n1", 00:24:32.630 "core_mask": "0x2", 00:24:32.630 "workload": "verify", 00:24:32.630 "status": "finished", 00:24:32.630 "verify_range": { 00:24:32.630 "start": 0, 00:24:32.630 "length": 8192 00:24:32.630 }, 00:24:32.630 "queue_depth": 128, 00:24:32.630 "io_size": 4096, 00:24:32.630 "runtime": 1.042098, 00:24:32.630 "iops": 2261.78344071287, 00:24:32.630 "mibps": 8.835091565284648, 00:24:32.630 "io_failed": 0, 00:24:32.630 "io_timeout": 0, 00:24:32.630 "avg_latency_us": 55559.264174484204, 00:24:32.630 "min_latency_us": 8835.223703703703, 00:24:32.630 "max_latency_us": 50098.63111111111 00:24:32.630 } 00:24:32.630 ], 00:24:32.630 "core_count": 1 00:24:32.630 } 00:24:32.630 11:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:24:32.630 11:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.630 11:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:32.630 11:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.630 11:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:24:32.630 "subsystems": [ 00:24:32.630 { 00:24:32.630 "subsystem": 
"keyring", 00:24:32.630 "config": [ 00:24:32.630 { 00:24:32.630 "method": "keyring_file_add_key", 00:24:32.630 "params": { 00:24:32.630 "name": "key0", 00:24:32.630 "path": "/tmp/tmp.PXOrMVQVpQ" 00:24:32.630 } 00:24:32.630 } 00:24:32.630 ] 00:24:32.630 }, 00:24:32.630 { 00:24:32.630 "subsystem": "iobuf", 00:24:32.630 "config": [ 00:24:32.630 { 00:24:32.630 "method": "iobuf_set_options", 00:24:32.630 "params": { 00:24:32.630 "small_pool_count": 8192, 00:24:32.630 "large_pool_count": 1024, 00:24:32.630 "small_bufsize": 8192, 00:24:32.630 "large_bufsize": 135168, 00:24:32.630 "enable_numa": false 00:24:32.630 } 00:24:32.630 } 00:24:32.630 ] 00:24:32.630 }, 00:24:32.630 { 00:24:32.630 "subsystem": "sock", 00:24:32.630 "config": [ 00:24:32.630 { 00:24:32.630 "method": "sock_set_default_impl", 00:24:32.630 "params": { 00:24:32.630 "impl_name": "posix" 00:24:32.630 } 00:24:32.630 }, 00:24:32.630 { 00:24:32.630 "method": "sock_impl_set_options", 00:24:32.630 "params": { 00:24:32.630 "impl_name": "ssl", 00:24:32.630 "recv_buf_size": 4096, 00:24:32.630 "send_buf_size": 4096, 00:24:32.630 "enable_recv_pipe": true, 00:24:32.630 "enable_quickack": false, 00:24:32.630 "enable_placement_id": 0, 00:24:32.630 "enable_zerocopy_send_server": true, 00:24:32.630 "enable_zerocopy_send_client": false, 00:24:32.630 "zerocopy_threshold": 0, 00:24:32.630 "tls_version": 0, 00:24:32.630 "enable_ktls": false 00:24:32.630 } 00:24:32.630 }, 00:24:32.630 { 00:24:32.630 "method": "sock_impl_set_options", 00:24:32.630 "params": { 00:24:32.630 "impl_name": "posix", 00:24:32.630 "recv_buf_size": 2097152, 00:24:32.630 "send_buf_size": 2097152, 00:24:32.630 "enable_recv_pipe": true, 00:24:32.630 "enable_quickack": false, 00:24:32.630 "enable_placement_id": 0, 00:24:32.630 "enable_zerocopy_send_server": true, 00:24:32.630 "enable_zerocopy_send_client": false, 00:24:32.630 "zerocopy_threshold": 0, 00:24:32.630 "tls_version": 0, 00:24:32.630 "enable_ktls": false 00:24:32.630 } 00:24:32.630 } 00:24:32.630 
] 00:24:32.630 }, 00:24:32.630 { 00:24:32.630 "subsystem": "vmd", 00:24:32.630 "config": [] 00:24:32.630 }, 00:24:32.630 { 00:24:32.630 "subsystem": "accel", 00:24:32.630 "config": [ 00:24:32.630 { 00:24:32.630 "method": "accel_set_options", 00:24:32.630 "params": { 00:24:32.630 "small_cache_size": 128, 00:24:32.630 "large_cache_size": 16, 00:24:32.630 "task_count": 2048, 00:24:32.630 "sequence_count": 2048, 00:24:32.630 "buf_count": 2048 00:24:32.630 } 00:24:32.630 } 00:24:32.630 ] 00:24:32.630 }, 00:24:32.630 { 00:24:32.630 "subsystem": "bdev", 00:24:32.630 "config": [ 00:24:32.630 { 00:24:32.630 "method": "bdev_set_options", 00:24:32.630 "params": { 00:24:32.630 "bdev_io_pool_size": 65535, 00:24:32.630 "bdev_io_cache_size": 256, 00:24:32.630 "bdev_auto_examine": true, 00:24:32.631 "iobuf_small_cache_size": 128, 00:24:32.631 "iobuf_large_cache_size": 16 00:24:32.631 } 00:24:32.631 }, 00:24:32.631 { 00:24:32.631 "method": "bdev_raid_set_options", 00:24:32.631 "params": { 00:24:32.631 "process_window_size_kb": 1024, 00:24:32.631 "process_max_bandwidth_mb_sec": 0 00:24:32.631 } 00:24:32.631 }, 00:24:32.631 { 00:24:32.631 "method": "bdev_iscsi_set_options", 00:24:32.631 "params": { 00:24:32.631 "timeout_sec": 30 00:24:32.631 } 00:24:32.631 }, 00:24:32.631 { 00:24:32.631 "method": "bdev_nvme_set_options", 00:24:32.631 "params": { 00:24:32.631 "action_on_timeout": "none", 00:24:32.631 "timeout_us": 0, 00:24:32.631 "timeout_admin_us": 0, 00:24:32.631 "keep_alive_timeout_ms": 10000, 00:24:32.631 "arbitration_burst": 0, 00:24:32.631 "low_priority_weight": 0, 00:24:32.631 "medium_priority_weight": 0, 00:24:32.631 "high_priority_weight": 0, 00:24:32.631 "nvme_adminq_poll_period_us": 10000, 00:24:32.631 "nvme_ioq_poll_period_us": 0, 00:24:32.631 "io_queue_requests": 0, 00:24:32.631 "delay_cmd_submit": true, 00:24:32.631 "transport_retry_count": 4, 00:24:32.631 "bdev_retry_count": 3, 00:24:32.631 "transport_ack_timeout": 0, 00:24:32.631 "ctrlr_loss_timeout_sec": 0, 
00:24:32.631 "reconnect_delay_sec": 0, 00:24:32.631 "fast_io_fail_timeout_sec": 0, 00:24:32.631 "disable_auto_failback": false, 00:24:32.631 "generate_uuids": false, 00:24:32.631 "transport_tos": 0, 00:24:32.631 "nvme_error_stat": false, 00:24:32.631 "rdma_srq_size": 0, 00:24:32.631 "io_path_stat": false, 00:24:32.631 "allow_accel_sequence": false, 00:24:32.631 "rdma_max_cq_size": 0, 00:24:32.631 "rdma_cm_event_timeout_ms": 0, 00:24:32.631 "dhchap_digests": [ 00:24:32.631 "sha256", 00:24:32.631 "sha384", 00:24:32.631 "sha512" 00:24:32.631 ], 00:24:32.631 "dhchap_dhgroups": [ 00:24:32.631 "null", 00:24:32.631 "ffdhe2048", 00:24:32.631 "ffdhe3072", 00:24:32.631 "ffdhe4096", 00:24:32.631 "ffdhe6144", 00:24:32.631 "ffdhe8192" 00:24:32.631 ] 00:24:32.631 } 00:24:32.631 }, 00:24:32.631 { 00:24:32.631 "method": "bdev_nvme_set_hotplug", 00:24:32.631 "params": { 00:24:32.631 "period_us": 100000, 00:24:32.631 "enable": false 00:24:32.631 } 00:24:32.631 }, 00:24:32.631 { 00:24:32.631 "method": "bdev_malloc_create", 00:24:32.631 "params": { 00:24:32.631 "name": "malloc0", 00:24:32.631 "num_blocks": 8192, 00:24:32.631 "block_size": 4096, 00:24:32.631 "physical_block_size": 4096, 00:24:32.631 "uuid": "7242ffa8-8288-4b1b-afe4-d9acf673e342", 00:24:32.631 "optimal_io_boundary": 0, 00:24:32.631 "md_size": 0, 00:24:32.631 "dif_type": 0, 00:24:32.631 "dif_is_head_of_md": false, 00:24:32.631 "dif_pi_format": 0 00:24:32.631 } 00:24:32.631 }, 00:24:32.631 { 00:24:32.631 "method": "bdev_wait_for_examine" 00:24:32.631 } 00:24:32.631 ] 00:24:32.631 }, 00:24:32.631 { 00:24:32.631 "subsystem": "nbd", 00:24:32.631 "config": [] 00:24:32.631 }, 00:24:32.631 { 00:24:32.631 "subsystem": "scheduler", 00:24:32.631 "config": [ 00:24:32.631 { 00:24:32.631 "method": "framework_set_scheduler", 00:24:32.631 "params": { 00:24:32.631 "name": "static" 00:24:32.631 } 00:24:32.631 } 00:24:32.631 ] 00:24:32.631 }, 00:24:32.631 { 00:24:32.631 "subsystem": "nvmf", 00:24:32.631 "config": [ 00:24:32.631 { 
00:24:32.631 "method": "nvmf_set_config", 00:24:32.631 "params": { 00:24:32.631 "discovery_filter": "match_any", 00:24:32.631 "admin_cmd_passthru": { 00:24:32.631 "identify_ctrlr": false 00:24:32.631 }, 00:24:32.631 "dhchap_digests": [ 00:24:32.631 "sha256", 00:24:32.631 "sha384", 00:24:32.631 "sha512" 00:24:32.631 ], 00:24:32.631 "dhchap_dhgroups": [ 00:24:32.631 "null", 00:24:32.631 "ffdhe2048", 00:24:32.631 "ffdhe3072", 00:24:32.631 "ffdhe4096", 00:24:32.631 "ffdhe6144", 00:24:32.631 "ffdhe8192" 00:24:32.631 ] 00:24:32.631 } 00:24:32.631 }, 00:24:32.631 { 00:24:32.631 "method": "nvmf_set_max_subsystems", 00:24:32.631 "params": { 00:24:32.631 "max_subsystems": 1024 00:24:32.631 } 00:24:32.631 }, 00:24:32.631 { 00:24:32.631 "method": "nvmf_set_crdt", 00:24:32.631 "params": { 00:24:32.631 "crdt1": 0, 00:24:32.631 "crdt2": 0, 00:24:32.631 "crdt3": 0 00:24:32.631 } 00:24:32.631 }, 00:24:32.631 { 00:24:32.631 "method": "nvmf_create_transport", 00:24:32.631 "params": { 00:24:32.631 "trtype": "TCP", 00:24:32.631 "max_queue_depth": 128, 00:24:32.631 "max_io_qpairs_per_ctrlr": 127, 00:24:32.631 "in_capsule_data_size": 4096, 00:24:32.631 "max_io_size": 131072, 00:24:32.631 "io_unit_size": 131072, 00:24:32.631 "max_aq_depth": 128, 00:24:32.631 "num_shared_buffers": 511, 00:24:32.631 "buf_cache_size": 4294967295, 00:24:32.631 "dif_insert_or_strip": false, 00:24:32.631 "zcopy": false, 00:24:32.631 "c2h_success": false, 00:24:32.631 "sock_priority": 0, 00:24:32.631 "abort_timeout_sec": 1, 00:24:32.631 "ack_timeout": 0, 00:24:32.631 "data_wr_pool_size": 0 00:24:32.631 } 00:24:32.631 }, 00:24:32.631 { 00:24:32.631 "method": "nvmf_create_subsystem", 00:24:32.631 "params": { 00:24:32.631 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:32.631 "allow_any_host": false, 00:24:32.631 "serial_number": "00000000000000000000", 00:24:32.631 "model_number": "SPDK bdev Controller", 00:24:32.631 "max_namespaces": 32, 00:24:32.631 "min_cntlid": 1, 00:24:32.631 "max_cntlid": 65519, 00:24:32.631 
"ana_reporting": false 00:24:32.631 } 00:24:32.631 }, 00:24:32.631 { 00:24:32.631 "method": "nvmf_subsystem_add_host", 00:24:32.631 "params": { 00:24:32.631 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:32.631 "host": "nqn.2016-06.io.spdk:host1", 00:24:32.631 "psk": "key0" 00:24:32.631 } 00:24:32.631 }, 00:24:32.631 { 00:24:32.631 "method": "nvmf_subsystem_add_ns", 00:24:32.631 "params": { 00:24:32.631 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:32.631 "namespace": { 00:24:32.631 "nsid": 1, 00:24:32.631 "bdev_name": "malloc0", 00:24:32.631 "nguid": "7242FFA882884B1BAFE4D9ACF673E342", 00:24:32.631 "uuid": "7242ffa8-8288-4b1b-afe4-d9acf673e342", 00:24:32.631 "no_auto_visible": false 00:24:32.631 } 00:24:32.631 } 00:24:32.631 }, 00:24:32.631 { 00:24:32.631 "method": "nvmf_subsystem_add_listener", 00:24:32.631 "params": { 00:24:32.631 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:32.631 "listen_address": { 00:24:32.631 "trtype": "TCP", 00:24:32.631 "adrfam": "IPv4", 00:24:32.631 "traddr": "10.0.0.2", 00:24:32.631 "trsvcid": "4420" 00:24:32.631 }, 00:24:32.631 "secure_channel": false, 00:24:32.631 "sock_impl": "ssl" 00:24:32.631 } 00:24:32.631 } 00:24:32.631 ] 00:24:32.631 } 00:24:32.631 ] 00:24:32.631 }' 00:24:32.631 11:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:32.888 11:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:24:32.888 "subsystems": [ 00:24:32.888 { 00:24:32.889 "subsystem": "keyring", 00:24:32.889 "config": [ 00:24:32.889 { 00:24:32.889 "method": "keyring_file_add_key", 00:24:32.889 "params": { 00:24:32.889 "name": "key0", 00:24:32.889 "path": "/tmp/tmp.PXOrMVQVpQ" 00:24:32.889 } 00:24:32.889 } 00:24:32.889 ] 00:24:32.889 }, 00:24:32.889 { 00:24:32.889 "subsystem": "iobuf", 00:24:32.889 "config": [ 00:24:32.889 { 00:24:32.889 "method": "iobuf_set_options", 00:24:32.889 "params": { 00:24:32.889 
"small_pool_count": 8192, 00:24:32.889 "large_pool_count": 1024, 00:24:32.889 "small_bufsize": 8192, 00:24:32.889 "large_bufsize": 135168, 00:24:32.889 "enable_numa": false 00:24:32.889 } 00:24:32.889 } 00:24:32.889 ] 00:24:32.889 }, 00:24:32.889 { 00:24:32.889 "subsystem": "sock", 00:24:32.889 "config": [ 00:24:32.889 { 00:24:32.889 "method": "sock_set_default_impl", 00:24:32.889 "params": { 00:24:32.889 "impl_name": "posix" 00:24:32.889 } 00:24:32.889 }, 00:24:32.889 { 00:24:32.889 "method": "sock_impl_set_options", 00:24:32.889 "params": { 00:24:32.889 "impl_name": "ssl", 00:24:32.889 "recv_buf_size": 4096, 00:24:32.889 "send_buf_size": 4096, 00:24:32.889 "enable_recv_pipe": true, 00:24:32.889 "enable_quickack": false, 00:24:32.889 "enable_placement_id": 0, 00:24:32.889 "enable_zerocopy_send_server": true, 00:24:32.889 "enable_zerocopy_send_client": false, 00:24:32.889 "zerocopy_threshold": 0, 00:24:32.889 "tls_version": 0, 00:24:32.889 "enable_ktls": false 00:24:32.889 } 00:24:32.889 }, 00:24:32.889 { 00:24:32.889 "method": "sock_impl_set_options", 00:24:32.889 "params": { 00:24:32.889 "impl_name": "posix", 00:24:32.889 "recv_buf_size": 2097152, 00:24:32.889 "send_buf_size": 2097152, 00:24:32.889 "enable_recv_pipe": true, 00:24:32.889 "enable_quickack": false, 00:24:32.889 "enable_placement_id": 0, 00:24:32.889 "enable_zerocopy_send_server": true, 00:24:32.889 "enable_zerocopy_send_client": false, 00:24:32.889 "zerocopy_threshold": 0, 00:24:32.889 "tls_version": 0, 00:24:32.889 "enable_ktls": false 00:24:32.889 } 00:24:32.889 } 00:24:32.889 ] 00:24:32.889 }, 00:24:32.889 { 00:24:32.889 "subsystem": "vmd", 00:24:32.889 "config": [] 00:24:32.889 }, 00:24:32.889 { 00:24:32.889 "subsystem": "accel", 00:24:32.889 "config": [ 00:24:32.889 { 00:24:32.889 "method": "accel_set_options", 00:24:32.889 "params": { 00:24:32.889 "small_cache_size": 128, 00:24:32.889 "large_cache_size": 16, 00:24:32.889 "task_count": 2048, 00:24:32.889 "sequence_count": 2048, 00:24:32.889 
"buf_count": 2048 00:24:32.889 } 00:24:32.889 } 00:24:32.889 ] 00:24:32.889 }, 00:24:32.889 { 00:24:32.889 "subsystem": "bdev", 00:24:32.889 "config": [ 00:24:32.889 { 00:24:32.889 "method": "bdev_set_options", 00:24:32.889 "params": { 00:24:32.889 "bdev_io_pool_size": 65535, 00:24:32.889 "bdev_io_cache_size": 256, 00:24:32.889 "bdev_auto_examine": true, 00:24:32.889 "iobuf_small_cache_size": 128, 00:24:32.889 "iobuf_large_cache_size": 16 00:24:32.889 } 00:24:32.889 }, 00:24:32.889 { 00:24:32.889 "method": "bdev_raid_set_options", 00:24:32.889 "params": { 00:24:32.889 "process_window_size_kb": 1024, 00:24:32.889 "process_max_bandwidth_mb_sec": 0 00:24:32.889 } 00:24:32.889 }, 00:24:32.889 { 00:24:32.889 "method": "bdev_iscsi_set_options", 00:24:32.889 "params": { 00:24:32.889 "timeout_sec": 30 00:24:32.889 } 00:24:32.889 }, 00:24:32.889 { 00:24:32.889 "method": "bdev_nvme_set_options", 00:24:32.889 "params": { 00:24:32.889 "action_on_timeout": "none", 00:24:32.889 "timeout_us": 0, 00:24:32.889 "timeout_admin_us": 0, 00:24:32.889 "keep_alive_timeout_ms": 10000, 00:24:32.889 "arbitration_burst": 0, 00:24:32.889 "low_priority_weight": 0, 00:24:32.889 "medium_priority_weight": 0, 00:24:32.889 "high_priority_weight": 0, 00:24:32.889 "nvme_adminq_poll_period_us": 10000, 00:24:32.889 "nvme_ioq_poll_period_us": 0, 00:24:32.889 "io_queue_requests": 512, 00:24:32.889 "delay_cmd_submit": true, 00:24:32.889 "transport_retry_count": 4, 00:24:32.889 "bdev_retry_count": 3, 00:24:32.889 "transport_ack_timeout": 0, 00:24:32.889 "ctrlr_loss_timeout_sec": 0, 00:24:32.889 "reconnect_delay_sec": 0, 00:24:32.889 "fast_io_fail_timeout_sec": 0, 00:24:32.889 "disable_auto_failback": false, 00:24:32.889 "generate_uuids": false, 00:24:32.889 "transport_tos": 0, 00:24:32.889 "nvme_error_stat": false, 00:24:32.889 "rdma_srq_size": 0, 00:24:32.889 "io_path_stat": false, 00:24:32.889 "allow_accel_sequence": false, 00:24:32.889 "rdma_max_cq_size": 0, 00:24:32.889 "rdma_cm_event_timeout_ms": 0, 
00:24:32.889 "dhchap_digests": [ 00:24:32.889 "sha256", 00:24:32.889 "sha384", 00:24:32.889 "sha512" 00:24:32.889 ], 00:24:32.889 "dhchap_dhgroups": [ 00:24:32.889 "null", 00:24:32.889 "ffdhe2048", 00:24:32.889 "ffdhe3072", 00:24:32.889 "ffdhe4096", 00:24:32.889 "ffdhe6144", 00:24:32.889 "ffdhe8192" 00:24:32.889 ] 00:24:32.889 } 00:24:32.889 }, 00:24:32.889 { 00:24:32.889 "method": "bdev_nvme_attach_controller", 00:24:32.889 "params": { 00:24:32.889 "name": "nvme0", 00:24:32.889 "trtype": "TCP", 00:24:32.889 "adrfam": "IPv4", 00:24:32.889 "traddr": "10.0.0.2", 00:24:32.889 "trsvcid": "4420", 00:24:32.889 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:32.889 "prchk_reftag": false, 00:24:32.889 "prchk_guard": false, 00:24:32.889 "ctrlr_loss_timeout_sec": 0, 00:24:32.889 "reconnect_delay_sec": 0, 00:24:32.889 "fast_io_fail_timeout_sec": 0, 00:24:32.889 "psk": "key0", 00:24:32.889 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:32.889 "hdgst": false, 00:24:32.889 "ddgst": false, 00:24:32.889 "multipath": "multipath" 00:24:32.889 } 00:24:32.889 }, 00:24:32.889 { 00:24:32.889 "method": "bdev_nvme_set_hotplug", 00:24:32.889 "params": { 00:24:32.889 "period_us": 100000, 00:24:32.889 "enable": false 00:24:32.889 } 00:24:32.889 }, 00:24:32.889 { 00:24:32.889 "method": "bdev_enable_histogram", 00:24:32.889 "params": { 00:24:32.889 "name": "nvme0n1", 00:24:32.889 "enable": true 00:24:32.889 } 00:24:32.889 }, 00:24:32.889 { 00:24:32.889 "method": "bdev_wait_for_examine" 00:24:32.889 } 00:24:32.889 ] 00:24:32.889 }, 00:24:32.889 { 00:24:32.889 "subsystem": "nbd", 00:24:32.889 "config": [] 00:24:32.889 } 00:24:32.889 ] 00:24:32.889 }' 00:24:32.889 11:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 3006737 00:24:32.889 11:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3006737 ']' 00:24:32.889 11:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3006737 00:24:32.889 11:52:58 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:32.889 11:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:32.889 11:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3006737 00:24:32.889 11:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:32.889 11:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:32.889 11:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3006737' 00:24:32.890 killing process with pid 3006737 00:24:32.890 11:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3006737 00:24:32.890 Received shutdown signal, test time was about 1.000000 seconds 00:24:32.890 00:24:32.890 Latency(us) 00:24:32.890 [2024-11-18T10:52:58.775Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:32.890 [2024-11-18T10:52:58.775Z] =================================================================================================================== 00:24:32.890 [2024-11-18T10:52:58.775Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:32.890 11:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3006737 00:24:33.820 11:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 3006586 00:24:33.820 11:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3006586 ']' 00:24:33.820 11:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3006586 00:24:33.820 11:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:33.820 11:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:33.820 
11:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3006586 00:24:33.820 11:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:33.820 11:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:33.820 11:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3006586' 00:24:33.820 killing process with pid 3006586 00:24:33.820 11:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3006586 00:24:33.820 11:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3006586 00:24:35.194 11:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:24:35.194 11:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:35.194 11:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:24:35.194 "subsystems": [ 00:24:35.194 { 00:24:35.194 "subsystem": "keyring", 00:24:35.194 "config": [ 00:24:35.194 { 00:24:35.194 "method": "keyring_file_add_key", 00:24:35.194 "params": { 00:24:35.194 "name": "key0", 00:24:35.194 "path": "/tmp/tmp.PXOrMVQVpQ" 00:24:35.194 } 00:24:35.194 } 00:24:35.194 ] 00:24:35.194 }, 00:24:35.194 { 00:24:35.194 "subsystem": "iobuf", 00:24:35.194 "config": [ 00:24:35.194 { 00:24:35.194 "method": "iobuf_set_options", 00:24:35.194 "params": { 00:24:35.194 "small_pool_count": 8192, 00:24:35.194 "large_pool_count": 1024, 00:24:35.194 "small_bufsize": 8192, 00:24:35.194 "large_bufsize": 135168, 00:24:35.194 "enable_numa": false 00:24:35.194 } 00:24:35.194 } 00:24:35.194 ] 00:24:35.194 }, 00:24:35.194 { 00:24:35.194 "subsystem": "sock", 00:24:35.194 "config": [ 00:24:35.194 { 00:24:35.194 "method": "sock_set_default_impl", 00:24:35.194 "params": { 00:24:35.194 "impl_name": "posix" 
00:24:35.194 } 00:24:35.194 }, 00:24:35.194 { 00:24:35.194 "method": "sock_impl_set_options", 00:24:35.194 "params": { 00:24:35.194 "impl_name": "ssl", 00:24:35.194 "recv_buf_size": 4096, 00:24:35.194 "send_buf_size": 4096, 00:24:35.194 "enable_recv_pipe": true, 00:24:35.194 "enable_quickack": false, 00:24:35.194 "enable_placement_id": 0, 00:24:35.194 "enable_zerocopy_send_server": true, 00:24:35.194 "enable_zerocopy_send_client": false, 00:24:35.194 "zerocopy_threshold": 0, 00:24:35.194 "tls_version": 0, 00:24:35.194 "enable_ktls": false 00:24:35.194 } 00:24:35.194 }, 00:24:35.194 { 00:24:35.194 "method": "sock_impl_set_options", 00:24:35.194 "params": { 00:24:35.194 "impl_name": "posix", 00:24:35.194 "recv_buf_size": 2097152, 00:24:35.194 "send_buf_size": 2097152, 00:24:35.195 "enable_recv_pipe": true, 00:24:35.195 "enable_quickack": false, 00:24:35.195 "enable_placement_id": 0, 00:24:35.195 "enable_zerocopy_send_server": true, 00:24:35.195 "enable_zerocopy_send_client": false, 00:24:35.195 "zerocopy_threshold": 0, 00:24:35.195 "tls_version": 0, 00:24:35.195 "enable_ktls": false 00:24:35.195 } 00:24:35.195 } 00:24:35.195 ] 00:24:35.195 }, 00:24:35.195 { 00:24:35.195 "subsystem": "vmd", 00:24:35.195 "config": [] 00:24:35.195 }, 00:24:35.195 { 00:24:35.195 "subsystem": "accel", 00:24:35.195 "config": [ 00:24:35.195 { 00:24:35.195 "method": "accel_set_options", 00:24:35.195 "params": { 00:24:35.195 "small_cache_size": 128, 00:24:35.195 "large_cache_size": 16, 00:24:35.195 "task_count": 2048, 00:24:35.195 "sequence_count": 2048, 00:24:35.195 "buf_count": 2048 00:24:35.195 } 00:24:35.195 } 00:24:35.195 ] 00:24:35.195 }, 00:24:35.195 { 00:24:35.195 "subsystem": "bdev", 00:24:35.195 "config": [ 00:24:35.195 { 00:24:35.195 "method": "bdev_set_options", 00:24:35.195 "params": { 00:24:35.195 "bdev_io_pool_size": 65535, 00:24:35.195 "bdev_io_cache_size": 256, 00:24:35.195 "bdev_auto_examine": true, 00:24:35.195 "iobuf_small_cache_size": 128, 00:24:35.195 
"iobuf_large_cache_size": 16 00:24:35.195 } 00:24:35.195 }, 00:24:35.195 { 00:24:35.195 "method": "bdev_raid_set_options", 00:24:35.195 "params": { 00:24:35.195 "process_window_size_kb": 1024, 00:24:35.195 "process_max_bandwidth_mb_sec": 0 00:24:35.195 } 00:24:35.195 }, 00:24:35.195 { 00:24:35.195 "method": "bdev_iscsi_set_options", 00:24:35.195 "params": { 00:24:35.195 "timeout_sec": 30 00:24:35.195 } 00:24:35.195 }, 00:24:35.195 { 00:24:35.195 "method": "bdev_nvme_set_options", 00:24:35.195 "params": { 00:24:35.195 "action_on_timeout": "none", 00:24:35.195 "timeout_us": 0, 00:24:35.195 "timeout_admin_us": 0, 00:24:35.195 "keep_alive_timeout_ms": 10000, 00:24:35.195 "arbitration_burst": 0, 00:24:35.195 "low_priority_weight": 0, 00:24:35.195 "medium_priority_weight": 0, 00:24:35.195 "high_priority_weight": 0, 00:24:35.195 "nvme_adminq_poll_period_us": 10000, 00:24:35.195 "nvme_ioq_poll_period_us": 0, 00:24:35.195 "io_queue_requests": 0, 00:24:35.195 "delay_cmd_submit": true, 00:24:35.195 "transport_retry_count": 4, 00:24:35.195 "bdev_retry_count": 3, 00:24:35.195 "transport_ack_timeout": 0, 00:24:35.195 "ctrlr_loss_timeout_sec": 0, 00:24:35.195 "reconnect_delay_sec": 0, 00:24:35.195 "fast_io_fail_timeout_sec": 0, 00:24:35.195 "disable_auto_failback": false, 00:24:35.195 "generate_uuids": false, 00:24:35.195 "transport_tos": 0, 00:24:35.195 "nvme_error_stat": false, 00:24:35.195 "rdma_srq_size": 0, 00:24:35.195 "io_path_stat": false, 00:24:35.195 "allow_accel_sequence": false, 00:24:35.195 "rdma_max_cq_size": 0, 00:24:35.195 "rdma_cm_event_timeout_ms": 0, 00:24:35.195 "dhchap_digests": [ 00:24:35.195 "sha256", 00:24:35.195 "sha384", 00:24:35.195 "sha512" 00:24:35.195 ], 00:24:35.195 "dhchap_dhgroups": [ 00:24:35.195 "null", 00:24:35.195 "ffdhe2048", 00:24:35.195 "ffdhe3072", 00:24:35.195 "ffdhe4096", 00:24:35.195 "ffdhe6144", 00:24:35.195 "ffdhe8192" 00:24:35.195 ] 00:24:35.195 } 00:24:35.195 }, 00:24:35.195 { 00:24:35.195 "method": "bdev_nvme_set_hotplug", 
00:24:35.195 "params": { 00:24:35.195 "period_us": 100000, 00:24:35.195 "enable": false 00:24:35.195 } 00:24:35.195 }, 00:24:35.195 { 00:24:35.195 "method": "bdev_malloc_create", 00:24:35.195 "params": { 00:24:35.195 "name": "malloc0", 00:24:35.195 "num_blocks": 8192, 00:24:35.195 "block_size": 4096, 00:24:35.195 "physical_block_size": 4096, 00:24:35.195 "uuid": "7242ffa8-8288-4b1b-afe4-d9acf673e342", 00:24:35.195 "optimal_io_boundary": 0, 00:24:35.195 "md_size": 0, 00:24:35.195 "dif_type": 0, 00:24:35.195 "dif_is_head_of_md": false, 00:24:35.195 "dif_pi_format": 0 00:24:35.195 } 00:24:35.195 }, 00:24:35.195 { 00:24:35.195 "method": "bdev_wait_for_examine" 00:24:35.195 } 00:24:35.195 ] 00:24:35.195 }, 00:24:35.195 { 00:24:35.195 "subsystem": "nbd", 00:24:35.195 "config": [] 00:24:35.195 }, 00:24:35.195 { 00:24:35.195 "subsystem": "scheduler", 00:24:35.195 "config": [ 00:24:35.195 { 00:24:35.195 "method": "framework_set_scheduler", 00:24:35.195 "params": { 00:24:35.195 "name": "static" 00:24:35.195 } 00:24:35.195 } 00:24:35.195 ] 00:24:35.195 }, 00:24:35.195 { 00:24:35.195 "subsystem": "nvmf", 00:24:35.195 "config": [ 00:24:35.195 { 00:24:35.195 "method": "nvmf_set_config", 00:24:35.195 "params": { 00:24:35.195 "discovery_filter": "match_any", 00:24:35.195 "admin_cmd_passthru": { 00:24:35.195 "identify_ctrlr": false 00:24:35.195 }, 00:24:35.195 "dhchap_digests": [ 00:24:35.195 "sha256", 00:24:35.195 "sha384", 00:24:35.195 "sha512" 00:24:35.195 ], 00:24:35.195 "dhchap_dhgroups": [ 00:24:35.195 "null", 00:24:35.195 "ffdhe2048", 00:24:35.195 "ffdhe3072", 00:24:35.195 "ffdhe4096", 00:24:35.195 "ffdhe6144", 00:24:35.195 "ffdhe8192" 00:24:35.195 ] 00:24:35.195 } 00:24:35.195 }, 00:24:35.195 { 00:24:35.195 "method": "nvmf_set_max_subsystems", 00:24:35.195 "params": { 00:24:35.195 "max_subsystems": 1024 00:24:35.195 } 00:24:35.195 }, 00:24:35.195 { 00:24:35.195 "method": "nvmf_set_crdt", 00:24:35.195 "params": { 00:24:35.195 "crdt1": 0, 00:24:35.195 "crdt2": 0, 00:24:35.195 
"crdt3": 0 00:24:35.195 } 00:24:35.195 }, 00:24:35.195 { 00:24:35.195 "method": "nvmf_create_transport", 00:24:35.195 "params": { 00:24:35.195 "trtype": "TCP", 00:24:35.195 "max_queue_depth": 128, 00:24:35.195 "max_io_qpairs_per_ctrlr": 127, 00:24:35.195 "in_capsule_data_size": 4096, 00:24:35.195 "max_io_size": 131072, 00:24:35.195 "io_unit_size": 131072, 00:24:35.195 "max_aq_depth": 128, 00:24:35.195 "num_shared_buffers": 511, 00:24:35.195 "buf_cache_size": 4294967295, 00:24:35.195 "dif_insert_or_strip": false, 00:24:35.195 "zcopy": false, 00:24:35.195 "c2h_success": false, 00:24:35.195 "sock_priority": 0, 00:24:35.195 "abort_timeout_sec": 1, 00:24:35.195 "ack_timeout": 0, 00:24:35.195 "data_wr_pool_size": 0 00:24:35.195 } 00:24:35.195 }, 00:24:35.195 { 00:24:35.195 "method": "nvmf_create_subsystem", 00:24:35.195 "params": { 00:24:35.195 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:35.195 "allow_any_host": false, 00:24:35.195 "serial_number": "00000000000000000000", 00:24:35.195 "model_number": "SPDK bdev Controller", 00:24:35.195 "max_namespaces": 32, 00:24:35.195 "min_cntlid": 1, 00:24:35.195 "max_cntlid": 65519, 00:24:35.195 "ana_reporting": false 00:24:35.195 } 00:24:35.195 }, 00:24:35.195 { 00:24:35.195 "method": "nvmf_subsystem_add_host", 00:24:35.195 "params": { 00:24:35.195 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:35.195 "host": "nqn.2016-06.io.spdk:host1", 00:24:35.195 "psk": "key0" 00:24:35.195 } 00:24:35.195 }, 00:24:35.195 { 00:24:35.195 "method": "nvmf_subsystem_add_ns", 00:24:35.195 "params": { 00:24:35.195 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:35.195 "namespace": { 00:24:35.195 "nsid": 1, 00:24:35.195 "bdev_name": "malloc0", 00:24:35.195 "nguid": "7242FFA882884B1BAFE4D9ACF673E342", 00:24:35.195 "uuid": "7242ffa8-8288-4b1b-afe4-d9acf673e342", 00:24:35.195 "no_auto_visible": false 00:24:35.195 } 00:24:35.195 } 00:24:35.195 }, 00:24:35.195 { 00:24:35.195 "method": "nvmf_subsystem_add_listener", 00:24:35.195 "params": { 00:24:35.195 "nqn": 
"nqn.2016-06.io.spdk:cnode1", 00:24:35.195 "listen_address": { 00:24:35.195 "trtype": "TCP", 00:24:35.195 "adrfam": "IPv4", 00:24:35.195 "traddr": "10.0.0.2", 00:24:35.195 "trsvcid": "4420" 00:24:35.195 }, 00:24:35.195 "secure_channel": false, 00:24:35.195 "sock_impl": "ssl" 00:24:35.195 } 00:24:35.195 } 00:24:35.195 ] 00:24:35.195 } 00:24:35.195 ] 00:24:35.195 }' 00:24:35.195 11:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:35.195 11:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:35.195 11:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3007356 00:24:35.195 11:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:24:35.195 11:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3007356 00:24:35.195 11:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3007356 ']' 00:24:35.195 11:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:35.195 11:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:35.195 11:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:35.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:35.195 11:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:35.196 11:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:35.196 [2024-11-18 11:53:00.932448] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:24:35.196 [2024-11-18 11:53:00.932625] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:35.454 [2024-11-18 11:53:01.082388] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:35.454 [2024-11-18 11:53:01.212991] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:35.454 [2024-11-18 11:53:01.213085] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:35.454 [2024-11-18 11:53:01.213111] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:35.454 [2024-11-18 11:53:01.213135] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:35.454 [2024-11-18 11:53:01.213164] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:35.454 [2024-11-18 11:53:01.214933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:36.019 [2024-11-18 11:53:01.757188] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:36.019 [2024-11-18 11:53:01.789218] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:36.019 [2024-11-18 11:53:01.789538] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:36.019 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:36.019 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:36.019 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:36.019 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:36.019 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:36.277 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:36.277 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=3007461 00:24:36.277 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 3007461 /var/tmp/bdevperf.sock 00:24:36.277 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3007461 ']' 00:24:36.277 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:36.277 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:36.277 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c 
/dev/fd/63 00:24:36.277 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:36.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:36.277 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:36.277 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:24:36.277 "subsystems": [ 00:24:36.277 { 00:24:36.277 "subsystem": "keyring", 00:24:36.277 "config": [ 00:24:36.277 { 00:24:36.277 "method": "keyring_file_add_key", 00:24:36.277 "params": { 00:24:36.277 "name": "key0", 00:24:36.277 "path": "/tmp/tmp.PXOrMVQVpQ" 00:24:36.277 } 00:24:36.277 } 00:24:36.277 ] 00:24:36.277 }, 00:24:36.277 { 00:24:36.277 "subsystem": "iobuf", 00:24:36.277 "config": [ 00:24:36.277 { 00:24:36.277 "method": "iobuf_set_options", 00:24:36.277 "params": { 00:24:36.277 "small_pool_count": 8192, 00:24:36.277 "large_pool_count": 1024, 00:24:36.277 "small_bufsize": 8192, 00:24:36.277 "large_bufsize": 135168, 00:24:36.277 "enable_numa": false 00:24:36.277 } 00:24:36.277 } 00:24:36.277 ] 00:24:36.277 }, 00:24:36.277 { 00:24:36.277 "subsystem": "sock", 00:24:36.277 "config": [ 00:24:36.277 { 00:24:36.277 "method": "sock_set_default_impl", 00:24:36.277 "params": { 00:24:36.277 "impl_name": "posix" 00:24:36.277 } 00:24:36.277 }, 00:24:36.277 { 00:24:36.277 "method": "sock_impl_set_options", 00:24:36.277 "params": { 00:24:36.277 "impl_name": "ssl", 00:24:36.277 "recv_buf_size": 4096, 00:24:36.277 "send_buf_size": 4096, 00:24:36.277 "enable_recv_pipe": true, 00:24:36.277 "enable_quickack": false, 00:24:36.277 "enable_placement_id": 0, 00:24:36.277 "enable_zerocopy_send_server": true, 00:24:36.277 "enable_zerocopy_send_client": false, 00:24:36.277 "zerocopy_threshold": 0, 00:24:36.277 "tls_version": 0, 00:24:36.277 "enable_ktls": false 00:24:36.277 } 
00:24:36.277 }, 00:24:36.277 { 00:24:36.277 "method": "sock_impl_set_options", 00:24:36.277 "params": { 00:24:36.277 "impl_name": "posix", 00:24:36.277 "recv_buf_size": 2097152, 00:24:36.277 "send_buf_size": 2097152, 00:24:36.277 "enable_recv_pipe": true, 00:24:36.278 "enable_quickack": false, 00:24:36.278 "enable_placement_id": 0, 00:24:36.278 "enable_zerocopy_send_server": true, 00:24:36.278 "enable_zerocopy_send_client": false, 00:24:36.278 "zerocopy_threshold": 0, 00:24:36.278 "tls_version": 0, 00:24:36.278 "enable_ktls": false 00:24:36.278 } 00:24:36.278 } 00:24:36.278 ] 00:24:36.278 }, 00:24:36.278 { 00:24:36.278 "subsystem": "vmd", 00:24:36.278 "config": [] 00:24:36.278 }, 00:24:36.278 { 00:24:36.278 "subsystem": "accel", 00:24:36.278 "config": [ 00:24:36.278 { 00:24:36.278 "method": "accel_set_options", 00:24:36.278 "params": { 00:24:36.278 "small_cache_size": 128, 00:24:36.278 "large_cache_size": 16, 00:24:36.278 "task_count": 2048, 00:24:36.278 "sequence_count": 2048, 00:24:36.278 "buf_count": 2048 00:24:36.278 } 00:24:36.278 } 00:24:36.278 ] 00:24:36.278 }, 00:24:36.278 { 00:24:36.278 "subsystem": "bdev", 00:24:36.278 "config": [ 00:24:36.278 { 00:24:36.278 "method": "bdev_set_options", 00:24:36.278 "params": { 00:24:36.278 "bdev_io_pool_size": 65535, 00:24:36.278 "bdev_io_cache_size": 256, 00:24:36.278 "bdev_auto_examine": true, 00:24:36.278 "iobuf_small_cache_size": 128, 00:24:36.278 "iobuf_large_cache_size": 16 00:24:36.278 } 00:24:36.278 }, 00:24:36.278 { 00:24:36.278 "method": "bdev_raid_set_options", 00:24:36.278 "params": { 00:24:36.278 "process_window_size_kb": 1024, 00:24:36.278 "process_max_bandwidth_mb_sec": 0 00:24:36.278 } 00:24:36.278 }, 00:24:36.278 { 00:24:36.278 "method": "bdev_iscsi_set_options", 00:24:36.278 "params": { 00:24:36.278 "timeout_sec": 30 00:24:36.278 } 00:24:36.278 }, 00:24:36.278 { 00:24:36.278 "method": "bdev_nvme_set_options", 00:24:36.278 "params": { 00:24:36.278 "action_on_timeout": "none", 00:24:36.278 "timeout_us": 
0, 00:24:36.278 "timeout_admin_us": 0, 00:24:36.278 "keep_alive_timeout_ms": 10000, 00:24:36.278 "arbitration_burst": 0, 00:24:36.278 "low_priority_weight": 0, 00:24:36.278 "medium_priority_weight": 0, 00:24:36.278 "high_priority_weight": 0, 00:24:36.278 "nvme_adminq_poll_period_us": 10000, 00:24:36.278 "nvme_ioq_poll_period_us": 0, 00:24:36.278 "io_queue_requests": 512, 00:24:36.278 "delay_cmd_submit": true, 00:24:36.278 "transport_retry_count": 4, 00:24:36.278 "bdev_retry_count": 3, 00:24:36.278 "transport_ack_timeout": 0, 00:24:36.278 "ctrlr_loss_timeout_sec": 0, 00:24:36.278 "reconnect_delay_sec": 0, 00:24:36.278 "fast_io_fail_timeout_sec": 0, 00:24:36.278 "disable_auto_failback": false, 00:24:36.278 "generate_uuids": false, 00:24:36.278 "transport_tos": 0, 00:24:36.278 "nvme_error_stat": false, 00:24:36.278 "rdma_srq_size": 0, 00:24:36.278 "io_path_stat": false, 00:24:36.278 "allow_accel_sequence": false, 00:24:36.278 "rdma_max_cq_size": 0, 00:24:36.278 "rdma_cm_event_timeout_ms": 0, 00:24:36.278 "dhchap_digests": [ 00:24:36.278 "sha256", 00:24:36.278 "sha384", 00:24:36.278 "sha512" 00:24:36.278 ], 00:24:36.278 "dhchap_dhgroups": [ 00:24:36.278 "null", 00:24:36.278 "ffdhe2048", 00:24:36.278 "ffdhe3072", 00:24:36.278 "ffdhe4096", 00:24:36.278 "ffdhe6144", 00:24:36.278 "ffdhe8192" 00:24:36.278 ] 00:24:36.278 } 00:24:36.278 }, 00:24:36.278 { 00:24:36.278 "method": "bdev_nvme_attach_controller", 00:24:36.278 "params": { 00:24:36.278 "name": "nvme0", 00:24:36.278 "trtype": "TCP", 00:24:36.278 "adrfam": "IPv4", 00:24:36.278 "traddr": "10.0.0.2", 00:24:36.278 "trsvcid": "4420", 00:24:36.278 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:36.278 "prchk_reftag": false, 00:24:36.278 "prchk_guard": false, 00:24:36.278 "ctrlr_loss_timeout_sec": 0, 00:24:36.278 "reconnect_delay_sec": 0, 00:24:36.278 "fast_io_fail_timeout_sec": 0, 00:24:36.278 "psk": "key0", 00:24:36.278 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:36.278 "hdgst": false, 00:24:36.278 "ddgst": false, 
00:24:36.278 "multipath": "multipath" 00:24:36.278 } 00:24:36.278 }, 00:24:36.278 { 00:24:36.278 "method": "bdev_nvme_set_hotplug", 00:24:36.278 "params": { 00:24:36.278 "period_us": 100000, 00:24:36.278 "enable": false 00:24:36.278 } 00:24:36.278 }, 00:24:36.278 { 00:24:36.278 "method": "bdev_enable_histogram", 00:24:36.278 "params": { 00:24:36.278 "name": "nvme0n1", 00:24:36.278 "enable": true 00:24:36.278 } 00:24:36.278 }, 00:24:36.278 { 00:24:36.278 "method": "bdev_wait_for_examine" 00:24:36.278 } 00:24:36.278 ] 00:24:36.278 }, 00:24:36.278 { 00:24:36.278 "subsystem": "nbd", 00:24:36.278 "config": [] 00:24:36.278 } 00:24:36.278 ] 00:24:36.278 }' 00:24:36.278 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:36.278 [2024-11-18 11:53:02.002620] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:24:36.278 [2024-11-18 11:53:02.002769] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3007461 ] 00:24:36.537 [2024-11-18 11:53:02.172834] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:36.537 [2024-11-18 11:53:02.327175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:37.103 [2024-11-18 11:53:02.731944] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:37.362 11:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:37.362 11:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:37.362 11:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:37.362 11:53:03 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:24:37.619 11:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:37.619 11:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:37.619 Running I/O for 1 seconds... 00:24:38.992 2519.00 IOPS, 9.84 MiB/s 00:24:38.992 Latency(us) 00:24:38.992 [2024-11-18T10:53:04.877Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:38.992 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:38.992 Verification LBA range: start 0x0 length 0x2000 00:24:38.992 nvme0n1 : 1.03 2563.00 10.01 0.00 0.00 49175.02 9854.67 38447.79 00:24:38.992 [2024-11-18T10:53:04.877Z] =================================================================================================================== 00:24:38.992 [2024-11-18T10:53:04.877Z] Total : 2563.00 10.01 0.00 0.00 49175.02 9854.67 38447.79 00:24:38.992 { 00:24:38.992 "results": [ 00:24:38.992 { 00:24:38.992 "job": "nvme0n1", 00:24:38.992 "core_mask": "0x2", 00:24:38.992 "workload": "verify", 00:24:38.992 "status": "finished", 00:24:38.992 "verify_range": { 00:24:38.992 "start": 0, 00:24:38.992 "length": 8192 00:24:38.992 }, 00:24:38.992 "queue_depth": 128, 00:24:38.992 "io_size": 4096, 00:24:38.992 "runtime": 1.032773, 00:24:38.992 "iops": 2563.0027121158278, 00:24:38.992 "mibps": 10.011729344202452, 00:24:38.992 "io_failed": 0, 00:24:38.992 "io_timeout": 0, 00:24:38.992 "avg_latency_us": 49175.01632316109, 00:24:38.992 "min_latency_us": 9854.672592592593, 00:24:38.992 "max_latency_us": 38447.78666666667 00:24:38.992 } 00:24:38.992 ], 00:24:38.992 "core_count": 1 00:24:38.992 } 00:24:38.992 11:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:24:38.992 11:53:04 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:24:38.992 11:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:24:38.992 11:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:24:38.992 11:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:24:38.992 11:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:24:38.992 11:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:38.992 11:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:24:38.992 11:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:24:38.992 11:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:24:38.992 11:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:38.992 nvmf_trace.0 00:24:38.992 11:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:24:38.992 11:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 3007461 00:24:38.992 11:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3007461 ']' 00:24:38.992 11:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3007461 00:24:38.992 11:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:38.993 11:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:38.993 11:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o 
comm= 3007461 00:24:38.993 11:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:38.993 11:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:38.993 11:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3007461' 00:24:38.993 killing process with pid 3007461 00:24:38.993 11:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3007461 00:24:38.993 Received shutdown signal, test time was about 1.000000 seconds 00:24:38.993 00:24:38.993 Latency(us) 00:24:38.993 [2024-11-18T10:53:04.878Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:38.993 [2024-11-18T10:53:04.878Z] =================================================================================================================== 00:24:38.993 [2024-11-18T10:53:04.878Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:38.993 11:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3007461 00:24:39.928 11:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:24:39.928 11:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:39.928 11:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:24:39.928 11:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:39.928 11:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:24:39.928 11:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:39.928 11:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:39.928 rmmod nvme_tcp 00:24:39.928 rmmod nvme_fabrics 00:24:39.928 rmmod nvme_keyring 00:24:39.928 11:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:24:39.928 11:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:24:39.928 11:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:24:39.928 11:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 3007356 ']' 00:24:39.928 11:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 3007356 00:24:39.928 11:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3007356 ']' 00:24:39.928 11:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3007356 00:24:39.928 11:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:39.928 11:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:39.928 11:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3007356 00:24:39.928 11:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:39.928 11:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:39.928 11:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3007356' 00:24:39.928 killing process with pid 3007356 00:24:39.928 11:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3007356 00:24:39.928 11:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3007356 00:24:41.303 11:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:41.303 11:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:41.303 11:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:41.303 11:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@297 -- # iptr 00:24:41.303 11:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:24:41.303 11:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:41.303 11:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:24:41.303 11:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:41.303 11:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:41.303 11:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:41.303 11:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:41.303 11:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:43.205 11:53:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:43.205 11:53:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.3XSUKJSDjw /tmp/tmp.jhs9fwUc5R /tmp/tmp.PXOrMVQVpQ 00:24:43.205 00:24:43.205 real 1m53.150s 00:24:43.205 user 3m11.016s 00:24:43.205 sys 0m25.881s 00:24:43.205 11:53:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:43.205 11:53:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:43.205 ************************************ 00:24:43.205 END TEST nvmf_tls 00:24:43.205 ************************************ 00:24:43.205 11:53:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:43.205 11:53:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:43.205 11:53:08 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:24:43.205 11:53:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:43.205 ************************************ 00:24:43.205 START TEST nvmf_fips 00:24:43.205 ************************************ 00:24:43.205 11:53:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:43.205 * Looking for test storage... 00:24:43.205 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:24:43.205 11:53:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:43.205 11:53:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:24:43.205 11:53:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:43.205 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:43.205 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:43.205 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:43.205 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:43.205 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:24:43.205 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:24:43.205 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:24:43.205 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:24:43.205 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:24:43.205 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:24:43.206 
11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:24:43.206 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:43.206 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:24:43.206 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:24:43.206 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:43.206 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:43.206 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:24:43.206 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:24:43.206 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:43.206 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:24:43.206 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:24:43.206 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:24:43.206 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:24:43.206 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:43.206 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:24:43.206 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:24:43.206 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:43.206 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:43.206 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:24:43.206 11:53:09 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:43.206 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:43.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:43.206 --rc genhtml_branch_coverage=1 00:24:43.206 --rc genhtml_function_coverage=1 00:24:43.206 --rc genhtml_legend=1 00:24:43.206 --rc geninfo_all_blocks=1 00:24:43.206 --rc geninfo_unexecuted_blocks=1 00:24:43.206 00:24:43.206 ' 00:24:43.206 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:43.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:43.206 --rc genhtml_branch_coverage=1 00:24:43.206 --rc genhtml_function_coverage=1 00:24:43.206 --rc genhtml_legend=1 00:24:43.206 --rc geninfo_all_blocks=1 00:24:43.206 --rc geninfo_unexecuted_blocks=1 00:24:43.206 00:24:43.206 ' 00:24:43.206 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:43.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:43.206 --rc genhtml_branch_coverage=1 00:24:43.206 --rc genhtml_function_coverage=1 00:24:43.206 --rc genhtml_legend=1 00:24:43.206 --rc geninfo_all_blocks=1 00:24:43.206 --rc geninfo_unexecuted_blocks=1 00:24:43.206 00:24:43.206 ' 00:24:43.206 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:43.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:43.206 --rc genhtml_branch_coverage=1 00:24:43.206 --rc genhtml_function_coverage=1 00:24:43.206 --rc genhtml_legend=1 00:24:43.206 --rc geninfo_all_blocks=1 00:24:43.206 --rc geninfo_unexecuted_blocks=1 00:24:43.206 00:24:43.206 ' 00:24:43.206 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:24:43.206 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:24:43.206 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:43.206 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:43.206 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:43.206 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:43.206 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:43.206 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:43.206 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:43.206 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:43.206 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:43.206 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:43.206 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:43.206 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:43.206 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:43.206 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:43.206 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:43.206 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:43.206 11:53:09 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:43.206 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:24:43.206 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:43.206 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:43.206 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:43.206 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:43.206 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:43.206 11:53:09 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:43.206 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:24:43.206 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:43.206 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:24:43.206 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:43.206 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:43.206 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:43.206 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:24:43.206 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:43.206 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:43.206 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:43.206 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:43.206 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:43.206 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:43.206 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:43.206 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:24:43.206 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:24:43.206 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:24:43.206 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:24:43.206 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:24:43.206 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:24:43.206 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:43.206 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:43.206 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:24:43.206 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:24:43.206 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:24:43.206 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
scripts/common.sh@337 -- # read -ra ver2 00:24:43.206 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:24:43.206 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:24:43.206 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:24:43.206 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:43.206 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:24:43.206 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:24:43.206 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:43.207 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:43.207 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:24:43.207 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:24:43.207 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:43.207 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:24:43.207 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:24:43.207 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:24:43.207 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:24:43.207 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:43.207 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:24:43.207 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:24:43.207 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] 
)) 00:24:43.207 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:43.207 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:24:43.207 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:43.207 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:24:43.207 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:24:43.207 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:43.207 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:24:43.207 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:24:43.207 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:24:43.207 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:24:43.207 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:43.207 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:24:43.207 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:24:43.207 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:43.207 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:24:43.207 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:24:43.207 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:24:43.207 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:24:43.465 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:24:43.465 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:24:43.465 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:24:43.465 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:24:43.465 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:24:43.465 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:24:43.465 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:24:43.465 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:24:43.465 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:24:43.465 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:24:43.465 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:24:43.465 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:24:43.465 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:24:43.465 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:24:43.465 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:24:43.465 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:24:43.465 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:24:43.465 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:24:43.465 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:24:43.465 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:24:43.465 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:24:43.465 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:43.465 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:24:43.465 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:43.465 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@646 -- # type -P openssl
00:24:43.465 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:24:43.465 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl
00:24:43.465 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]]
00:24:43.465 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62
00:24:43.465 Error setting digest
00:24:43.465 40B28BCCDD7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties ()
00:24:43.465 40B28BCCDD7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272:
00:24:43.465 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1
00:24:43.465 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:24:43.465 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:24:43.465 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:24:43.465 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit
00:24:43.465 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:24:43.465 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:24:43.465 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs
00:24:43.465 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no
00:24:43.465 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns
00:24:43.465 11:53:09
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:43.465 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:43.465 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:43.465 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:43.465 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:43.465 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:24:43.465 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:45.366 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:45.366 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:24:45.366 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:45.366 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:45.366 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:45.366 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:45.366 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:45.366 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:24:45.366 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:45.366 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:24:45.366 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:24:45.366 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@321 -- # x722=() 00:24:45.366 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:24:45.366 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:24:45.366 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:24:45.366 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:45.366 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:45.366 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:45.366 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:45.366 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:45.366 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:45.366 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:45.366 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:45.366 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:45.366 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:45.366 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:45.366 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:45.366 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 
00:24:45.366 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:45.366 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:45.366 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:45.366 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:45.366 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:45.366 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:45.366 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:45.366 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:45.366 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:45.366 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:45.366 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:45.366 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:45.366 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:45.366 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:45.366 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:45.366 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:45.366 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:45.366 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:45.366 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:24:45.366 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:45.366 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:45.366 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:45.366 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:45.366 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:45.366 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:45.366 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:45.366 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:45.366 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:45.366 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:45.366 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:45.366 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:45.366 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:45.366 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:45.366 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:45.366 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:45.366 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:45.366 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:24:45.366 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:45.366 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:45.366 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:45.366 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:45.366 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:45.366 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:45.366 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:45.366 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:45.366 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:24:45.366 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:45.366 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:45.366 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:45.366 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:45.366 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:45.366 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:45.366 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:45.366 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:45.366 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:45.366 11:53:11 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:45.366 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:45.366 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:45.366 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:45.366 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:45.366 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:45.366 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:45.366 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:45.366 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:45.625 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:45.625 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:45.625 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:45.625 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:45.625 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:45.625 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:45.625 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
-m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:24:45.625 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:24:45.625 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:24:45.625 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.214 ms
00:24:45.625
00:24:45.625 --- 10.0.0.2 ping statistics ---
00:24:45.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:45.625 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms
00:24:45.625 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:24:45.625 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:24:45.625 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms
00:24:45.625
00:24:45.625 --- 10.0.0.1 ping statistics ---
00:24:45.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:45.625 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms
00:24:45.625 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:24:45.625 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0
00:24:45.625 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:24:45.625 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:24:45.625 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:24:45.625 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:24:45.625 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:24:45.625 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:24:45.625 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:24:45.625 11:53:11
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:24:45.625 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:45.625 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:45.625 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:45.625 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=3010077 00:24:45.625 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:45.625 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 3010077 00:24:45.625 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 3010077 ']' 00:24:45.625 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:45.625 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:45.625 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:45.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:45.625 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:45.625 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:45.625 [2024-11-18 11:53:11.496248] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:24:45.625 [2024-11-18 11:53:11.496413] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:45.883 [2024-11-18 11:53:11.664356] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:46.143 [2024-11-18 11:53:11.802830] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:46.143 [2024-11-18 11:53:11.802923] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:46.143 [2024-11-18 11:53:11.802948] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:46.143 [2024-11-18 11:53:11.802973] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:46.143 [2024-11-18 11:53:11.802993] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:46.143 [2024-11-18 11:53:11.804656] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:46.773 11:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:46.773 11:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:24:46.773 11:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:46.773 11:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:46.773 11:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:46.773 11:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:46.773 11:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:24:46.773 11:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:46.773 11:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:24:46.773 11:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.fQO 00:24:46.773 11:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:46.773 11:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.fQO 00:24:46.773 11:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.fQO 00:24:46.773 11:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.fQO 00:24:46.773 11:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:47.032 [2024-11-18 11:53:12.669533] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:47.032 [2024-11-18 11:53:12.685460] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:47.032 [2024-11-18 11:53:12.685799] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:47.032 malloc0 00:24:47.032 11:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:47.032 11:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=3010236 00:24:47.032 11:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:47.032 11:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 3010236 /var/tmp/bdevperf.sock 00:24:47.032 11:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 3010236 ']' 00:24:47.032 11:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:47.032 11:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:47.032 11:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:47.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:47.032 11:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:47.032 11:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:47.032 [2024-11-18 11:53:12.895472] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:24:47.032 [2024-11-18 11:53:12.895643] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3010236 ] 00:24:47.290 [2024-11-18 11:53:13.032191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:47.290 [2024-11-18 11:53:13.152544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:48.223 11:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:48.223 11:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:24:48.223 11:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.fQO 00:24:48.223 11:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:48.480 [2024-11-18 11:53:14.324835] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:48.737 TLSTESTn1 00:24:48.737 11:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:48.737 Running I/O for 10 seconds... 
00:24:51.040 2625.00 IOPS, 10.25 MiB/s
[2024-11-18T10:53:17.858Z] 2668.50 IOPS, 10.42 MiB/s
[2024-11-18T10:53:18.791Z] 2687.00 IOPS, 10.50 MiB/s
[2024-11-18T10:53:19.736Z] 2697.25 IOPS, 10.54 MiB/s
[2024-11-18T10:53:20.669Z] 2698.80 IOPS, 10.54 MiB/s
[2024-11-18T10:53:21.601Z] 2700.67 IOPS, 10.55 MiB/s
[2024-11-18T10:53:22.972Z] 2706.00 IOPS, 10.57 MiB/s
[2024-11-18T10:53:23.905Z] 2707.88 IOPS, 10.58 MiB/s
[2024-11-18T10:53:24.839Z] 2707.56 IOPS, 10.58 MiB/s
[2024-11-18T10:53:24.839Z] 2707.30 IOPS, 10.58 MiB/s
00:24:58.954 Latency(us)
00:24:58.954 [2024-11-18T10:53:24.839Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:58.954 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:24:58.954 Verification LBA range: start 0x0 length 0x2000
00:24:58.954 TLSTESTn1 : 10.03 2712.59 10.60 0.00 0.00 47097.43 9417.77 45826.65
00:24:58.954 [2024-11-18T10:53:24.839Z] ===================================================================================================================
00:24:58.954 [2024-11-18T10:53:24.839Z] Total : 2712.59 10.60 0.00 0.00 47097.43 9417.77 45826.65
00:24:58.954 {
00:24:58.954 "results": [
00:24:58.954 {
00:24:58.954 "job": "TLSTESTn1",
00:24:58.954 "core_mask": "0x4",
00:24:58.954 "workload": "verify",
00:24:58.954 "status": "finished",
00:24:58.954 "verify_range": {
00:24:58.954 "start": 0,
00:24:58.954 "length": 8192
00:24:58.954 },
00:24:58.954 "queue_depth": 128,
00:24:58.954 "io_size": 4096,
00:24:58.954 "runtime": 10.026963,
00:24:58.954 "iops": 2712.586054222001,
00:24:58.954 "mibps": 10.596039274304692,
00:24:58.954 "io_failed": 0,
00:24:58.954 "io_timeout": 0,
00:24:58.954 "avg_latency_us": 47097.429761061474,
00:24:58.954 "min_latency_us": 9417.765925925925,
00:24:58.954 "max_latency_us": 45826.654814814814
00:24:58.954 }
00:24:58.954 ],
00:24:58.954 "core_count": 1
00:24:58.954 }
00:24:58.954 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup
00:24:58.954
11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:24:58.954 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:24:58.954 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:24:58.954 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:24:58.954 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:58.954 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:24:58.954 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:24:58.954 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:24:58.954 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:58.954 nvmf_trace.0 00:24:58.954 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:24:58.954 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 3010236 00:24:58.954 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 3010236 ']' 00:24:58.954 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 3010236 00:24:58.954 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:24:58.954 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:58.954 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3010236 00:24:58.954 11:53:24 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:24:58.954 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:24:58.954 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3010236'
00:24:58.954 killing process with pid 3010236 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 3010236
00:24:58.954 Received shutdown signal, test time was about 10.000000 seconds
00:24:58.954
00:24:58.954 Latency(us)
00:24:58.954 [2024-11-18T10:53:24.839Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:58.954 [2024-11-18T10:53:24.840Z] ===================================================================================================================
00:24:58.955 [2024-11-18T10:53:24.840Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:58.955 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 3010236
00:24:59.888 11:53:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini
00:24:59.888 11:53:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup
00:24:59.888 11:53:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync
00:24:59.888 11:53:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:24:59.888 11:53:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e
00:24:59.888 11:53:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20}
00:24:59.888 11:53:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:24:59.888 rmmod nvme_tcp
00:24:59.888 rmmod nvme_fabrics
00:24:59.888 rmmod nvme_keyring
00:24:59.888 11:53:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:24:59.888 11:53:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:24:59.888 11:53:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:24:59.888 11:53:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 3010077 ']' 00:24:59.888 11:53:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 3010077 00:24:59.888 11:53:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 3010077 ']' 00:24:59.888 11:53:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 3010077 00:24:59.888 11:53:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:24:59.888 11:53:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:59.888 11:53:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3010077 00:24:59.888 11:53:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:59.888 11:53:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:59.888 11:53:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3010077' 00:24:59.888 killing process with pid 3010077 00:24:59.888 11:53:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 3010077 00:24:59.888 11:53:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 3010077 00:25:01.262 11:53:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:01.262 11:53:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:01.262 11:53:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:01.262 11:53:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@297 -- # iptr 00:25:01.262 11:53:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:25:01.262 11:53:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:01.262 11:53:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:25:01.262 11:53:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:01.262 11:53:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:01.262 11:53:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:01.262 11:53:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:01.262 11:53:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:03.164 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:03.164 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.fQO 00:25:03.164 00:25:03.164 real 0m20.058s 00:25:03.164 user 0m27.730s 00:25:03.164 sys 0m5.111s 00:25:03.164 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:03.164 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:03.164 ************************************ 00:25:03.164 END TEST nvmf_fips 00:25:03.164 ************************************ 00:25:03.164 11:53:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:25:03.164 11:53:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:03.164 11:53:28 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:25:03.164 11:53:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:03.164 ************************************ 00:25:03.164 START TEST nvmf_control_msg_list 00:25:03.164 ************************************ 00:25:03.164 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:25:03.431 * Looking for test storage... 00:25:03.431 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:03.431 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:03.432 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:25:03.432 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:03.432 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:03.432 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:03.432 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:03.432 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:03.432 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:25:03.432 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:25:03.432 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:25:03.432 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:25:03.432 11:53:29 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:25:03.432 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:25:03.432 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:25:03.432 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:03.432 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:25:03.432 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:25:03.432 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:03.432 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:03.432 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:25:03.432 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:25:03.432 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:03.432 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:25:03.432 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:25:03.432 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:25:03.433 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:25:03.433 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:03.433 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:25:03.433 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- scripts/common.sh@366 -- # ver2[v]=2 00:25:03.433 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:03.433 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:03.433 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:25:03.433 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:03.433 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:03.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:03.433 --rc genhtml_branch_coverage=1 00:25:03.433 --rc genhtml_function_coverage=1 00:25:03.433 --rc genhtml_legend=1 00:25:03.433 --rc geninfo_all_blocks=1 00:25:03.433 --rc geninfo_unexecuted_blocks=1 00:25:03.433 00:25:03.433 ' 00:25:03.433 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:03.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:03.433 --rc genhtml_branch_coverage=1 00:25:03.433 --rc genhtml_function_coverage=1 00:25:03.433 --rc genhtml_legend=1 00:25:03.433 --rc geninfo_all_blocks=1 00:25:03.433 --rc geninfo_unexecuted_blocks=1 00:25:03.433 00:25:03.433 ' 00:25:03.433 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:03.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:03.433 --rc genhtml_branch_coverage=1 00:25:03.433 --rc genhtml_function_coverage=1 00:25:03.433 --rc genhtml_legend=1 00:25:03.433 --rc geninfo_all_blocks=1 00:25:03.433 --rc geninfo_unexecuted_blocks=1 00:25:03.433 00:25:03.433 ' 00:25:03.433 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # 
LCOV='lcov 00:25:03.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:03.433 --rc genhtml_branch_coverage=1 00:25:03.433 --rc genhtml_function_coverage=1 00:25:03.434 --rc genhtml_legend=1 00:25:03.434 --rc geninfo_all_blocks=1 00:25:03.434 --rc geninfo_unexecuted_blocks=1 00:25:03.434 00:25:03.434 ' 00:25:03.434 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:03.434 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:25:03.434 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:03.434 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:03.434 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:03.434 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:03.434 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:03.434 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:03.434 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:03.434 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:03.437 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:03.437 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:03.437 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
00:25:03.438 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:03.438 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:03.438 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:03.438 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:03.438 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:03.438 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:03.438 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:25:03.438 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:03.438 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:03.438 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:03.438 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.438 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.438 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.438 11:53:29 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:25:03.438 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.438 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:25:03.438 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:03.438 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:03.438 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:03.438 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:03.438 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:03.438 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:03.438 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:03.438 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:03.439 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:03.439 11:53:29 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:03.439 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:25:03.439 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:03.439 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:03.439 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:03.439 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:03.439 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:03.439 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:03.439 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:03.439 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:03.439 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:03.439 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:03.439 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:25:03.439 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:05.341 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:05.341 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:25:05.341 11:53:31 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:05.341 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:05.341 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:05.341 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:05.341 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:05.341 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:25:05.341 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:05.341 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:25:05.341 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:25:05.341 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:25:05.341 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:25:05.341 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:25:05.341 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:25:05.341 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:05.341 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:05.342 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:05.342 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:05.342 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:05.342 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:05.342 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:05.342 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:05.342 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:05.342 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:05.342 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:05.342 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:05.342 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:05.342 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:05.342 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:05.342 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:05.342 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:05.342 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:05.342 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:25:05.342 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:05.342 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:05.342 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:05.342 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:05.342 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:05.342 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:05.342 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:05.342 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:05.342 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:05.342 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:05.342 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:05.342 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:05.342 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:05.342 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:05.342 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:05.342 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:05.342 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:05.342 11:53:31 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:05.342 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:05.342 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:05.342 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:05.342 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:05.342 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:05.342 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:05.342 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:05.342 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:05.342 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:05.342 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:05.342 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:05.342 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:05.342 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:05.342 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:05.342 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:05.342 11:53:31 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:05.342 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:05.342 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:05.342 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:05.342 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:05.342 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:05.342 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:25:05.342 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:05.342 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:05.342 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:05.342 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:05.342 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:05.342 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:05.342 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:05.342 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:05.342 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:05.342 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:05.342 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:05.342 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:05.342 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:05.342 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:05.342 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:05.342 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:05.342 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:05.342 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:05.342 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:05.342 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:05.342 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:05.342 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:05.601 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:05.601 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:05.601 11:53:31 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:05.601 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:05.601 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:05.601 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.217 ms 00:25:05.601 00:25:05.601 --- 10.0.0.2 ping statistics --- 00:25:05.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:05.601 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:25:05.601 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:05.601 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:05.601 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:25:05.601 00:25:05.601 --- 10.0.0.1 ping statistics --- 00:25:05.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:05.601 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:25:05.601 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:05.601 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:25:05.601 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:05.601 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:05.601 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:05.601 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:05.601 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:25:05.601 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:05.601 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:05.602 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:25:05.602 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:05.602 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:05.602 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:05.602 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=3013767 00:25:05.602 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:05.602 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 3013767 00:25:05.602 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 3013767 ']' 00:25:05.602 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:05.602 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:05.602 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:05.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:05.602 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:05.602 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:05.602 [2024-11-18 11:53:31.390475] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:25:05.602 [2024-11-18 11:53:31.390629] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:05.860 [2024-11-18 11:53:31.533829] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:05.860 [2024-11-18 11:53:31.662806] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:05.860 [2024-11-18 11:53:31.662911] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:05.860 [2024-11-18 11:53:31.662937] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:05.860 [2024-11-18 11:53:31.662961] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:05.860 [2024-11-18 11:53:31.662980] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
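The trace above (nvmf/common.sh @262–@291) shows the standard SPDK two-interface test topology being built: one NIC (`cvl_0_1`) stays in the default namespace as the initiator, while the other (`cvl_0_0`) is moved into a fresh network namespace to host the target, followed by an iptables accept rule for port 4420 and bidirectional ping checks. A minimal sketch of the same steps follows; the `run`/`DRY_RUN` wrapper is an illustrative addition (not part of the SPDK scripts) so the sequence can be inspected without root or real NICs:

```shell
#!/usr/bin/env bash
# Sketch of the netns topology set up in the log above.
# Interface names and addresses mirror the log; DRY_RUN is illustrative.
set -euo pipefail

NETNS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0      # moved into the namespace (target side)
INI_IF=cvl_0_1      # stays in the default namespace (initiator side)
DRY_RUN=${DRY_RUN:-1}

# In dry-run mode, print each command instead of executing it.
run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

run ip -4 addr flush "$TGT_IF"
run ip -4 addr flush "$INI_IF"
run ip netns add "$NETNS"
run ip link set "$TGT_IF" netns "$NETNS"              # target NIC into the netns
run ip addr add 10.0.0.1/24 dev "$INI_IF"             # initiator address
run ip netns exec "$NETNS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NETNS" ip link set "$TGT_IF" up
run ip netns exec "$NETNS" ip link set lo up
# open the NVMe/TCP port on the initiator-facing interface
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
# verify reachability in both directions before starting the target
run ping -c 1 10.0.0.2
run ip netns exec "$NETNS" ping -c 1 10.0.0.1
```

With the topology up, the target application itself is then launched inside the namespace (`ip netns exec cvl_0_0_ns_spdk nvmf_tgt ...`), exactly as the `NVMF_TARGET_NS_CMD` prefix in the trace shows.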
00:25:05.860 [2024-11-18 11:53:31.664650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:06.795 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:06.795 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:25:06.795 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:06.795 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:06.795 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:06.795 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:06.795 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:25:06.795 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:25:06.795 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:25:06.795 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.795 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:06.795 [2024-11-18 11:53:32.405374] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:06.795 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.795 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:25:06.795 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.795 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:06.795 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.795 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:25:06.795 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.795 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:06.795 Malloc0 00:25:06.795 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.795 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:25:06.795 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.795 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:06.795 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.795 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:06.795 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.795 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:06.795 [2024-11-18 11:53:32.476592] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:06.795 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.795 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=3013919 00:25:06.795 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:06.795 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=3013920 00:25:06.795 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:06.795 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=3013921 00:25:06.795 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 3013919 00:25:06.795 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:06.795 [2024-11-18 11:53:32.607267] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
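The control_msg_list.sh steps traced above (@19–@31) configure the target over RPC and then launch three single-queue-depth `spdk_nvme_perf` readers against it. A condensed sketch of that RPC sequence is below; the `rpc.py` path is a hypothetical placeholder, and the commands are printed rather than executed so the order can be read off directly:

```shell
#!/usr/bin/env bash
# Sketch of the RPC sequence from control_msg_list.sh (rpc.py path is assumed).
RPC=${RPC:-/path/to/spdk/scripts/rpc.py}
NQN=nqn.2024-07.io.spdk:cnode0

cmds=(
  # TCP transport with small in-capsule data and a single control message,
  # which is what forces the control-msg-list contention this test exercises
  "nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1"
  "nvmf_create_subsystem $NQN -a"                       # -a: allow any host
  "bdev_malloc_create -b Malloc0 32 512"                # 32 MiB, 512 B blocks
  "nvmf_subsystem_add_ns $NQN Malloc0"
  "nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420"
)
for c in "${cmds[@]}"; do
  echo "$RPC $c"     # print instead of executing; drop the echo to run for real
done
```

The latency tables that follow in the log reflect the effect of `--control-msg-num 1`: two of the three perf instances complete at normal latency while one is starved waiting on the shared control message, which is the behavior the test is checking.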
00:25:06.795 [2024-11-18 11:53:32.607739] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:06.795 [2024-11-18 11:53:32.608148] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:08.168 Initializing NVMe Controllers 00:25:08.168 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:08.168 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:25:08.168 Initialization complete. Launching workers. 00:25:08.168 ======================================================== 00:25:08.168 Latency(us) 00:25:08.168 Device Information : IOPS MiB/s Average min max 00:25:08.168 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 3191.00 12.46 312.77 220.50 1033.28 00:25:08.168 ======================================================== 00:25:08.168 Total : 3191.00 12.46 312.77 220.50 1033.28 00:25:08.168 00:25:08.168 Initializing NVMe Controllers 00:25:08.168 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:08.168 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:25:08.168 Initialization complete. Launching workers. 
00:25:08.168 ======================================================== 00:25:08.168 Latency(us) 00:25:08.168 Device Information : IOPS MiB/s Average min max 00:25:08.168 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 25.00 0.10 40891.20 40626.87 41074.58 00:25:08.168 ======================================================== 00:25:08.168 Total : 25.00 0.10 40891.20 40626.87 41074.58 00:25:08.168 00:25:08.168 Initializing NVMe Controllers 00:25:08.168 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:08.168 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:25:08.168 Initialization complete. Launching workers. 00:25:08.168 ======================================================== 00:25:08.168 Latency(us) 00:25:08.168 Device Information : IOPS MiB/s Average min max 00:25:08.168 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 3105.00 12.13 321.47 225.21 745.59 00:25:08.168 ======================================================== 00:25:08.168 Total : 3105.00 12.13 321.47 225.21 745.59 00:25:08.168 00:25:08.168 11:53:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 3013920 00:25:08.168 11:53:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 3013921 00:25:08.168 11:53:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:25:08.168 11:53:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:25:08.168 11:53:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:08.168 11:53:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:25:08.168 11:53:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:08.168 11:53:33 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:25:08.168 11:53:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:08.168 11:53:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:08.168 rmmod nvme_tcp 00:25:08.168 rmmod nvme_fabrics 00:25:08.168 rmmod nvme_keyring 00:25:08.168 11:53:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:08.168 11:53:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:25:08.168 11:53:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:25:08.168 11:53:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 3013767 ']' 00:25:08.168 11:53:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 3013767 00:25:08.168 11:53:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 3013767 ']' 00:25:08.168 11:53:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 3013767 00:25:08.168 11:53:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:25:08.168 11:53:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:08.168 11:53:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3013767 00:25:08.168 11:53:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:08.168 11:53:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:08.168 11:53:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- 
# echo 'killing process with pid 3013767' 00:25:08.168 killing process with pid 3013767 00:25:08.168 11:53:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 3013767 00:25:08.168 11:53:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 3013767 00:25:09.543 11:53:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:09.543 11:53:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:09.543 11:53:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:09.543 11:53:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:25:09.543 11:53:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:25:09.543 11:53:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:09.543 11:53:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:25:09.543 11:53:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:09.543 11:53:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:09.544 11:53:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:09.544 11:53:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:09.544 11:53:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:11.448 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:11.448 00:25:11.448 real 0m8.181s 00:25:11.448 user 0m7.715s 
00:25:11.448 sys 0m2.761s 00:25:11.448 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:11.448 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:11.448 ************************************ 00:25:11.448 END TEST nvmf_control_msg_list 00:25:11.448 ************************************ 00:25:11.448 11:53:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:25:11.448 11:53:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:11.448 11:53:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:11.448 11:53:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:11.448 ************************************ 00:25:11.448 START TEST nvmf_wait_for_buf 00:25:11.448 ************************************ 00:25:11.448 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:25:11.448 * Looking for test storage... 
00:25:11.448 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:11.448 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:11.448 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:25:11.448 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:11.710 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:11.710 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:11.710 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:11.710 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:11.710 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:25:11.710 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:25:11.710 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:25:11.710 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:25:11.710 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:25:11.710 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:25:11.710 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:25:11.710 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:11.710 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:25:11.710 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:25:11.710 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:11.710 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:11.710 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:25:11.710 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:25:11.710 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:11.710 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:25:11.710 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:25:11.710 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:25:11.710 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:25:11.710 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:11.710 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:25:11.710 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:25:11.710 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:11.710 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:11.710 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:25:11.710 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:11.710 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # 
export 'LCOV_OPTS= 00:25:11.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:11.710 --rc genhtml_branch_coverage=1 00:25:11.710 --rc genhtml_function_coverage=1 00:25:11.710 --rc genhtml_legend=1 00:25:11.710 --rc geninfo_all_blocks=1 00:25:11.710 --rc geninfo_unexecuted_blocks=1 00:25:11.710 00:25:11.710 ' 00:25:11.710 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:11.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:11.710 --rc genhtml_branch_coverage=1 00:25:11.710 --rc genhtml_function_coverage=1 00:25:11.710 --rc genhtml_legend=1 00:25:11.710 --rc geninfo_all_blocks=1 00:25:11.710 --rc geninfo_unexecuted_blocks=1 00:25:11.710 00:25:11.710 ' 00:25:11.710 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:11.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:11.710 --rc genhtml_branch_coverage=1 00:25:11.710 --rc genhtml_function_coverage=1 00:25:11.710 --rc genhtml_legend=1 00:25:11.710 --rc geninfo_all_blocks=1 00:25:11.710 --rc geninfo_unexecuted_blocks=1 00:25:11.710 00:25:11.710 ' 00:25:11.710 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:11.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:11.710 --rc genhtml_branch_coverage=1 00:25:11.710 --rc genhtml_function_coverage=1 00:25:11.710 --rc genhtml_legend=1 00:25:11.710 --rc geninfo_all_blocks=1 00:25:11.710 --rc geninfo_unexecuted_blocks=1 00:25:11.710 00:25:11.710 ' 00:25:11.710 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:11.710 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:25:11.710 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:25:11.710 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:11.710 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:11.710 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:11.710 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:11.710 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:11.710 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:11.710 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:11.710 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:11.710 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:11.710 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:11.710 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:11.710 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:11.710 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:11.710 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:11.710 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:11.710 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:11.710 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:25:11.710 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:11.710 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:11.710 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:11.710 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.710 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.710 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.710 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:25:11.710 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.710 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:25:11.710 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:11.710 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:11.710 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:11.710 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:25:11.710 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:11.710 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:11.710 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:11.710 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:11.710 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:11.710 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:11.710 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:25:11.711 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:11.711 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:11.711 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:11.711 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:11.711 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:11.711 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:11.711 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:11.711 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:11.711 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:11.711 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:25:11.711 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:25:11.711 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:13.609 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:13.609 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:25:13.609 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:13.868 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:13.868 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:13.868 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:13.868 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:13.868 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:25:13.868 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:13.868 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:25:13.868 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:25:13.868 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:25:13.868 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:25:13.868 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:25:13.868 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:25:13.868 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:13.868 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:13.868 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:13.868 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:13.868 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:13.868 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:13.868 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:13.868 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:13.868 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:13.868 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:13.868 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:13.868 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:13.868 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:13.868 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:13.868 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:13.868 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:25:13.868 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:13.868 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:13.868 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:13.868 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:13.868 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:13.868 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:13.868 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:13.868 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:13.868 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:13.868 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:13.868 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:13.868 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:13.868 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:13.868 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:13.868 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:13.868 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:13.868 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:13.868 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:13.868 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:13.868 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:13.868 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:13.868 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:13.868 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:13.868 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:13.868 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:13.868 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:13.868 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:13.868 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:13.868 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:13.868 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:13.868 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:13.868 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:13.868 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:13.868 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:13.868 11:53:39 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:13.868 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:13.868 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:13.869 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:13.869 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:13.869 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:13.869 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:13.869 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:13.869 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:25:13.869 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:13.869 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:13.869 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:13.869 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:13.869 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:13.869 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:13.869 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:13.869 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:13.869 11:53:39 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:13.869 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:13.869 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:13.869 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:13.869 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:13.869 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:13.869 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:13.869 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:13.869 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:13.869 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:13.869 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:13.869 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:13.869 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:13.869 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:13.869 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:13.869 11:53:39 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:13.869 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:13.869 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:13.869 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:13.869 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.307 ms 00:25:13.869 00:25:13.869 --- 10.0.0.2 ping statistics --- 00:25:13.869 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:13.869 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:25:13.869 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:13.869 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:13.869 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:25:13.869 00:25:13.869 --- 10.0.0.1 ping statistics --- 00:25:13.869 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:13.869 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:25:13.869 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:13.869 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:25:13.869 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:13.869 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:13.869 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:13.869 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:13.869 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:13.869 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:13.869 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:13.869 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:25:13.869 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:13.869 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:13.869 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:13.869 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=3016131 00:25:13.869 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:13.869 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 3016131 00:25:13.869 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 3016131 ']' 00:25:13.869 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:13.869 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:13.869 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:13.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:13.869 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:13.869 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:14.127 [2024-11-18 11:53:39.754699] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:25:14.128 [2024-11-18 11:53:39.754846] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:14.128 [2024-11-18 11:53:39.908455] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:14.386 [2024-11-18 11:53:40.050264] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:14.386 [2024-11-18 11:53:40.050343] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:14.386 [2024-11-18 11:53:40.050369] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:14.386 [2024-11-18 11:53:40.050394] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:14.386 [2024-11-18 11:53:40.050414] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:14.386 [2024-11-18 11:53:40.052109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:14.952 11:53:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:14.952 11:53:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:25:14.952 11:53:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:14.952 11:53:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:14.952 11:53:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:14.952 11:53:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:14.952 11:53:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:25:14.952 11:53:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:25:14.952 11:53:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:25:14.953 11:53:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.953 11:53:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:14.953 
11:53:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.953 11:53:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:25:14.953 11:53:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.953 11:53:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:14.953 11:53:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.953 11:53:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:25:14.953 11:53:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.953 11:53:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:15.275 11:53:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.275 11:53:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:25:15.275 11:53:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.275 11:53:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:15.275 Malloc0 00:25:15.275 11:53:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.275 11:53:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:25:15.275 11:53:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.275 11:53:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@10 -- # set +x 00:25:15.275 [2024-11-18 11:53:41.084671] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:15.275 11:53:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.275 11:53:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:25:15.275 11:53:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.275 11:53:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:15.275 11:53:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.275 11:53:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:25:15.275 11:53:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.275 11:53:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:15.275 11:53:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.275 11:53:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:15.275 11:53:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.275 11:53:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:15.275 [2024-11-18 11:53:41.108959] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:15.275 11:53:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:25:15.275 11:53:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:15.534 [2024-11-18 11:53:41.266688] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:16.907 Initializing NVMe Controllers 00:25:16.907 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:16.907 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:25:16.907 Initialization complete. Launching workers. 00:25:16.907 ======================================================== 00:25:16.907 Latency(us) 00:25:16.907 Device Information : IOPS MiB/s Average min max 00:25:16.907 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 75.00 9.37 55633.37 26922.71 191479.99 00:25:16.907 ======================================================== 00:25:16.907 Total : 75.00 9.37 55633.37 26922.71 191479.99 00:25:16.907 00:25:16.907 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:25:16.907 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:25:16.907 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.907 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:17.165 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.165 11:53:42 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=1174 00:25:17.165 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 1174 -eq 0 ]] 00:25:17.165 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:25:17.165 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:25:17.165 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:17.165 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:25:17.165 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:17.165 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:25:17.165 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:17.165 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:17.165 rmmod nvme_tcp 00:25:17.165 rmmod nvme_fabrics 00:25:17.165 rmmod nvme_keyring 00:25:17.165 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:17.165 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:25:17.165 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:25:17.165 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 3016131 ']' 00:25:17.165 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 3016131 00:25:17.165 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 3016131 ']' 00:25:17.165 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 3016131 
00:25:17.165 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:25:17.165 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:17.165 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3016131 00:25:17.165 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:17.165 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:17.165 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3016131' 00:25:17.165 killing process with pid 3016131 00:25:17.165 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 3016131 00:25:17.165 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 3016131 00:25:18.538 11:53:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:18.538 11:53:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:18.538 11:53:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:18.538 11:53:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:25:18.538 11:53:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:25:18.538 11:53:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:18.538 11:53:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:25:18.538 11:53:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:18.538 11:53:44 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:18.538 11:53:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:18.538 11:53:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:18.538 11:53:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:20.444 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:20.444 00:25:20.444 real 0m8.869s 00:25:20.444 user 0m5.413s 00:25:20.444 sys 0m2.221s 00:25:20.444 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:20.444 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:20.444 ************************************ 00:25:20.444 END TEST nvmf_wait_for_buf 00:25:20.444 ************************************ 00:25:20.444 11:53:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:25:20.444 11:53:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:25:20.444 11:53:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:20.444 11:53:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:20.444 11:53:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:20.444 ************************************ 00:25:20.445 START TEST nvmf_fuzz 00:25:20.445 ************************************ 00:25:20.445 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh 
--transport=tcp 00:25:20.445 * Looking for test storage... 00:25:20.445 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:20.445 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:20.445 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:25:20.445 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:20.445 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:20.445 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:20.445 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:20.445 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:20.445 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:25:20.445 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:25:20.445 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:25:20.445 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:25:20.445 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:25:20.445 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:25:20.445 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:25:20.445 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:20.445 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:25:20.445 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:25:20.445 11:53:46 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:20.445 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:20.445 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:25:20.445 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:25:20.445 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:20.445 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:25:20.445 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:25:20.445 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:25:20.445 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:25:20.445 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:20.445 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:25:20.445 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:25:20.445 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:20.445 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:20.445 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:25:20.445 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:20.445 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:20.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:20.445 --rc genhtml_branch_coverage=1 00:25:20.445 --rc genhtml_function_coverage=1 
00:25:20.445 --rc genhtml_legend=1 00:25:20.445 --rc geninfo_all_blocks=1 00:25:20.445 --rc geninfo_unexecuted_blocks=1 00:25:20.445 00:25:20.445 ' 00:25:20.445 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:20.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:20.445 --rc genhtml_branch_coverage=1 00:25:20.445 --rc genhtml_function_coverage=1 00:25:20.445 --rc genhtml_legend=1 00:25:20.445 --rc geninfo_all_blocks=1 00:25:20.445 --rc geninfo_unexecuted_blocks=1 00:25:20.445 00:25:20.445 ' 00:25:20.445 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:20.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:20.445 --rc genhtml_branch_coverage=1 00:25:20.445 --rc genhtml_function_coverage=1 00:25:20.445 --rc genhtml_legend=1 00:25:20.445 --rc geninfo_all_blocks=1 00:25:20.445 --rc geninfo_unexecuted_blocks=1 00:25:20.445 00:25:20.445 ' 00:25:20.445 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:20.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:20.445 --rc genhtml_branch_coverage=1 00:25:20.445 --rc genhtml_function_coverage=1 00:25:20.445 --rc genhtml_legend=1 00:25:20.445 --rc geninfo_all_blocks=1 00:25:20.445 --rc geninfo_unexecuted_blocks=1 00:25:20.445 00:25:20.445 ' 00:25:20.445 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:20.445 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:25:20.445 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:20.445 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:20.445 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:20.445 
11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:20.445 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:20.445 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:20.445 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:20.445 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:20.445 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:20.445 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:20.445 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:20.445 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:20.445 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:20.445 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:20.445 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:20.445 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:20.445 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:20.445 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:25:20.445 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:20.445 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:20.445 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:20.445 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.445 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.445 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.445 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:25:20.445 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.445 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:25:20.445 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:20.445 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:20.445 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:20.445 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:20.445 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:20.445 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:20.445 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:20.445 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:20.445 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:20.446 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:20.446 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:25:20.446 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:20.446 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:20.446 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:20.446 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:20.446 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:20.446 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:20.446 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:20.446 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:20.446 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:20.446 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:20.446 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@309 -- # xtrace_disable 00:25:20.446 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
common/autotest_common.sh@10 -- # set +x 00:25:22.977 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:22.977 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # pci_devs=() 00:25:22.977 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:22.977 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:22.977 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:22.977 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:22.977 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:22.977 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # net_devs=() 00:25:22.977 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:22.977 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # e810=() 00:25:22.977 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # local -ga e810 00:25:22.977 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # x722=() 00:25:22.977 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # local -ga x722 00:25:22.977 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # mlx=() 00:25:22.977 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # local -ga mlx 00:25:22.977 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:22.977 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:22.977 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:22.977 11:53:48 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:22.977 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:22.977 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:22.977 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:22.977 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:22.977 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:22.977 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:22.977 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:22.977 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:22.977 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:22.977 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:22.977 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:22.977 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:22.977 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:22.977 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:22.977 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:22.977 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 
0000:0a:00.0 (0x8086 - 0x159b)' 00:25:22.977 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:22.977 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:22.977 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:22.977 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:22.977 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:22.977 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:22.977 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:22.977 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:22.977 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:22.977 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:22.977 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:22.977 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:22.977 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:22.977 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:22.977 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:22.977 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:22.977 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:22.977 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:22.977 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:22.977 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:22.978 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:22.978 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:22.978 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:22.978 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:22.978 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:22.978 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:22.978 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:22.978 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:22.978 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:22.978 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:22.978 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:22.978 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:22.978 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:22.978 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:22.978 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:22.978 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:22.978 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:25:22.978 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:22.978 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # is_hw=yes 00:25:22.978 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:22.978 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:22.978 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:22.978 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:22.978 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:22.978 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:22.978 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:22.978 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:22.978 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:22.978 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:22.978 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:22.978 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:22.978 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:22.978 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:22.978 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:22.978 11:53:48 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:22.978 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:22.978 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:22.978 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:22.978 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:22.978 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:22.978 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:22.978 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:22.978 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:22.978 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:22.978 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:22.978 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:22.978 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.239 ms 00:25:22.978 00:25:22.978 --- 10.0.0.2 ping statistics --- 00:25:22.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:22.978 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:25:22.978 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:22.978 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:22.978 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:25:22.978 00:25:22.978 --- 10.0.0.1 ping statistics --- 00:25:22.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:22.978 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:25:22.978 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:22.978 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@450 -- # return 0 00:25:22.978 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:22.978 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:22.978 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:22.978 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:22.978 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:22.978 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:22.978 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:22.978 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=3018611 00:25:22.978 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:25:22.978 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:25:22.978 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 3018611 00:25:22.978 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # '[' 
-z 3018611 ']' 00:25:22.978 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:22.978 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:22.978 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:22.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:22.978 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:22.978 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:23.908 11:53:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:23.908 11:53:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@868 -- # return 0 00:25:23.908 11:53:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:23.908 11:53:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.908 11:53:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:23.908 11:53:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.908 11:53:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:25:23.908 11:53:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.908 11:53:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:23.908 Malloc0 00:25:23.908 11:53:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.908 11:53:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:23.908 11:53:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.908 11:53:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:23.908 11:53:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.908 11:53:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:23.908 11:53:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.908 11:53:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:23.908 11:53:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.908 11:53:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:23.908 11:53:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.908 11:53:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:23.909 11:53:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.909 11:53:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:25:23.909 11:53:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:25:55.979 Fuzzing completed. 
Shutting down the fuzz application 00:25:55.980 00:25:55.980 Dumping successful admin opcodes: 00:25:55.980 8, 9, 10, 24, 00:25:55.980 Dumping successful io opcodes: 00:25:55.980 0, 9, 00:25:55.980 NS: 0x2000008efec0 I/O qp, Total commands completed: 330824, total successful commands: 1964, random_seed: 3036390464 00:25:55.980 NS: 0x2000008efec0 admin qp, Total commands completed: 41664, total successful commands: 339, random_seed: 3754747392 00:25:55.980 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:25:56.238 Fuzzing completed. Shutting down the fuzz application 00:25:56.238 00:25:56.238 Dumping successful admin opcodes: 00:25:56.238 24, 00:25:56.238 Dumping successful io opcodes: 00:25:56.238 00:25:56.238 NS: 0x2000008efec0 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 568602990 00:25:56.238 NS: 0x2000008efec0 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 568808202 00:25:56.238 11:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:56.238 11:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.238 11:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:56.238 11:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.238 11:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:25:56.238 11:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:25:56.238 11:54:22 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:56.238 11:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync 00:25:56.238 11:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:56.238 11:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e 00:25:56.238 11:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:56.238 11:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:56.238 rmmod nvme_tcp 00:25:56.498 rmmod nvme_fabrics 00:25:56.498 rmmod nvme_keyring 00:25:56.498 11:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:56.498 11:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e 00:25:56.498 11:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0 00:25:56.498 11:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@517 -- # '[' -n 3018611 ']' 00:25:56.498 11:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@518 -- # killprocess 3018611 00:25:56.498 11:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # '[' -z 3018611 ']' 00:25:56.498 11:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@958 -- # kill -0 3018611 00:25:56.498 11:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # uname 00:25:56.498 11:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:56.498 11:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3018611 00:25:56.498 11:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:56.498 11:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = 
sudo ']' 00:25:56.498 11:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3018611' 00:25:56.498 killing process with pid 3018611 00:25:56.498 11:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@973 -- # kill 3018611 00:25:56.498 11:54:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@978 -- # wait 3018611 00:25:57.879 11:54:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:57.879 11:54:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:57.879 11:54:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:57.879 11:54:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # iptr 00:25:57.879 11:54:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-save 00:25:57.879 11:54:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-restore 00:25:57.879 11:54:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:57.879 11:54:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:57.879 11:54:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:57.879 11:54:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:57.879 11:54:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:57.879 11:54:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:59.783 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:59.783 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:25:59.783 00:25:59.783 real 0m39.506s 00:25:59.783 user 0m57.042s 00:25:59.783 sys 0m12.729s 00:26:00.043 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:00.043 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:00.043 ************************************ 00:26:00.043 END TEST nvmf_fuzz 00:26:00.043 ************************************ 00:26:00.043 11:54:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:26:00.043 11:54:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:00.043 11:54:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:00.043 11:54:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:00.043 ************************************ 00:26:00.043 START TEST nvmf_multiconnection 00:26:00.043 ************************************ 00:26:00.043 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:26:00.043 * Looking for test storage... 
00:26:00.043 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:00.043 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:00.043 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1693 -- # lcov --version 00:26:00.043 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:00.043 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:00.043 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:00.043 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:00.043 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:00.043 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:26:00.043 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:26:00.043 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:26:00.043 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:26:00.043 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:26:00.043 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:26:00.043 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:26:00.043 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:00.043 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:26:00.043 11:54:25 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:26:00.043 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:00.043 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:00.043 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:26:00.043 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:26:00.043 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:00.043 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:26:00.043 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:26:00.043 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:26:00.043 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:26:00.043 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:00.043 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:26:00.043 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:26:00.043 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:00.043 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:00.043 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:26:00.043 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:26:00.043 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:00.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:00.043 --rc genhtml_branch_coverage=1 00:26:00.043 --rc genhtml_function_coverage=1 00:26:00.043 --rc genhtml_legend=1 00:26:00.043 --rc geninfo_all_blocks=1 00:26:00.043 --rc geninfo_unexecuted_blocks=1 00:26:00.043 00:26:00.043 ' 00:26:00.043 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:00.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:00.043 --rc genhtml_branch_coverage=1 00:26:00.043 --rc genhtml_function_coverage=1 00:26:00.043 --rc genhtml_legend=1 00:26:00.043 --rc geninfo_all_blocks=1 00:26:00.043 --rc geninfo_unexecuted_blocks=1 00:26:00.043 00:26:00.043 ' 00:26:00.043 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:00.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:00.043 --rc genhtml_branch_coverage=1 00:26:00.043 --rc genhtml_function_coverage=1 00:26:00.044 --rc genhtml_legend=1 00:26:00.044 --rc geninfo_all_blocks=1 00:26:00.044 --rc geninfo_unexecuted_blocks=1 00:26:00.044 00:26:00.044 ' 00:26:00.044 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:00.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:00.044 --rc genhtml_branch_coverage=1 00:26:00.044 --rc genhtml_function_coverage=1 00:26:00.044 --rc genhtml_legend=1 00:26:00.044 --rc geninfo_all_blocks=1 00:26:00.044 --rc geninfo_unexecuted_blocks=1 00:26:00.044 00:26:00.044 ' 00:26:00.044 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:00.044 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@7 -- # uname -s 00:26:00.044 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:00.044 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:00.044 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:00.044 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:00.044 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:00.044 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:00.044 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:00.044 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:00.044 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:00.044 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:00.044 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:00.044 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:00.044 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:00.044 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:00.044 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:00.044 11:54:25 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:00.044 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:00.044 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:26:00.044 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:00.044 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:00.044 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:00.044 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:00.044 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:00.044 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:00.044 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:26:00.044 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:00.044 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:26:00.044 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:00.044 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:00.044 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:00.044 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:00.044 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:00.044 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:00.044 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:00.044 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:00.044 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:00.044 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:00.044 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:26:00.044 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:00.044 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:26:00.044 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:26:00.044 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:00.044 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:00.044 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:00.044 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:00.044 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:00.044 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:00.044 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:00.044 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:00.044 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:00.044 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:00.044 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@309 -- # xtrace_disable 00:26:00.044 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:02.578 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:26:02.578 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # pci_devs=() 00:26:02.578 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:02.578 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:02.578 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:02.578 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:02.578 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:02.578 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # net_devs=() 00:26:02.578 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:02.578 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # e810=() 00:26:02.578 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # local -ga e810 00:26:02.578 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # x722=() 00:26:02.578 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # local -ga x722 00:26:02.578 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # mlx=() 00:26:02.578 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # local -ga mlx 00:26:02.578 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:02.578 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:02.578 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:02.578 11:54:27 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:02.578 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:02.578 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:02.578 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:02.578 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:02.578 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:02.578 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:02.579 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:02.579 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:02.579 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:02.579 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:02.579 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:02.579 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:02.579 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:02.579 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:02.579 11:54:27 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:02.579 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:02.579 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:02.579 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:02.579 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:02.579 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:02.579 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:02.579 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:02.579 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:02.579 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:02.579 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:02.579 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:02.579 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:02.579 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:02.579 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:02.579 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:02.579 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:02.579 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:02.579 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:02.579 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:02.579 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:02.579 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:02.579 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:02.579 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:02.579 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:02.579 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:02.579 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:02.579 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:02.579 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:02.579 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:02.579 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:02.579 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:02.579 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:02.579 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # [[ up == 
up ]] 00:26:02.579 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:02.579 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:02.579 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:02.579 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:02.579 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:02.579 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:02.579 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # is_hw=yes 00:26:02.579 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:02.579 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:02.579 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:02.579 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:02.579 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:02.579 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:02.579 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:02.579 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:02.579 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:02.579 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:02.579 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:02.579 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:02.579 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:02.579 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:02.579 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:02.579 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:02.579 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:02.579 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:02.579 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:02.579 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:02.579 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:02.579 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:02.579 11:54:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:02.579 11:54:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:02.579 11:54:28 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:02.579 11:54:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:02.579 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:02.579 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.151 ms 00:26:02.579 00:26:02.579 --- 10.0.0.2 ping statistics --- 00:26:02.579 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:02.579 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:26:02.579 11:54:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:02.579 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:02.579 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:26:02.579 00:26:02.579 --- 10.0.0.1 ping statistics --- 00:26:02.579 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:02.579 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:26:02.579 11:54:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:02.579 11:54:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@450 -- # return 0 00:26:02.579 11:54:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:02.579 11:54:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:02.579 11:54:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:02.579 11:54:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:02.579 11:54:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
00:26:02.579 11:54:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:02.579 11:54:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:02.579 11:54:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:26:02.579 11:54:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:02.579 11:54:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:02.579 11:54:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:02.579 11:54:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@509 -- # nvmfpid=3025213 00:26:02.579 11:54:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@510 -- # waitforlisten 3025213 00:26:02.579 11:54:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # '[' -z 3025213 ']' 00:26:02.579 11:54:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:02.579 11:54:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:02.579 11:54:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:02.579 11:54:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:02.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:02.579 11:54:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:02.580 11:54:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:02.580 [2024-11-18 11:54:28.187893] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:26:02.580 [2024-11-18 11:54:28.188054] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:02.580 [2024-11-18 11:54:28.347001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:02.838 [2024-11-18 11:54:28.490391] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:02.838 [2024-11-18 11:54:28.490483] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:02.838 [2024-11-18 11:54:28.490521] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:02.838 [2024-11-18 11:54:28.490547] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:02.838 [2024-11-18 11:54:28.490568] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:02.838 [2024-11-18 11:54:28.493486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:02.838 [2024-11-18 11:54:28.493557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:02.838 [2024-11-18 11:54:28.493585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:02.838 [2024-11-18 11:54:28.493594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:03.407 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:03.407 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@868 -- # return 0 00:26:03.407 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:03.407 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:03.407 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:03.407 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:03.407 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:03.407 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.407 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:03.408 [2024-11-18 11:54:29.200164] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:03.408 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.408 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:26:03.408 11:54:29 
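Interleaved through the trace, `nvmfappstart` and the first `rpc_cmd` reduce to a few commands: launch `nvmf_tgt` inside the namespace, wait for its RPC socket, then create the TCP transport. A sketch only, assuming a built SPDK tree at `SPDK_DIR` (the default below mirrors this job's workspace path) and root privileges; the `rpc_get_methods` polling loop is a stand-in for the suite's `waitforlisten` helper:

```shell
# Sketch of starting the target and creating the TCP transport, as done
# above via nvmfappstart and rpc_cmd.
SPDK_DIR="${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}"
RPC="$SPDK_DIR/scripts/rpc.py"

start_target() {
  # -i 0: shared-memory id, -e 0xFFFF: enable all tracepoint groups,
  # -m 0xF: reactors on cores 0-3 (matches the four "Reactor started" lines).
  ip netns exec cvl_0_0_ns_spdk \
    "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # Poll the default RPC socket until the app answers (waitforlisten equivalent).
  until "$RPC" rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
  # -u 8192: I/O unit size; -o comes from NVMF_TRANSPORT_OPTS in this run.
  "$RPC" nvmf_create_transport -t tcp -o -u 8192
}
```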
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:03.408 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:03.408 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.408 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:03.667 Malloc1 00:26:03.667 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.667 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:26:03.667 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.667 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:03.667 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.667 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:03.667 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.667 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:03.667 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.667 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:03.667 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.667 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:03.667 [2024-11-18 11:54:29.321194] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:03.667 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.667 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:03.667 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:26:03.667 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.667 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:03.667 Malloc2 00:26:03.667 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.667 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:26:03.667 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.667 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:03.667 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.667 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:26:03.667 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.667 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:26:03.667 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.667 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:26:03.667 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.667 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:03.667 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.667 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:03.667 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:26:03.667 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.667 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:03.667 Malloc3 00:26:03.667 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.667 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:26:03.667 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.667 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:03.667 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.667 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:26:03.667 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.667 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:03.667 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.667 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:26:03.667 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.667 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:03.667 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.667 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:03.667 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:26:03.667 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.667 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:03.927 Malloc4 00:26:03.927 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.927 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:26:03.927 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.927 
11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:03.927 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.927 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:26:03.927 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.927 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:03.927 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.927 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:26:03.927 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.927 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:03.927 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.927 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:03.927 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:26:03.927 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.927 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:03.927 Malloc5 00:26:03.927 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.927 11:54:29 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:26:03.927 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.927 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:03.927 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.927 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:26:03.927 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.927 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:03.927 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.927 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:26:03.927 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.927 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:03.927 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.927 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:03.927 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:26:03.927 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:26:03.927 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:03.927 Malloc6 00:26:03.927 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.927 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:26:03.927 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.927 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:03.927 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.927 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:26:03.927 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.927 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:04.187 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.187 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:26:04.187 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.187 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:04.187 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.187 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # 
for i in $(seq 1 $NVMF_SUBSYS) 00:26:04.187 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:26:04.187 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.187 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:04.187 Malloc7 00:26:04.187 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.187 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:26:04.187 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.187 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:04.187 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.187 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:26:04.187 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.187 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:04.187 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.187 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:26:04.187 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.187 11:54:29 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:04.187 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.187 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:04.188 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:26:04.188 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.188 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:04.188 Malloc8 00:26:04.188 11:54:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.188 11:54:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:26:04.188 11:54:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.188 11:54:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:04.188 11:54:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.188 11:54:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:26:04.188 11:54:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.188 11:54:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:04.188 11:54:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.188 11:54:30 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:26:04.188 11:54:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.188 11:54:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:04.188 11:54:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.188 11:54:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:04.188 11:54:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:26:04.188 11:54:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.188 11:54:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:04.448 Malloc9 00:26:04.448 11:54:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.448 11:54:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:26:04.448 11:54:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.448 11:54:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:04.448 11:54:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.448 11:54:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:26:04.448 11:54:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.448 11:54:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:04.448 11:54:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.448 11:54:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:26:04.448 11:54:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.448 11:54:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:04.448 11:54:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.448 11:54:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:04.448 11:54:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:26:04.448 11:54:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.448 11:54:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:04.448 Malloc10 00:26:04.448 11:54:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.448 11:54:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:26:04.448 11:54:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.448 11:54:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:04.448 11:54:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.448 11:54:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:26:04.448 11:54:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.448 11:54:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:04.448 11:54:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.448 11:54:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:26:04.448 11:54:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.448 11:54:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:04.448 11:54:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.449 11:54:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:04.449 11:54:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:26:04.449 11:54:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.449 11:54:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:04.449 Malloc11 00:26:04.449 11:54:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.449 11:54:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:26:04.449 
11:54:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.449 11:54:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:04.449 11:54:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.449 11:54:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:26:04.449 11:54:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.449 11:54:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:04.707 11:54:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.707 11:54:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:26:04.707 11:54:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.707 11:54:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:04.707 11:54:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.707 11:54:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:26:04.707 11:54:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:04.707 11:54:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
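Stripped of the xtrace noise, the eleven iterations above all perform the same four RPCs per subsystem. A dry-run sketch of that loop; `RPC` is a placeholder for SPDK's `scripts/rpc.py`, while the bdev sizes, NQNs, serials, and listener address match the trace:

```shell
# Body of the multiconnection.sh loop traced above: one malloc bdev per
# iteration, one subsystem, the namespace attached, and a TCP listener
# on 10.0.0.2:4420.
RPC="${RPC:-rpc.py}"     # placeholder; the job uses spdk/scripts/rpc.py
NVMF_SUBSYS=11

rpc() { if [[ "${DRY_RUN:-0}" == 1 ]]; then echo "$RPC $*"; else "$RPC" "$@"; fi; }

create_subsystems() {
  local i
  for i in $(seq 1 "$NVMF_SUBSYS"); do
    rpc bdev_malloc_create 64 512 -b "Malloc$i"   # 64 MiB bdev, 512-byte blocks
    rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
    rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
  done
}
```

`-a` makes each subsystem accept any host NQN and `-s SPDKn` sets the serial number the initiator later greps for via `lsblk`.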
00:26:05.274 11:54:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:26:05.274 11:54:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:05.274 11:54:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:05.274 11:54:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:05.274 11:54:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:07.178 11:54:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:07.178 11:54:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:07.178 11:54:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK1 00:26:07.178 11:54:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:07.178 11:54:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:07.178 11:54:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:07.178 11:54:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:07.178 11:54:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:26:08.115 11:54:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:26:08.115 11:54:33 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:08.115 11:54:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:08.115 11:54:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:08.115 11:54:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:10.017 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:10.017 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:10.017 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK2 00:26:10.017 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:10.017 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:10.017 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:10.017 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:10.017 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:26:10.585 11:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:26:10.585 11:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:10.585 11:54:36 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:10.585 11:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:10.585 11:54:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:12.552 11:54:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:12.552 11:54:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:12.552 11:54:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK3 00:26:12.552 11:54:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:12.552 11:54:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:12.552 11:54:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:12.552 11:54:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:12.552 11:54:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:26:13.488 11:54:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:26:13.488 11:54:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:13.489 11:54:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:13.489 
11:54:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:13.489 11:54:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:15.394 11:54:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:15.394 11:54:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:15.394 11:54:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK4 00:26:15.394 11:54:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:15.394 11:54:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:15.394 11:54:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:15.394 11:54:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:15.394 11:54:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:26:15.962 11:54:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:26:15.962 11:54:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:15.962 11:54:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:15.962 11:54:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:15.962 11:54:41 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:18.497 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:18.497 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:18.497 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK5 00:26:18.497 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:18.497 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:18.497 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:18.497 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:18.497 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:26:19.065 11:54:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:26:19.065 11:54:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:19.065 11:54:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:19.065 11:54:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:19.066 11:54:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:20.970 11:54:46 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:20.970 11:54:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:20.970 11:54:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK6 00:26:20.970 11:54:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:20.970 11:54:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:20.970 11:54:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:20.970 11:54:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:20.970 11:54:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:26:21.907 11:54:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:26:21.907 11:54:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:21.907 11:54:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:21.907 11:54:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:21.907 11:54:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:23.812 11:54:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:23.812 11:54:49 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:23.812 11:54:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK7 00:26:23.812 11:54:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:23.812 11:54:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:23.812 11:54:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:23.812 11:54:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:23.812 11:54:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:26:24.748 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:26:24.748 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:24.748 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:24.748 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:24.748 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:26.655 11:54:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:26.655 11:54:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:26.655 11:54:52 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK8 00:26:26.655 11:54:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:26.655 11:54:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:26.655 11:54:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:26.655 11:54:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:26.655 11:54:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:26:27.595 11:54:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:26:27.595 11:54:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:27.595 11:54:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:27.595 11:54:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:27.595 11:54:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:29.498 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:29.498 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:29.498 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK9 00:26:29.498 11:54:55 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:29.498 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:29.498 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:29.498 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:29.498 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:26:30.433 11:54:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:26:30.433 11:54:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:30.433 11:54:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:30.433 11:54:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:30.433 11:54:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:32.969 11:54:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:32.969 11:54:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:32.969 11:54:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK10 00:26:32.969 11:54:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:32.969 11:54:58 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:32.969 11:54:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:32.969 11:54:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:32.969 11:54:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:26:33.228 11:54:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:26:33.228 11:54:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:33.228 11:54:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:33.228 11:54:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:33.228 11:54:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:35.759 11:55:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:35.760 11:55:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:35.760 11:55:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK11 00:26:35.760 11:55:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:35.760 11:55:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:35.760 
11:55:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:35.760 11:55:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:26:35.760 [global] 00:26:35.760 thread=1 00:26:35.760 invalidate=1 00:26:35.760 rw=read 00:26:35.760 time_based=1 00:26:35.760 runtime=10 00:26:35.760 ioengine=libaio 00:26:35.760 direct=1 00:26:35.760 bs=262144 00:26:35.760 iodepth=64 00:26:35.760 norandommap=1 00:26:35.760 numjobs=1 00:26:35.760 00:26:35.760 [job0] 00:26:35.760 filename=/dev/nvme0n1 00:26:35.760 [job1] 00:26:35.760 filename=/dev/nvme10n1 00:26:35.760 [job2] 00:26:35.760 filename=/dev/nvme1n1 00:26:35.760 [job3] 00:26:35.760 filename=/dev/nvme2n1 00:26:35.760 [job4] 00:26:35.760 filename=/dev/nvme3n1 00:26:35.760 [job5] 00:26:35.760 filename=/dev/nvme4n1 00:26:35.760 [job6] 00:26:35.760 filename=/dev/nvme5n1 00:26:35.760 [job7] 00:26:35.760 filename=/dev/nvme6n1 00:26:35.760 [job8] 00:26:35.760 filename=/dev/nvme7n1 00:26:35.760 [job9] 00:26:35.760 filename=/dev/nvme8n1 00:26:35.760 [job10] 00:26:35.760 filename=/dev/nvme9n1 00:26:35.760 Could not set queue depth (nvme0n1) 00:26:35.760 Could not set queue depth (nvme10n1) 00:26:35.760 Could not set queue depth (nvme1n1) 00:26:35.760 Could not set queue depth (nvme2n1) 00:26:35.760 Could not set queue depth (nvme3n1) 00:26:35.760 Could not set queue depth (nvme4n1) 00:26:35.760 Could not set queue depth (nvme5n1) 00:26:35.760 Could not set queue depth (nvme6n1) 00:26:35.760 Could not set queue depth (nvme7n1) 00:26:35.760 Could not set queue depth (nvme8n1) 00:26:35.760 Could not set queue depth (nvme9n1) 00:26:35.760 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:35.760 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, 
ioengine=libaio, iodepth=64 00:26:35.760 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:35.760 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:35.760 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:35.760 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:35.760 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:35.760 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:35.760 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:35.760 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:35.760 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:35.760 fio-3.35 00:26:35.760 Starting 11 threads 00:26:47.967 00:26:47.967 job0: (groupid=0, jobs=1): err= 0: pid=3029609: Mon Nov 18 11:55:12 2024 00:26:47.967 read: IOPS=649, BW=162MiB/s (170MB/s)(1662MiB/10230msec) 00:26:47.967 slat (usec): min=11, max=669458, avg=1351.42, stdev=11083.29 00:26:47.967 clat (usec): min=1236, max=1130.0k, avg=97063.17, stdev=149809.65 00:26:47.967 lat (usec): min=1307, max=1501.7k, avg=98414.60, stdev=151523.48 00:26:47.967 clat percentiles (msec): 00:26:47.967 | 1.00th=[ 7], 5.00th=[ 9], 10.00th=[ 21], 20.00th=[ 36], 00:26:47.967 | 30.00th=[ 40], 40.00th=[ 43], 50.00th=[ 45], 60.00th=[ 51], 00:26:47.967 | 70.00th=[ 57], 80.00th=[ 111], 90.00th=[ 234], 95.00th=[ 401], 00:26:47.967 | 99.00th=[ 852], 99.50th=[ 1053], 99.90th=[ 1133], 99.95th=[ 1133], 00:26:47.967 | 99.99th=[ 1133] 00:26:47.967 bw ( KiB/s): 
min=31232, max=419840, per=22.03%, avg=168524.80, stdev=127195.01, samples=20 00:26:47.967 iops : min= 122, max= 1640, avg=658.30, stdev=496.86, samples=20 00:26:47.967 lat (msec) : 2=0.09%, 4=0.21%, 10=5.42%, 20=3.76%, 50=50.27% 00:26:47.967 lat (msec) : 100=18.51%, 250=13.02%, 500=6.00%, 750=1.32%, 1000=0.65% 00:26:47.967 lat (msec) : 2000=0.75% 00:26:47.967 cpu : usr=0.44%, sys=2.19%, ctx=1464, majf=0, minf=4097 00:26:47.967 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:26:47.967 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:47.967 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:47.967 issued rwts: total=6646,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:47.967 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:47.967 job1: (groupid=0, jobs=1): err= 0: pid=3029611: Mon Nov 18 11:55:12 2024 00:26:47.967 read: IOPS=103, BW=25.9MiB/s (27.1MB/s)(268MiB/10337msec) 00:26:47.967 slat (usec): min=8, max=427033, avg=6341.01, stdev=35412.77 00:26:47.967 clat (msec): min=43, max=1529, avg=611.34, stdev=351.96 00:26:47.967 lat (msec): min=43, max=1580, avg=617.69, stdev=355.53 00:26:47.967 clat percentiles (msec): 00:26:47.967 | 1.00th=[ 45], 5.00th=[ 150], 10.00th=[ 182], 20.00th=[ 251], 00:26:47.967 | 30.00th=[ 351], 40.00th=[ 443], 50.00th=[ 542], 60.00th=[ 768], 00:26:47.967 | 70.00th=[ 835], 80.00th=[ 919], 90.00th=[ 1036], 95.00th=[ 1217], 00:26:47.967 | 99.00th=[ 1435], 99.50th=[ 1452], 99.90th=[ 1452], 99.95th=[ 1536], 00:26:47.967 | 99.99th=[ 1536] 00:26:47.967 bw ( KiB/s): min= 3584, max=87552, per=3.54%, avg=27109.05, stdev=19287.20, samples=19 00:26:47.967 iops : min= 14, max= 342, avg=105.89, stdev=75.34, samples=19 00:26:47.967 lat (msec) : 50=2.52%, 100=0.37%, 250=16.54%, 500=29.81%, 750=9.35% 00:26:47.967 lat (msec) : 1000=28.60%, 2000=12.80% 00:26:47.967 cpu : usr=0.06%, sys=0.30%, ctx=93, majf=0, minf=4097 00:26:47.967 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 
8=0.7%, 16=1.5%, 32=3.0%, >=64=94.1% 00:26:47.967 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:47.967 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:47.967 issued rwts: total=1070,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:47.967 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:47.967 job2: (groupid=0, jobs=1): err= 0: pid=3029615: Mon Nov 18 11:55:12 2024 00:26:47.967 read: IOPS=219, BW=54.9MiB/s (57.5MB/s)(567MiB/10332msec) 00:26:47.967 slat (usec): min=8, max=627070, avg=3457.97, stdev=23183.69 00:26:47.967 clat (usec): min=1519, max=1252.6k, avg=287894.36, stdev=266121.66 00:26:47.967 lat (usec): min=1549, max=1466.8k, avg=291352.33, stdev=269644.75 00:26:47.967 clat percentiles (msec): 00:26:47.967 | 1.00th=[ 4], 5.00th=[ 20], 10.00th=[ 71], 20.00th=[ 109], 00:26:47.967 | 30.00th=[ 129], 40.00th=[ 142], 50.00th=[ 190], 60.00th=[ 232], 00:26:47.967 | 70.00th=[ 300], 80.00th=[ 443], 90.00th=[ 751], 95.00th=[ 844], 00:26:47.967 | 99.00th=[ 1217], 99.50th=[ 1234], 99.90th=[ 1250], 99.95th=[ 1250], 00:26:47.967 | 99.99th=[ 1250] 00:26:47.967 bw ( KiB/s): min=11264, max=149504, per=7.37%, avg=56403.25, stdev=39992.72, samples=20 00:26:47.967 iops : min= 44, max= 584, avg=220.30, stdev=156.22, samples=20 00:26:47.967 lat (msec) : 2=0.09%, 4=2.87%, 10=0.26%, 20=2.51%, 50=1.10% 00:26:47.967 lat (msec) : 100=10.32%, 250=47.15%, 500=19.41%, 750=6.53%, 1000=7.15% 00:26:47.967 lat (msec) : 2000=2.60% 00:26:47.967 cpu : usr=0.08%, sys=0.63%, ctx=495, majf=0, minf=4097 00:26:47.967 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.4%, >=64=97.2% 00:26:47.967 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:47.967 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:47.967 issued rwts: total=2267,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:47.967 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:47.967 job3: (groupid=0, 
jobs=1): err= 0: pid=3029616: Mon Nov 18 11:55:12 2024 00:26:47.967 read: IOPS=89, BW=22.4MiB/s (23.5MB/s)(232MiB/10335msec) 00:26:47.967 slat (usec): min=13, max=391570, avg=10842.91, stdev=44898.36 00:26:47.967 clat (msec): min=33, max=1273, avg=702.03, stdev=239.01 00:26:47.967 lat (msec): min=33, max=1274, avg=712.87, stdev=243.62 00:26:47.967 clat percentiles (msec): 00:26:47.967 | 1.00th=[ 215], 5.00th=[ 255], 10.00th=[ 414], 20.00th=[ 489], 00:26:47.967 | 30.00th=[ 558], 40.00th=[ 634], 50.00th=[ 718], 60.00th=[ 802], 00:26:47.967 | 70.00th=[ 844], 80.00th=[ 911], 90.00th=[ 1020], 95.00th=[ 1070], 00:26:47.967 | 99.00th=[ 1217], 99.50th=[ 1250], 99.90th=[ 1267], 99.95th=[ 1267], 00:26:47.967 | 99.99th=[ 1267] 00:26:47.967 bw ( KiB/s): min=12288, max=35840, per=2.89%, avg=22096.00, stdev=6988.21, samples=20 00:26:47.967 iops : min= 48, max= 140, avg=86.30, stdev=27.28, samples=20 00:26:47.967 lat (msec) : 50=0.11%, 250=3.56%, 500=18.12%, 750=32.25%, 1000=34.52% 00:26:47.967 lat (msec) : 2000=11.43% 00:26:47.967 cpu : usr=0.06%, sys=0.34%, ctx=86, majf=0, minf=4097 00:26:47.967 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.7%, 32=3.5%, >=64=93.2% 00:26:47.967 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:47.967 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:47.967 issued rwts: total=927,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:47.967 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:47.967 job4: (groupid=0, jobs=1): err= 0: pid=3029617: Mon Nov 18 11:55:12 2024 00:26:47.967 read: IOPS=256, BW=64.2MiB/s (67.3MB/s)(664MiB/10336msec) 00:26:47.967 slat (usec): min=8, max=519965, avg=2481.29, stdev=20795.40 00:26:47.967 clat (usec): min=1656, max=1329.7k, avg=246498.22, stdev=276758.24 00:26:47.967 lat (usec): min=1702, max=1329.8k, avg=248979.51, stdev=280293.03 00:26:47.967 clat percentiles (msec): 00:26:47.967 | 1.00th=[ 3], 5.00th=[ 6], 10.00th=[ 14], 20.00th=[ 23], 00:26:47.967 
| 30.00th=[ 60], 40.00th=[ 82], 50.00th=[ 132], 60.00th=[ 207], 00:26:47.967 | 70.00th=[ 271], 80.00th=[ 418], 90.00th=[ 768], 95.00th=[ 860], 00:26:47.967 | 99.00th=[ 986], 99.50th=[ 1020], 99.90th=[ 1150], 99.95th=[ 1200], 00:26:47.967 | 99.99th=[ 1334] 00:26:47.967 bw ( KiB/s): min= 8704, max=189952, per=8.67%, avg=66329.60, stdev=50844.81, samples=20 00:26:47.967 iops : min= 34, max= 742, avg=259.10, stdev=198.61, samples=20 00:26:47.967 lat (msec) : 2=0.08%, 4=4.41%, 10=1.47%, 20=7.27%, 50=12.06% 00:26:47.967 lat (msec) : 100=19.89%, 250=22.16%, 500=15.49%, 750=6.29%, 1000=9.98% 00:26:47.967 lat (msec) : 2000=0.90% 00:26:47.967 cpu : usr=0.15%, sys=0.66%, ctx=962, majf=0, minf=4097 00:26:47.967 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:26:47.967 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:47.967 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:47.967 issued rwts: total=2654,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:47.967 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:47.967 job5: (groupid=0, jobs=1): err= 0: pid=3029624: Mon Nov 18 11:55:12 2024 00:26:47.967 read: IOPS=101, BW=25.3MiB/s (26.5MB/s)(261MiB/10336msec) 00:26:47.967 slat (usec): min=13, max=352981, avg=9702.86, stdev=37706.61 00:26:47.967 clat (msec): min=91, max=1112, avg=622.74, stdev=211.90 00:26:47.967 lat (msec): min=92, max=1112, avg=632.44, stdev=216.80 00:26:47.967 clat percentiles (msec): 00:26:47.967 | 1.00th=[ 102], 5.00th=[ 161], 10.00th=[ 388], 20.00th=[ 439], 00:26:47.967 | 30.00th=[ 542], 40.00th=[ 609], 50.00th=[ 625], 60.00th=[ 684], 00:26:47.967 | 70.00th=[ 735], 80.00th=[ 818], 90.00th=[ 877], 95.00th=[ 936], 00:26:47.967 | 99.00th=[ 1070], 99.50th=[ 1116], 99.90th=[ 1116], 99.95th=[ 1116], 00:26:47.967 | 99.99th=[ 1116] 00:26:47.968 bw ( KiB/s): min= 8704, max=44032, per=3.28%, avg=25113.60, stdev=8405.59, samples=20 00:26:47.968 iops : min= 34, max= 172, 
avg=98.10, stdev=32.83, samples=20 00:26:47.968 lat (msec) : 100=0.48%, 250=5.55%, 500=22.78%, 750=43.64%, 1000=23.44% 00:26:47.968 lat (msec) : 2000=4.11% 00:26:47.968 cpu : usr=0.03%, sys=0.38%, ctx=97, majf=0, minf=4097 00:26:47.968 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.5%, 32=3.1%, >=64=94.0% 00:26:47.968 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:47.968 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:47.968 issued rwts: total=1045,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:47.968 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:47.968 job6: (groupid=0, jobs=1): err= 0: pid=3029625: Mon Nov 18 11:55:12 2024 00:26:47.968 read: IOPS=87, BW=22.0MiB/s (23.0MB/s)(227MiB/10336msec) 00:26:47.968 slat (usec): min=13, max=373805, avg=11162.65, stdev=39410.18 00:26:47.968 clat (msec): min=195, max=1250, avg=716.71, stdev=235.11 00:26:47.968 lat (msec): min=251, max=1445, avg=727.88, stdev=238.80 00:26:47.968 clat percentiles (msec): 00:26:47.968 | 1.00th=[ 253], 5.00th=[ 257], 10.00th=[ 414], 20.00th=[ 472], 00:26:47.968 | 30.00th=[ 558], 40.00th=[ 659], 50.00th=[ 751], 60.00th=[ 802], 00:26:47.968 | 70.00th=[ 869], 80.00th=[ 936], 90.00th=[ 1003], 95.00th=[ 1083], 00:26:47.968 | 99.00th=[ 1116], 99.50th=[ 1200], 99.90th=[ 1250], 99.95th=[ 1250], 00:26:47.968 | 99.99th=[ 1250] 00:26:47.968 bw ( KiB/s): min= 6144, max=39936, per=2.82%, avg=21606.40, stdev=7340.32, samples=20 00:26:47.968 iops : min= 24, max= 156, avg=84.40, stdev=28.67, samples=20 00:26:47.968 lat (msec) : 250=0.11%, 500=22.58%, 750=28.19%, 1000=37.56%, 2000=11.56% 00:26:47.968 cpu : usr=0.03%, sys=0.38%, ctx=109, majf=0, minf=4097 00:26:47.968 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.8%, 32=3.5%, >=64=93.1% 00:26:47.968 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:47.968 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:47.968 issued rwts: 
total=908,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:47.968 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:47.968 job7: (groupid=0, jobs=1): err= 0: pid=3029626: Mon Nov 18 11:55:12 2024 00:26:47.968 read: IOPS=100, BW=25.2MiB/s (26.4MB/s)(261MiB/10336msec) 00:26:47.968 slat (usec): min=10, max=343724, avg=7194.95, stdev=34687.34 00:26:47.968 clat (msec): min=53, max=1537, avg=627.12, stdev=260.72 00:26:47.968 lat (msec): min=53, max=1537, avg=634.32, stdev=265.22 00:26:47.968 clat percentiles (msec): 00:26:47.968 | 1.00th=[ 55], 5.00th=[ 279], 10.00th=[ 300], 20.00th=[ 430], 00:26:47.968 | 30.00th=[ 451], 40.00th=[ 523], 50.00th=[ 625], 60.00th=[ 693], 00:26:47.968 | 70.00th=[ 760], 80.00th=[ 827], 90.00th=[ 995], 95.00th=[ 1116], 00:26:47.968 | 99.00th=[ 1267], 99.50th=[ 1334], 99.90th=[ 1435], 99.95th=[ 1536], 00:26:47.968 | 99.99th=[ 1536] 00:26:47.968 bw ( KiB/s): min= 9728, max=58880, per=3.27%, avg=25036.80, stdev=12071.52, samples=20 00:26:47.968 iops : min= 38, max= 230, avg=97.80, stdev=47.15, samples=20 00:26:47.968 lat (msec) : 100=2.21%, 250=0.58%, 500=36.08%, 750=29.85%, 1000=22.55% 00:26:47.968 lat (msec) : 2000=8.73% 00:26:47.968 cpu : usr=0.03%, sys=0.34%, ctx=110, majf=0, minf=3721 00:26:47.968 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.5%, 32=3.1%, >=64=94.0% 00:26:47.968 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:47.968 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:47.968 issued rwts: total=1042,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:47.968 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:47.968 job8: (groupid=0, jobs=1): err= 0: pid=3029627: Mon Nov 18 11:55:12 2024 00:26:47.968 read: IOPS=138, BW=34.5MiB/s (36.2MB/s)(357MiB/10340msec) 00:26:47.968 slat (usec): min=9, max=706521, avg=5364.25, stdev=38772.29 00:26:47.968 clat (usec): min=774, max=1664.6k, avg=457595.01, stdev=405040.84 00:26:47.968 lat (usec): min=835, max=1664.7k, 
avg=462959.26, stdev=410330.43 00:26:47.968 clat percentiles (usec): 00:26:47.968 | 1.00th=[ 873], 5.00th=[ 4228], 10.00th=[ 13566], 00:26:47.968 | 20.00th=[ 25035], 30.00th=[ 37487], 40.00th=[ 162530], 00:26:47.968 | 50.00th=[ 509608], 60.00th=[ 650118], 70.00th=[ 717226], 00:26:47.968 | 80.00th=[ 817890], 90.00th=[1019216], 95.00th=[1115685], 00:26:47.968 | 99.00th=[1400898], 99.50th=[1400898], 99.90th=[1669333], 00:26:47.968 | 99.95th=[1669333], 99.99th=[1669333] 00:26:47.968 bw ( KiB/s): min= 1536, max=143872, per=4.56%, avg=34918.40, stdev=35955.92, samples=20 00:26:47.968 iops : min= 6, max= 562, avg=136.40, stdev=140.45, samples=20 00:26:47.968 lat (usec) : 1000=2.10% 00:26:47.968 lat (msec) : 2=1.54%, 4=1.33%, 10=2.87%, 20=4.20%, 50=23.74% 00:26:47.968 lat (msec) : 100=3.57%, 250=1.82%, 500=8.47%, 750=25.42%, 1000=13.52% 00:26:47.968 lat (msec) : 2000=11.41% 00:26:47.968 cpu : usr=0.11%, sys=0.46%, ctx=514, majf=0, minf=4097 00:26:47.968 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.1%, 32=2.2%, >=64=95.6% 00:26:47.968 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:47.968 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:47.968 issued rwts: total=1428,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:47.968 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:47.968 job9: (groupid=0, jobs=1): err= 0: pid=3029628: Mon Nov 18 11:55:12 2024 00:26:47.968 read: IOPS=452, BW=113MiB/s (119MB/s)(1141MiB/10084msec) 00:26:47.968 slat (usec): min=8, max=310337, avg=913.98, stdev=9188.98 00:26:47.968 clat (usec): min=851, max=1258.2k, avg=140384.90, stdev=224730.92 00:26:47.968 lat (usec): min=874, max=1258.3k, avg=141298.89, stdev=226198.71 00:26:47.968 clat percentiles (msec): 00:26:47.968 | 1.00th=[ 5], 5.00th=[ 8], 10.00th=[ 9], 20.00th=[ 12], 00:26:47.968 | 30.00th=[ 24], 40.00th=[ 51], 50.00th=[ 64], 60.00th=[ 82], 00:26:47.968 | 70.00th=[ 101], 80.00th=[ 146], 90.00th=[ 447], 95.00th=[ 751], 
00:26:47.968 | 99.00th=[ 1062], 99.50th=[ 1083], 99.90th=[ 1167], 99.95th=[ 1183], 00:26:47.968 | 99.99th=[ 1250] 00:26:47.968 bw ( KiB/s): min=13312, max=269312, per=15.06%, avg=115225.60, stdev=83735.25, samples=20 00:26:47.968 iops : min= 52, max= 1052, avg=450.10, stdev=327.09, samples=20 00:26:47.968 lat (usec) : 1000=0.07% 00:26:47.968 lat (msec) : 2=0.13%, 4=0.61%, 10=12.66%, 20=13.32%, 50=13.17% 00:26:47.968 lat (msec) : 100=29.67%, 250=17.13%, 500=3.72%, 750=4.43%, 1000=3.72% 00:26:47.968 lat (msec) : 2000=1.36% 00:26:47.968 cpu : usr=0.24%, sys=1.16%, ctx=1259, majf=0, minf=4097 00:26:47.968 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:26:47.968 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:47.968 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:47.968 issued rwts: total=4564,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:47.968 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:47.968 job10: (groupid=0, jobs=1): err= 0: pid=3029629: Mon Nov 18 11:55:12 2024 00:26:47.968 read: IOPS=807, BW=202MiB/s (212MB/s)(2088MiB/10339msec) 00:26:47.968 slat (usec): min=11, max=238637, avg=1015.51, stdev=5712.83 00:26:47.968 clat (msec): min=2, max=1017, avg=78.13, stdev=86.33 00:26:47.968 lat (msec): min=2, max=1017, avg=79.14, stdev=87.20 00:26:47.968 clat percentiles (msec): 00:26:47.968 | 1.00th=[ 9], 5.00th=[ 25], 10.00th=[ 39], 20.00th=[ 44], 00:26:47.968 | 30.00th=[ 46], 40.00th=[ 49], 50.00th=[ 52], 60.00th=[ 57], 00:26:47.968 | 70.00th=[ 71], 80.00th=[ 89], 90.00th=[ 134], 95.00th=[ 220], 00:26:47.968 | 99.00th=[ 498], 99.50th=[ 642], 99.90th=[ 852], 99.95th=[ 852], 00:26:47.968 | 99.99th=[ 1020] 00:26:47.968 bw ( KiB/s): min=54784, max=361984, per=27.73%, avg=212152.80, stdev=100896.17, samples=20 00:26:47.968 iops : min= 214, max= 1414, avg=828.70, stdev=394.16, samples=20 00:26:47.968 lat (msec) : 4=0.02%, 10=3.29%, 20=0.44%, 50=41.18%, 100=38.69% 
00:26:47.968 lat (msec) : 250=11.93%, 500=3.51%, 750=0.57%, 1000=0.34%, 2000=0.02% 00:26:47.968 cpu : usr=0.52%, sys=2.87%, ctx=1757, majf=0, minf=4097 00:26:47.968 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:26:47.968 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:47.968 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:47.968 issued rwts: total=8351,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:47.968 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:47.968 00:26:47.968 Run status group 0 (all jobs): 00:26:47.968 READ: bw=747MiB/s (783MB/s), 22.0MiB/s-202MiB/s (23.0MB/s-212MB/s), io=7726MiB (8101MB), run=10084-10340msec 00:26:47.968 00:26:47.968 Disk stats (read/write): 00:26:47.968 nvme0n1: ios=13281/0, merge=0/0, ticks=1258880/0, in_queue=1258880, util=97.31% 00:26:47.968 nvme10n1: ios=2034/0, merge=0/0, ticks=1233271/0, in_queue=1233271, util=97.51% 00:26:47.968 nvme1n1: ios=4458/0, merge=0/0, ticks=1237847/0, in_queue=1237847, util=97.76% 00:26:47.968 nvme2n1: ios=1775/0, merge=0/0, ticks=1235544/0, in_queue=1235544, util=97.92% 00:26:47.968 nvme3n1: ios=5288/0, merge=0/0, ticks=1256872/0, in_queue=1256872, util=98.00% 00:26:47.968 nvme4n1: ios=1988/0, merge=0/0, ticks=1231878/0, in_queue=1231878, util=98.32% 00:26:47.968 nvme5n1: ios=1738/0, merge=0/0, ticks=1243689/0, in_queue=1243689, util=98.46% 00:26:47.968 nvme6n1: ios=1989/0, merge=0/0, ticks=1232881/0, in_queue=1232881, util=98.60% 00:26:47.968 nvme7n1: ios=2750/0, merge=0/0, ticks=1220823/0, in_queue=1220823, util=98.96% 00:26:47.968 nvme8n1: ios=8925/0, merge=0/0, ticks=1247270/0, in_queue=1247270, util=99.10% 00:26:47.968 nvme9n1: ios=16607/0, merge=0/0, ticks=1241440/0, in_queue=1241440, util=99.25% 00:26:47.968 11:55:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 
64 -t randwrite -r 10 00:26:47.968 [global] 00:26:47.968 thread=1 00:26:47.968 invalidate=1 00:26:47.968 rw=randwrite 00:26:47.968 time_based=1 00:26:47.968 runtime=10 00:26:47.968 ioengine=libaio 00:26:47.968 direct=1 00:26:47.968 bs=262144 00:26:47.968 iodepth=64 00:26:47.968 norandommap=1 00:26:47.968 numjobs=1 00:26:47.968 00:26:47.968 [job0] 00:26:47.968 filename=/dev/nvme0n1 00:26:47.968 [job1] 00:26:47.968 filename=/dev/nvme10n1 00:26:47.968 [job2] 00:26:47.968 filename=/dev/nvme1n1 00:26:47.968 [job3] 00:26:47.968 filename=/dev/nvme2n1 00:26:47.968 [job4] 00:26:47.968 filename=/dev/nvme3n1 00:26:47.968 [job5] 00:26:47.968 filename=/dev/nvme4n1 00:26:47.968 [job6] 00:26:47.968 filename=/dev/nvme5n1 00:26:47.969 [job7] 00:26:47.969 filename=/dev/nvme6n1 00:26:47.969 [job8] 00:26:47.969 filename=/dev/nvme7n1 00:26:47.969 [job9] 00:26:47.969 filename=/dev/nvme8n1 00:26:47.969 [job10] 00:26:47.969 filename=/dev/nvme9n1 00:26:47.969 Could not set queue depth (nvme0n1) 00:26:47.969 Could not set queue depth (nvme10n1) 00:26:47.969 Could not set queue depth (nvme1n1) 00:26:47.969 Could not set queue depth (nvme2n1) 00:26:47.969 Could not set queue depth (nvme3n1) 00:26:47.969 Could not set queue depth (nvme4n1) 00:26:47.969 Could not set queue depth (nvme5n1) 00:26:47.969 Could not set queue depth (nvme6n1) 00:26:47.969 Could not set queue depth (nvme7n1) 00:26:47.969 Could not set queue depth (nvme8n1) 00:26:47.969 Could not set queue depth (nvme9n1) 00:26:47.969 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:47.969 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:47.969 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:47.969 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 
00:26:47.969 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:47.969 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:47.969 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:47.969 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:47.969 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:47.969 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:47.969 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:47.969 fio-3.35 00:26:47.969 Starting 11 threads 00:26:57.947 00:26:57.947 job0: (groupid=0, jobs=1): err= 0: pid=3030353: Mon Nov 18 11:55:23 2024 00:26:57.948 write: IOPS=244, BW=61.2MiB/s (64.2MB/s)(623MiB/10183msec); 0 zone resets 00:26:57.948 slat (usec): min=23, max=78873, avg=2795.29, stdev=7615.97 00:26:57.948 clat (msec): min=5, max=595, avg=258.47, stdev=132.58 00:26:57.948 lat (msec): min=5, max=602, avg=261.26, stdev=134.39 00:26:57.948 clat percentiles (msec): 00:26:57.948 | 1.00th=[ 18], 5.00th=[ 42], 10.00th=[ 72], 20.00th=[ 128], 00:26:57.948 | 30.00th=[ 186], 40.00th=[ 224], 50.00th=[ 253], 60.00th=[ 309], 00:26:57.948 | 70.00th=[ 347], 80.00th=[ 376], 90.00th=[ 426], 95.00th=[ 477], 00:26:57.948 | 99.00th=[ 558], 99.50th=[ 567], 99.90th=[ 592], 99.95th=[ 592], 00:26:57.948 | 99.99th=[ 600] 00:26:57.948 bw ( KiB/s): min=30208, max=131584, per=7.24%, avg=62216.45, stdev=26993.47, samples=20 00:26:57.948 iops : min= 118, max= 514, avg=243.00, stdev=105.41, samples=20 00:26:57.948 lat (msec) : 10=0.08%, 20=1.16%, 50=4.57%, 100=8.34%, 250=35.46% 00:26:57.948 lat 
(msec) : 500=46.81%, 750=3.57% 00:26:57.948 cpu : usr=0.78%, sys=0.94%, ctx=1471, majf=0, minf=1 00:26:57.948 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.3%, >=64=97.5% 00:26:57.948 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:57.948 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:57.948 issued rwts: total=0,2493,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:57.948 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:57.948 job1: (groupid=0, jobs=1): err= 0: pid=3030365: Mon Nov 18 11:55:23 2024 00:26:57.948 write: IOPS=463, BW=116MiB/s (121MB/s)(1165MiB/10058msec); 0 zone resets 00:26:57.948 slat (usec): min=22, max=83580, avg=1412.91, stdev=3787.01 00:26:57.948 clat (usec): min=1838, max=450072, avg=136653.70, stdev=85097.04 00:26:57.948 lat (msec): min=2, max=456, avg=138.07, stdev=85.71 00:26:57.948 clat percentiles (msec): 00:26:57.948 | 1.00th=[ 11], 5.00th=[ 36], 10.00th=[ 62], 20.00th=[ 67], 00:26:57.948 | 30.00th=[ 74], 40.00th=[ 105], 50.00th=[ 133], 60.00th=[ 142], 00:26:57.948 | 70.00th=[ 148], 80.00th=[ 178], 90.00th=[ 264], 95.00th=[ 330], 00:26:57.948 | 99.00th=[ 405], 99.50th=[ 422], 99.90th=[ 443], 99.95th=[ 451], 00:26:57.948 | 99.99th=[ 451] 00:26:57.948 bw ( KiB/s): min=47198, max=213504, per=13.69%, avg=117662.30, stdev=52076.19, samples=20 00:26:57.948 iops : min= 184, max= 834, avg=459.60, stdev=203.45, samples=20 00:26:57.948 lat (msec) : 2=0.02%, 4=0.04%, 10=0.75%, 20=2.04%, 50=5.13% 00:26:57.948 lat (msec) : 100=30.74%, 250=49.84%, 500=11.44% 00:26:57.948 cpu : usr=1.73%, sys=1.76%, ctx=2321, majf=0, minf=1 00:26:57.948 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:26:57.948 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:57.948 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:57.948 issued rwts: total=0,4659,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:57.948 latency : 
target=0, window=0, percentile=100.00%, depth=64 00:26:57.948 job2: (groupid=0, jobs=1): err= 0: pid=3030366: Mon Nov 18 11:55:23 2024 00:26:57.948 write: IOPS=226, BW=56.7MiB/s (59.4MB/s)(577MiB/10185msec); 0 zone resets 00:26:57.948 slat (usec): min=22, max=118466, avg=4249.44, stdev=8602.16 00:26:57.948 clat (msec): min=92, max=606, avg=278.04, stdev=106.35 00:26:57.948 lat (msec): min=100, max=606, avg=282.29, stdev=107.38 00:26:57.948 clat percentiles (msec): 00:26:57.948 | 1.00th=[ 103], 5.00th=[ 126], 10.00th=[ 133], 20.00th=[ 159], 00:26:57.948 | 30.00th=[ 209], 40.00th=[ 249], 50.00th=[ 288], 60.00th=[ 313], 00:26:57.948 | 70.00th=[ 338], 80.00th=[ 368], 90.00th=[ 405], 95.00th=[ 430], 00:26:57.948 | 99.00th=[ 584], 99.50th=[ 600], 99.90th=[ 609], 99.95th=[ 609], 00:26:57.948 | 99.99th=[ 609] 00:26:57.948 bw ( KiB/s): min=32768, max=110592, per=6.68%, avg=57446.40, stdev=22241.76, samples=20 00:26:57.948 iops : min= 128, max= 432, avg=224.40, stdev=86.88, samples=20 00:26:57.948 lat (msec) : 100=0.30%, 250=40.94%, 500=56.41%, 750=2.34% 00:26:57.948 cpu : usr=0.82%, sys=0.64%, ctx=591, majf=0, minf=1 00:26:57.948 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.4%, >=64=97.3% 00:26:57.948 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:57.948 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:57.948 issued rwts: total=0,2308,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:57.948 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:57.948 job3: (groupid=0, jobs=1): err= 0: pid=3030367: Mon Nov 18 11:55:23 2024 00:26:57.948 write: IOPS=248, BW=62.2MiB/s (65.2MB/s)(637MiB/10234msec); 0 zone resets 00:26:57.948 slat (usec): min=23, max=128986, avg=2550.06, stdev=7505.40 00:26:57.948 clat (usec): min=1845, max=590710, avg=254446.25, stdev=142501.13 00:26:57.948 lat (usec): min=1930, max=613721, avg=256996.31, stdev=144085.34 00:26:57.948 clat percentiles (msec): 00:26:57.948 | 1.00th=[ 5], 
5.00th=[ 10], 10.00th=[ 41], 20.00th=[ 91], 00:26:57.948 | 30.00th=[ 178], 40.00th=[ 236], 50.00th=[ 271], 60.00th=[ 330], 00:26:57.948 | 70.00th=[ 359], 80.00th=[ 380], 90.00th=[ 414], 95.00th=[ 443], 00:26:57.948 | 99.00th=[ 535], 99.50th=[ 567], 99.90th=[ 592], 99.95th=[ 592], 00:26:57.948 | 99.99th=[ 592] 00:26:57.948 bw ( KiB/s): min=37376, max=193024, per=7.39%, avg=63564.80, stdev=33118.66, samples=20 00:26:57.948 iops : min= 146, max= 754, avg=248.30, stdev=129.37, samples=20 00:26:57.948 lat (msec) : 2=0.04%, 4=0.59%, 10=5.07%, 20=2.40%, 50=3.30% 00:26:57.948 lat (msec) : 100=9.43%, 250=24.90%, 500=51.41%, 750=2.87% 00:26:57.948 cpu : usr=0.82%, sys=0.90%, ctx=1473, majf=0, minf=1 00:26:57.948 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.3%, >=64=97.5% 00:26:57.948 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:57.948 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:57.948 issued rwts: total=0,2546,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:57.948 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:57.948 job4: (groupid=0, jobs=1): err= 0: pid=3030368: Mon Nov 18 11:55:23 2024 00:26:57.948 write: IOPS=365, BW=91.4MiB/s (95.8MB/s)(936MiB/10243msec); 0 zone resets 00:26:57.948 slat (usec): min=14, max=64833, avg=1768.46, stdev=5619.08 00:26:57.948 clat (usec): min=864, max=611059, avg=173216.53, stdev=138988.78 00:26:57.948 lat (usec): min=888, max=611095, avg=174984.99, stdev=140460.41 00:26:57.948 clat percentiles (msec): 00:26:57.948 | 1.00th=[ 3], 5.00th=[ 7], 10.00th=[ 15], 20.00th=[ 46], 00:26:57.948 | 30.00th=[ 97], 40.00th=[ 136], 50.00th=[ 144], 60.00th=[ 150], 00:26:57.948 | 70.00th=[ 178], 80.00th=[ 317], 90.00th=[ 409], 95.00th=[ 439], 00:26:57.948 | 99.00th=[ 535], 99.50th=[ 558], 99.90th=[ 592], 99.95th=[ 609], 00:26:57.948 | 99.99th=[ 609] 00:26:57.948 bw ( KiB/s): min=30720, max=290816, per=10.96%, avg=94220.75, stdev=60467.24, samples=20 00:26:57.948 iops : 
min= 120, max= 1136, avg=368.00, stdev=236.17, samples=20 00:26:57.948 lat (usec) : 1000=0.11% 00:26:57.948 lat (msec) : 2=0.83%, 4=1.79%, 10=5.10%, 20=5.02%, 50=8.04% 00:26:57.948 lat (msec) : 100=10.34%, 250=44.42%, 500=22.70%, 750=1.66% 00:26:57.948 cpu : usr=1.03%, sys=1.25%, ctx=2265, majf=0, minf=1 00:26:57.948 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:26:57.948 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:57.948 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:57.948 issued rwts: total=0,3744,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:57.948 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:57.948 job5: (groupid=0, jobs=1): err= 0: pid=3030370: Mon Nov 18 11:55:23 2024 00:26:57.948 write: IOPS=234, BW=58.5MiB/s (61.4MB/s)(596MiB/10190msec); 0 zone resets 00:26:57.948 slat (usec): min=15, max=87645, avg=3497.45, stdev=8202.72 00:26:57.948 clat (usec): min=893, max=606845, avg=269818.18, stdev=132176.23 00:26:57.948 lat (usec): min=918, max=606930, avg=273315.63, stdev=133680.70 00:26:57.948 clat percentiles (msec): 00:26:57.948 | 1.00th=[ 17], 5.00th=[ 36], 10.00th=[ 104], 20.00th=[ 144], 00:26:57.948 | 30.00th=[ 197], 40.00th=[ 232], 50.00th=[ 262], 60.00th=[ 317], 00:26:57.948 | 70.00th=[ 359], 80.00th=[ 384], 90.00th=[ 430], 95.00th=[ 502], 00:26:57.948 | 99.00th=[ 575], 99.50th=[ 600], 99.90th=[ 609], 99.95th=[ 609], 00:26:57.948 | 99.99th=[ 609] 00:26:57.948 bw ( KiB/s): min=32768, max=122613, per=6.91%, avg=59429.85, stdev=25034.55, samples=20 00:26:57.948 iops : min= 128, max= 478, avg=232.10, stdev=97.66, samples=20 00:26:57.948 lat (usec) : 1000=0.17% 00:26:57.948 lat (msec) : 2=0.63%, 4=0.08%, 20=1.17%, 50=3.98%, 100=3.35% 00:26:57.948 lat (msec) : 250=37.78%, 500=47.92%, 750=4.91% 00:26:57.948 cpu : usr=0.75%, sys=0.76%, ctx=1001, majf=0, minf=2 00:26:57.948 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.3%, >=64=97.4% 
00:26:57.948 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:57.948 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:57.948 issued rwts: total=0,2385,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:57.948 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:57.948 job6: (groupid=0, jobs=1): err= 0: pid=3030371: Mon Nov 18 11:55:23 2024 00:26:57.948 write: IOPS=246, BW=61.6MiB/s (64.6MB/s)(631MiB/10242msec); 0 zone resets 00:26:57.948 slat (usec): min=16, max=37413, avg=3049.12, stdev=7474.12 00:26:57.948 clat (usec): min=1973, max=624216, avg=256460.26, stdev=127234.30 00:26:57.948 lat (msec): min=2, max=624, avg=259.51, stdev=128.96 00:26:57.948 clat percentiles (msec): 00:26:57.948 | 1.00th=[ 8], 5.00th=[ 20], 10.00th=[ 54], 20.00th=[ 126], 00:26:57.948 | 30.00th=[ 211], 40.00th=[ 239], 50.00th=[ 271], 60.00th=[ 313], 00:26:57.948 | 70.00th=[ 347], 80.00th=[ 372], 90.00th=[ 401], 95.00th=[ 426], 00:26:57.948 | 99.00th=[ 489], 99.50th=[ 558], 99.90th=[ 600], 99.95th=[ 625], 00:26:57.948 | 99.99th=[ 625] 00:26:57.948 bw ( KiB/s): min=40960, max=116736, per=7.33%, avg=62976.00, stdev=23228.76, samples=20 00:26:57.948 iops : min= 160, max= 456, avg=246.00, stdev=90.74, samples=20 00:26:57.948 lat (msec) : 2=0.04%, 4=0.12%, 10=1.58%, 20=3.49%, 50=3.80% 00:26:57.948 lat (msec) : 100=6.66%, 250=29.68%, 500=53.76%, 750=0.87% 00:26:57.948 cpu : usr=0.89%, sys=0.82%, ctx=1258, majf=0, minf=1 00:26:57.948 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.3%, >=64=97.5% 00:26:57.948 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:57.948 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:57.948 issued rwts: total=0,2524,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:57.949 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:57.949 job7: (groupid=0, jobs=1): err= 0: pid=3030372: Mon Nov 18 11:55:23 2024 00:26:57.949 write: IOPS=510, 
BW=128MiB/s (134MB/s)(1307MiB/10234msec); 0 zone resets 00:26:57.949 slat (usec): min=16, max=57320, avg=1413.87, stdev=3818.18 00:26:57.949 clat (usec): min=1148, max=602267, avg=123811.43, stdev=96953.21 00:26:57.949 lat (usec): min=1223, max=602307, avg=125225.30, stdev=97669.72 00:26:57.949 clat percentiles (msec): 00:26:57.949 | 1.00th=[ 6], 5.00th=[ 27], 10.00th=[ 52], 20.00th=[ 53], 00:26:57.949 | 30.00th=[ 55], 40.00th=[ 64], 50.00th=[ 95], 60.00th=[ 126], 00:26:57.949 | 70.00th=[ 142], 80.00th=[ 176], 90.00th=[ 279], 95.00th=[ 342], 00:26:57.949 | 99.00th=[ 418], 99.50th=[ 468], 99.90th=[ 584], 99.95th=[ 584], 00:26:57.949 | 99.99th=[ 600] 00:26:57.949 bw ( KiB/s): min=43520, max=304640, per=15.38%, avg=132206.40, stdev=79156.54, samples=20 00:26:57.949 iops : min= 170, max= 1190, avg=516.40, stdev=309.23, samples=20 00:26:57.949 lat (msec) : 2=0.13%, 4=0.54%, 10=1.70%, 20=1.86%, 50=3.98% 00:26:57.949 lat (msec) : 100=43.70%, 250=36.16%, 500=11.59%, 750=0.34% 00:26:57.949 cpu : usr=1.49%, sys=1.97%, ctx=2083, majf=0, minf=1 00:26:57.949 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:26:57.949 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:57.949 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:57.949 issued rwts: total=0,5227,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:57.949 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:57.949 job8: (groupid=0, jobs=1): err= 0: pid=3030373: Mon Nov 18 11:55:23 2024 00:26:57.949 write: IOPS=292, BW=73.2MiB/s (76.8MB/s)(740MiB/10107msec); 0 zone resets 00:26:57.949 slat (usec): min=19, max=121319, avg=2209.77, stdev=6589.73 00:26:57.949 clat (usec): min=1155, max=591059, avg=216150.84, stdev=112153.47 00:26:57.949 lat (usec): min=1197, max=591107, avg=218360.61, stdev=113421.75 00:26:57.949 clat percentiles (msec): 00:26:57.949 | 1.00th=[ 7], 5.00th=[ 52], 10.00th=[ 72], 20.00th=[ 123], 00:26:57.949 | 30.00th=[ 142], 
40.00th=[ 159], 50.00th=[ 215], 60.00th=[ 247], 00:26:57.949 | 70.00th=[ 275], 80.00th=[ 321], 90.00th=[ 368], 95.00th=[ 401], 00:26:57.949 | 99.00th=[ 489], 99.50th=[ 506], 99.90th=[ 575], 99.95th=[ 584], 00:26:57.949 | 99.99th=[ 592] 00:26:57.949 bw ( KiB/s): min=40960, max=132096, per=8.63%, avg=74168.30, stdev=24855.32, samples=20 00:26:57.949 iops : min= 160, max= 516, avg=289.70, stdev=97.11, samples=20 00:26:57.949 lat (msec) : 2=0.51%, 4=0.44%, 10=0.10%, 20=0.10%, 50=3.58% 00:26:57.949 lat (msec) : 100=11.28%, 250=45.91%, 500=37.33%, 750=0.74% 00:26:57.949 cpu : usr=0.86%, sys=1.12%, ctx=1670, majf=0, minf=1 00:26:57.949 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:26:57.949 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:57.949 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:57.949 issued rwts: total=0,2960,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:57.949 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:57.949 job9: (groupid=0, jobs=1): err= 0: pid=3030374: Mon Nov 18 11:55:23 2024 00:26:57.949 write: IOPS=287, BW=71.9MiB/s (75.4MB/s)(727MiB/10109msec); 0 zone resets 00:26:57.949 slat (usec): min=14, max=102929, avg=2730.09, stdev=7085.53 00:26:57.949 clat (usec): min=928, max=656200, avg=219652.09, stdev=144633.42 00:26:57.949 lat (usec): min=952, max=656277, avg=222382.18, stdev=146443.69 00:26:57.949 clat percentiles (usec): 00:26:57.949 | 1.00th=[ 1958], 5.00th=[ 18220], 10.00th=[ 52167], 20.00th=[ 63701], 00:26:57.949 | 30.00th=[129500], 40.00th=[154141], 50.00th=[206570], 60.00th=[240124], 00:26:57.949 | 70.00th=[287310], 80.00th=[362808], 90.00th=[425722], 95.00th=[480248], 00:26:57.949 | 99.00th=[541066], 99.50th=[557843], 99.90th=[650118], 99.95th=[650118], 00:26:57.949 | 99.99th=[658506] 00:26:57.949 bw ( KiB/s): min=30720, max=265728, per=8.47%, avg=72840.40, stdev=53660.73, samples=20 00:26:57.949 iops : min= 120, max= 1038, avg=284.50, 
stdev=209.61, samples=20 00:26:57.949 lat (usec) : 1000=0.03% 00:26:57.949 lat (msec) : 2=1.03%, 4=0.55%, 10=1.65%, 20=1.96%, 50=4.44% 00:26:57.949 lat (msec) : 100=17.23%, 250=36.04%, 500=32.91%, 750=4.16% 00:26:57.949 cpu : usr=0.79%, sys=0.97%, ctx=1309, majf=0, minf=1 00:26:57.949 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.8% 00:26:57.949 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:57.949 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:57.949 issued rwts: total=0,2908,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:57.949 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:57.949 job10: (groupid=0, jobs=1): err= 0: pid=3030375: Mon Nov 18 11:55:23 2024 00:26:57.949 write: IOPS=257, BW=64.5MiB/s (67.6MB/s)(660MiB/10239msec); 0 zone resets 00:26:57.949 slat (usec): min=16, max=59695, avg=2568.69, stdev=7032.77 00:26:57.949 clat (msec): min=2, max=662, avg=245.51, stdev=132.48 00:26:57.949 lat (msec): min=2, max=662, avg=248.08, stdev=134.02 00:26:57.949 clat percentiles (msec): 00:26:57.949 | 1.00th=[ 7], 5.00th=[ 15], 10.00th=[ 32], 20.00th=[ 127], 00:26:57.949 | 30.00th=[ 153], 40.00th=[ 228], 50.00th=[ 268], 60.00th=[ 300], 00:26:57.949 | 70.00th=[ 338], 80.00th=[ 368], 90.00th=[ 401], 95.00th=[ 426], 00:26:57.949 | 99.00th=[ 489], 99.50th=[ 550], 99.90th=[ 634], 99.95th=[ 659], 00:26:57.949 | 99.99th=[ 659] 00:26:57.949 bw ( KiB/s): min=34816, max=156672, per=7.67%, avg=65951.90, stdev=30602.19, samples=20 00:26:57.949 iops : min= 136, max= 612, avg=257.60, stdev=119.54, samples=20 00:26:57.949 lat (msec) : 4=0.19%, 10=2.95%, 20=5.80%, 50=2.95%, 100=4.55% 00:26:57.949 lat (msec) : 250=28.71%, 500=54.02%, 750=0.83% 00:26:57.949 cpu : usr=0.83%, sys=0.88%, ctx=1519, majf=0, minf=1 00:26:57.949 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:26:57.949 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:57.949 
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:57.949 issued rwts: total=0,2640,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:57.949 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:57.949 00:26:57.949 Run status group 0 (all jobs): 00:26:57.949 WRITE: bw=839MiB/s (880MB/s), 56.7MiB/s-128MiB/s (59.4MB/s-134MB/s), io=8599MiB (9016MB), run=10058-10243msec 00:26:57.949 00:26:57.949 Disk stats (read/write): 00:26:57.949 nvme0n1: ios=51/4978, merge=0/0, ticks=1187/1247556, in_queue=1248743, util=99.96% 00:26:57.949 nvme10n1: ios=52/9067, merge=0/0, ticks=1055/1224732, in_queue=1225787, util=100.00% 00:26:57.949 nvme1n1: ios=0/4605, merge=0/0, ticks=0/1236387, in_queue=1236387, util=97.65% 00:26:57.949 nvme2n1: ios=37/5055, merge=0/0, ticks=1445/1236952, in_queue=1238397, util=100.00% 00:26:57.949 nvme3n1: ios=15/7439, merge=0/0, ticks=105/1243383, in_queue=1243488, util=97.97% 00:26:57.949 nvme4n1: ios=0/4760, merge=0/0, ticks=0/1242783, in_queue=1242783, util=98.27% 00:26:57.949 nvme5n1: ios=33/5002, merge=0/0, ticks=1386/1239045, in_queue=1240431, util=99.90% 00:26:57.949 nvme6n1: ios=0/10417, merge=0/0, ticks=0/1242508, in_queue=1242508, util=98.53% 00:26:57.949 nvme7n1: ios=42/5737, merge=0/0, ticks=3518/1216914, in_queue=1220432, util=100.00% 00:26:57.949 nvme8n1: ios=0/5628, merge=0/0, ticks=0/1215213, in_queue=1215213, util=99.03% 00:26:57.949 nvme9n1: ios=0/5237, merge=0/0, ticks=0/1244075, in_queue=1244075, util=99.17% 00:26:57.949 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:26:57.949 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:26:57.949 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:57.949 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n 
nqn.2016-06.io.spdk:cnode1 00:26:57.949 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:57.949 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:26:57.949 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:57.949 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:57.949 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK1 00:26:57.949 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:57.949 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK1 00:26:57.949 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:57.949 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:57.949 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.949 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:57.949 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.949 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:57.949 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:26:58.248 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:26:58.248 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # 
waitforserial_disconnect SPDK2 00:26:58.248 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:58.248 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:58.248 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK2 00:26:58.248 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:58.248 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK2 00:26:58.248 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:58.248 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:58.248 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.248 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:58.248 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.248 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:58.248 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:26:58.530 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:26:58.530 11:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:26:58.530 11:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:58.530 11:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:58.530 11:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK3 00:26:58.530 11:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:58.530 11:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK3 00:26:58.530 11:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:58.530 11:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:26:58.530 11:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.530 11:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:58.530 11:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.530 11:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:58.530 11:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:26:58.789 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:26:58.789 11:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:26:58.789 11:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:58.789 11:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:58.789 11:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK4 00:26:58.789 11:55:24 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:58.789 11:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK4 00:26:58.789 11:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:58.789 11:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:26:58.789 11:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.789 11:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:58.789 11:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.789 11:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:58.789 11:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:26:59.047 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:26:59.047 11:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:26:59.047 11:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:59.047 11:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:59.047 11:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK5 00:26:59.047 11:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:59.047 11:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # 
grep -q -w SPDK5 00:26:59.306 11:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:59.307 11:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:26:59.307 11:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.307 11:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:59.307 11:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.307 11:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:59.307 11:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:26:59.567 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:26:59.567 11:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:26:59.567 11:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:59.567 11:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:59.567 11:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK6 00:26:59.567 11:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:59.567 11:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK6 00:26:59.567 11:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:59.567 11:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:26:59.567 11:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.567 11:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:59.567 11:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.567 11:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:59.567 11:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:26:59.827 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:26:59.827 11:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:26:59.827 11:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:59.827 11:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:59.827 11:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK7 00:26:59.827 11:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:59.827 11:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK7 00:26:59.827 11:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:59.827 11:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:26:59.827 11:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 
00:26:59.827 11:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:59.827 11:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.827 11:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:59.827 11:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:27:00.086 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:27:00.086 11:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:27:00.086 11:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:27:00.086 11:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:27:00.086 11:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK8 00:27:00.086 11:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:27:00.086 11:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK8 00:27:00.086 11:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:27:00.086 11:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:27:00.086 11:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.086 11:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:00.086 11:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:27:00.086 11:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:00.086 11:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:27:00.345 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:27:00.345 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:27:00.345 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:27:00.345 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:27:00.345 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK9 00:27:00.345 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:27:00.345 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK9 00:27:00.345 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:27:00.345 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:27:00.345 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.345 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:00.345 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.345 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:00.345 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:27:00.604 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:27:00.604 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:27:00.604 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:27:00.604 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:27:00.604 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK10 00:27:00.604 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:27:00.604 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK10 00:27:00.604 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:27:00.604 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:27:00.604 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.604 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:00.604 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.604 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:00.604 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:27:00.862 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:27:00.863 11:55:26 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:27:00.863 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:27:00.863 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:27:00.863 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK11 00:27:00.863 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:27:00.863 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK11 00:27:00.863 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:27:00.863 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:27:00.863 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.863 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:00.863 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.863 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:27:00.863 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:27:00.863 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:27:00.863 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:00.863 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 
00:27:00.863 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:00.863 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e 00:27:00.863 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:00.863 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:00.863 rmmod nvme_tcp 00:27:00.863 rmmod nvme_fabrics 00:27:00.863 rmmod nvme_keyring 00:27:00.863 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:00.863 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:27:00.863 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:27:00.863 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@517 -- # '[' -n 3025213 ']' 00:27:00.863 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@518 -- # killprocess 3025213 00:27:00.863 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # '[' -z 3025213 ']' 00:27:00.863 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@958 -- # kill -0 3025213 00:27:00.863 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # uname 00:27:00.863 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:00.863 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3025213 00:27:00.863 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:00.863 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@964 -- # '[' 
reactor_0 = sudo ']' 00:27:00.863 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3025213' 00:27:00.863 killing process with pid 3025213 00:27:00.863 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@973 -- # kill 3025213 00:27:00.863 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@978 -- # wait 3025213 00:27:04.152 11:55:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:04.152 11:55:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:04.152 11:55:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:04.152 11:55:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # iptr 00:27:04.152 11:55:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-save 00:27:04.152 11:55:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:04.152 11:55:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-restore 00:27:04.152 11:55:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:04.152 11:55:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:04.152 11:55:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:04.152 11:55:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:04.152 11:55:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:06.059 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:06.059 00:27:06.059 real 1m5.956s 00:27:06.059 user 3m51.073s 00:27:06.059 sys 0m16.746s 00:27:06.059 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:06.059 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:06.059 ************************************ 00:27:06.059 END TEST nvmf_multiconnection 00:27:06.059 ************************************ 00:27:06.059 11:55:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:27:06.059 11:55:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:06.059 11:55:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:06.059 11:55:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:06.059 ************************************ 00:27:06.059 START TEST nvmf_initiator_timeout 00:27:06.059 ************************************ 00:27:06.059 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:27:06.059 * Looking for test storage... 
00:27:06.059 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:06.059 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:06.059 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1693 -- # lcov --version 00:27:06.059 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:06.059 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:06.059 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:06.059 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:06.059 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:06.059 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:27:06.059 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:27:06.059 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:27:06.059 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:27:06.059 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:27:06.059 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:27:06.059 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:27:06.059 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:06.059 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in 
00:27:06.059 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:27:06.059 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:06.059 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:06.059 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:27:06.059 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:27:06.059 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:06.059 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:27:06.059 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:27:06.059 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:27:06.059 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:27:06.059 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:06.059 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:27:06.059 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:27:06.059 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:06.059 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:06.059 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:27:06.059 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:06.059 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:06.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:06.059 --rc genhtml_branch_coverage=1 00:27:06.059 --rc genhtml_function_coverage=1 00:27:06.059 --rc genhtml_legend=1 00:27:06.059 --rc geninfo_all_blocks=1 00:27:06.059 --rc geninfo_unexecuted_blocks=1 00:27:06.059 00:27:06.059 ' 00:27:06.059 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:06.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:06.059 --rc genhtml_branch_coverage=1 00:27:06.059 --rc genhtml_function_coverage=1 00:27:06.059 --rc genhtml_legend=1 00:27:06.059 --rc geninfo_all_blocks=1 00:27:06.059 --rc geninfo_unexecuted_blocks=1 00:27:06.059 00:27:06.059 ' 00:27:06.059 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:06.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:06.059 --rc genhtml_branch_coverage=1 00:27:06.059 --rc genhtml_function_coverage=1 00:27:06.059 --rc genhtml_legend=1 00:27:06.059 --rc geninfo_all_blocks=1 00:27:06.059 --rc geninfo_unexecuted_blocks=1 00:27:06.059 00:27:06.059 ' 00:27:06.059 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:06.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:06.059 --rc genhtml_branch_coverage=1 00:27:06.059 --rc genhtml_function_coverage=1 00:27:06.059 --rc genhtml_legend=1 00:27:06.059 --rc geninfo_all_blocks=1 00:27:06.059 --rc geninfo_unexecuted_blocks=1 00:27:06.059 00:27:06.059 ' 00:27:06.059 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:06.059 
11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:27:06.059 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:06.059 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:06.059 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:06.059 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:06.059 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:06.059 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:06.059 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:06.059 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:06.059 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:06.059 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:06.059 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:06.059 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:06.059 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:06.059 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:06.059 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:06.059 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:06.059 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:06.059 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:27:06.059 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:06.059 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:06.059 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:06.060 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.060 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.060 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.060 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:27:06.060 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.060 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:27:06.060 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:06.060 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:06.060 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:06.060 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:06.060 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:06.060 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:06.060 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:06.060 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:06.060 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:06.060 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:06.060 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:06.060 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:06.060 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:27:06.060 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:06.060 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:06.060 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:06.060 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:06.060 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:06.060 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:06.060 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:06.060 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:06.060 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:06.060 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:06.060 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@309 -- # xtrace_disable 00:27:06.060 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:08.598 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:08.598 11:55:33 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # pci_devs=() 00:27:08.598 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:08.598 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:08.598 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:08.598 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:08.598 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:08.598 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # net_devs=() 00:27:08.598 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:08.598 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # e810=() 00:27:08.598 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # local -ga e810 00:27:08.598 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # x722=() 00:27:08.598 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # local -ga x722 00:27:08.598 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # mlx=() 00:27:08.598 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # local -ga mlx 00:27:08.598 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:08.598 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:08.598 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:27:08.598 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:08.598 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:08.598 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:08.598 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:08.598 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:08.598 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:08.598 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:08.598 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:08.598 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:08.598 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:08.598 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:08.598 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:08.598 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:08.598 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:08.598 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
00:27:08.598 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:08.598 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:08.598 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:08.598 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:08.598 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:08.598 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:08.598 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:08.598 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:08.598 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:08.598 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:08.598 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:08.598 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:08.598 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:08.598 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:08.598 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:08.598 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:08.598 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:08.598 11:55:33 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:08.598 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:08.598 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:08.598 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:08.598 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:08.598 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:08.598 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:08.598 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:08.598 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:08.598 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:08.598 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:08.598 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:08.598 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:08.598 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:08.598 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:08.598 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:08.598 11:55:33 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:08.598 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:08.598 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:08.598 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:08.598 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:08.598 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:08.598 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:08.598 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # is_hw=yes 00:27:08.598 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:08.598 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:08.598 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:08.598 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:08.598 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:08.598 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:08.598 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:08.598 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:08.598 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:27:08.598 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:08.598 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:08.598 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:08.598 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:08.598 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:08.598 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:08.598 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:08.598 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:08.598 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:08.598 11:55:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:08.599 11:55:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:08.599 11:55:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:08.599 11:55:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:08.599 11:55:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:08.599 11:55:34 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:08.599 11:55:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:08.599 11:55:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:08.599 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:08.599 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.218 ms 00:27:08.599 00:27:08.599 --- 10.0.0.2 ping statistics --- 00:27:08.599 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:08.599 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:27:08.599 11:55:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:08.599 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:08.599 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:27:08.599 00:27:08.599 --- 10.0.0.1 ping statistics --- 00:27:08.599 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:08.599 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:27:08.599 11:55:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:08.599 11:55:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # return 0 00:27:08.599 11:55:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:08.599 11:55:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:08.599 11:55:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:08.599 11:55:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:08.599 11:55:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:08.599 11:55:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:08.599 11:55:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:08.599 11:55:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:27:08.599 11:55:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:08.599 11:55:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:08.599 11:55:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:08.599 11:55:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@509 -- # nvmfpid=3033967 
00:27:08.599 11:55:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:08.599 11:55:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@510 -- # waitforlisten 3033967 00:27:08.599 11:55:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # '[' -z 3033967 ']' 00:27:08.599 11:55:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:08.599 11:55:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:08.599 11:55:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:08.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:08.599 11:55:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:08.599 11:55:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:08.599 [2024-11-18 11:55:34.363333] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:27:08.599 [2024-11-18 11:55:34.363503] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:08.857 [2024-11-18 11:55:34.523210] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:08.857 [2024-11-18 11:55:34.653105] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:27:08.857 [2024-11-18 11:55:34.653181] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:08.857 [2024-11-18 11:55:34.653202] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:08.857 [2024-11-18 11:55:34.653221] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:08.857 [2024-11-18 11:55:34.653236] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:08.857 [2024-11-18 11:55:34.656004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:08.857 [2024-11-18 11:55:34.656068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:08.857 [2024-11-18 11:55:34.656114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:08.857 [2024-11-18 11:55:34.656121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:09.426 11:55:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:09.426 11:55:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@868 -- # return 0 00:27:09.426 11:55:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:09.426 11:55:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:09.426 11:55:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:09.686 11:55:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:09.686 11:55:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:27:09.686 
11:55:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:09.686 11:55:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.686 11:55:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:09.686 Malloc0 00:27:09.686 11:55:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.686 11:55:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:27:09.686 11:55:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.686 11:55:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:09.686 Delay0 00:27:09.686 11:55:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.686 11:55:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:09.686 11:55:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.686 11:55:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:09.686 [2024-11-18 11:55:35.432347] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:09.686 11:55:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.686 11:55:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:27:09.686 11:55:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.686 11:55:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:09.686 11:55:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.686 11:55:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:09.686 11:55:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.686 11:55:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:09.686 11:55:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.686 11:55:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:09.686 11:55:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.686 11:55:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:09.686 [2024-11-18 11:55:35.461913] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:09.686 11:55:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.686 11:55:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:27:10.255 11:55:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:27:10.255 
11:55:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # local i=0 00:27:10.255 11:55:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:27:10.255 11:55:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:27:10.255 11:55:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1209 -- # sleep 2 00:27:12.789 11:55:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:27:12.789 11:55:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:27:12.789 11:55:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:27:12.789 11:55:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:27:12.789 11:55:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:27:12.789 11:55:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # return 0 00:27:12.789 11:55:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=3034401 00:27:12.789 11:55:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:27:12.789 11:55:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:27:12.789 [global] 00:27:12.789 thread=1 00:27:12.789 invalidate=1 00:27:12.789 rw=write 00:27:12.789 time_based=1 00:27:12.789 runtime=60 00:27:12.789 ioengine=libaio 00:27:12.789 direct=1 00:27:12.789 bs=4096 00:27:12.789 
iodepth=1 00:27:12.789 norandommap=0 00:27:12.789 numjobs=1 00:27:12.789 00:27:12.789 verify_dump=1 00:27:12.789 verify_backlog=512 00:27:12.789 verify_state_save=0 00:27:12.789 do_verify=1 00:27:12.789 verify=crc32c-intel 00:27:12.789 [job0] 00:27:12.789 filename=/dev/nvme0n1 00:27:12.789 Could not set queue depth (nvme0n1) 00:27:12.789 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:27:12.789 fio-3.35 00:27:12.789 Starting 1 thread 00:27:15.326 11:55:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:27:15.326 11:55:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.326 11:55:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:15.326 true 00:27:15.326 11:55:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.326 11:55:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:27:15.326 11:55:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.327 11:55:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:15.327 true 00:27:15.327 11:55:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.327 11:55:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:27:15.327 11:55:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.327 11:55:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@10 -- # set +x 00:27:15.327 true 00:27:15.327 11:55:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.327 11:55:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:27:15.327 11:55:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.327 11:55:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:15.327 true 00:27:15.327 11:55:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.327 11:55:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:27:18.615 11:55:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:27:18.615 11:55:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.616 11:55:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:18.616 true 00:27:18.616 11:55:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.616 11:55:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:27:18.616 11:55:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.616 11:55:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:18.616 true 00:27:18.616 11:55:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.616 11:55:44 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:27:18.616 11:55:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.616 11:55:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:18.616 true 00:27:18.616 11:55:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.616 11:55:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:27:18.616 11:55:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.616 11:55:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:18.616 true 00:27:18.616 11:55:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.616 11:55:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:27:18.616 11:55:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 3034401 00:28:14.837 00:28:14.837 job0: (groupid=0, jobs=1): err= 0: pid=3034537: Mon Nov 18 11:56:38 2024 00:28:14.837 read: IOPS=58, BW=234KiB/s (239kB/s)(13.7MiB/60026msec) 00:28:14.837 slat (nsec): min=4214, max=52207, avg=11374.68, stdev=7426.48 00:28:14.837 clat (usec): min=253, max=40870k, avg=16783.70, stdev=689978.08 00:28:14.837 lat (usec): min=259, max=40870k, avg=16795.08, stdev=689978.07 00:28:14.837 clat percentiles (usec): 00:28:14.837 | 1.00th=[ 265], 5.00th=[ 273], 10.00th=[ 281], 00:28:14.837 | 20.00th=[ 289], 30.00th=[ 297], 40.00th=[ 310], 00:28:14.837 | 50.00th=[ 326], 60.00th=[ 363], 70.00th=[ 388], 00:28:14.837 | 80.00th=[ 404], 90.00th=[ 41157], 95.00th=[ 
41157], 00:28:14.837 | 99.00th=[ 41681], 99.50th=[ 42206], 99.90th=[ 42206], 00:28:14.837 | 99.95th=[ 42206], 99.99th=[17112761] 00:28:14.837 write: IOPS=59, BW=239KiB/s (245kB/s)(14.0MiB/60026msec); 0 zone resets 00:28:14.837 slat (usec): min=5, max=8743, avg=19.63, stdev=186.25 00:28:14.837 clat (usec): min=201, max=924, avg=277.86, stdev=60.50 00:28:14.837 lat (usec): min=209, max=9036, avg=297.49, stdev=197.32 00:28:14.837 clat percentiles (usec): 00:28:14.837 | 1.00th=[ 210], 5.00th=[ 217], 10.00th=[ 223], 20.00th=[ 231], 00:28:14.837 | 30.00th=[ 241], 40.00th=[ 247], 50.00th=[ 258], 60.00th=[ 269], 00:28:14.837 | 70.00th=[ 289], 80.00th=[ 326], 90.00th=[ 375], 95.00th=[ 396], 00:28:14.837 | 99.00th=[ 465], 99.50th=[ 486], 99.90th=[ 537], 99.95th=[ 644], 00:28:14.837 | 99.99th=[ 922] 00:28:14.837 bw ( KiB/s): min= 1424, max= 6344, per=100.00%, avg=3584.00, stdev=1933.39, samples=8 00:28:14.837 iops : min= 356, max= 1586, avg=896.00, stdev=483.35, samples=8 00:28:14.837 lat (usec) : 250=21.77%, 500=72.07%, 750=0.23%, 1000=0.06% 00:28:14.837 lat (msec) : 2=0.03%, 50=5.84%, >=2000=0.01% 00:28:14.837 cpu : usr=0.09%, sys=0.20%, ctx=7095, majf=0, minf=1 00:28:14.837 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:14.837 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:14.837 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:14.837 issued rwts: total=3509,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:14.837 latency : target=0, window=0, percentile=100.00%, depth=1 00:28:14.837 00:28:14.837 Run status group 0 (all jobs): 00:28:14.837 READ: bw=234KiB/s (239kB/s), 234KiB/s-234KiB/s (239kB/s-239kB/s), io=13.7MiB (14.4MB), run=60026-60026msec 00:28:14.837 WRITE: bw=239KiB/s (245kB/s), 239KiB/s-239KiB/s (245kB/s-245kB/s), io=14.0MiB (14.7MB), run=60026-60026msec 00:28:14.837 00:28:14.837 Disk stats (read/write): 00:28:14.837 nvme0n1: ios=3604/3584, merge=0/0, ticks=17913/956, 
in_queue=18869, util=99.64% 00:28:14.837 11:56:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:28:14.837 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:28:14.837 11:56:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:28:14.837 11:56:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # local i=0 00:28:14.837 11:56:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:28:14.837 11:56:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:28:14.837 11:56:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:28:14.837 11:56:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:28:14.837 11:56:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1235 -- # return 0 00:28:14.837 11:56:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:28:14.837 11:56:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:28:14.837 nvmf hotplug test: fio successful as expected 00:28:14.837 11:56:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:14.837 11:56:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.837 11:56:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:14.837 11:56:38 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.837 11:56:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:28:14.837 11:56:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:28:14.837 11:56:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:28:14.838 11:56:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:14.838 11:56:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:28:14.838 11:56:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:14.838 11:56:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:28:14.838 11:56:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:14.838 11:56:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:14.838 rmmod nvme_tcp 00:28:14.838 rmmod nvme_fabrics 00:28:14.838 rmmod nvme_keyring 00:28:14.838 11:56:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:14.838 11:56:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:28:14.838 11:56:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:28:14.838 11:56:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@517 -- # '[' -n 3033967 ']' 00:28:14.838 11:56:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@518 -- # killprocess 3033967 00:28:14.838 11:56:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # '[' -z 3033967 ']' 00:28:14.838 
11:56:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # kill -0 3033967 00:28:14.838 11:56:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # uname 00:28:14.838 11:56:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:14.838 11:56:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3033967 00:28:14.838 11:56:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:14.838 11:56:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:14.838 11:56:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3033967' 00:28:14.838 killing process with pid 3033967 00:28:14.838 11:56:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@973 -- # kill 3033967 00:28:14.838 11:56:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@978 -- # wait 3033967 00:28:14.838 11:56:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:14.838 11:56:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:14.838 11:56:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:14.838 11:56:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # iptr 00:28:14.838 11:56:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-save 00:28:14.838 11:56:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:14.838 11:56:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # 
iptables-restore 00:28:14.838 11:56:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:14.838 11:56:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:14.838 11:56:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:14.838 11:56:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:14.838 11:56:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:16.216 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:16.216 00:28:16.216 real 1m10.254s 00:28:16.216 user 4m14.899s 00:28:16.216 sys 0m7.719s 00:28:16.216 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:16.216 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:16.216 ************************************ 00:28:16.216 END TEST nvmf_initiator_timeout 00:28:16.216 ************************************ 00:28:16.216 11:56:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:28:16.216 11:56:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:28:16.216 11:56:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:28:16.216 11:56:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:28:16.216 11:56:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:18.117 11:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:18.117 11:56:43 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@315 -- # pci_devs=() 00:28:18.117 11:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:18.117 11:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:18.117 11:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:18.117 11:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:18.117 11:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:18.117 11:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:28:18.117 11:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:18.117 11:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:28:18.117 11:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:28:18.117 11:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:28:18.117 11:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:28:18.117 11:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:28:18.117 11:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:28:18.117 11:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:18.117 11:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:18.117 11:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:18.117 11:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:18.117 11:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:18.117 11:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:18.117 11:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 
-- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:18.117 11:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:18.117 11:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:18.117 11:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:18.117 11:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:18.117 11:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:18.117 11:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:18.117 11:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:18.117 11:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:18.117 11:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:18.117 11:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:18.117 11:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:18.117 11:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:18.117 11:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:18.117 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:18.117 11:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:18.117 11:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:18.117 11:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:18.117 11:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:18.117 11:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:28:18.117 11:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:18.117 11:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:18.117 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:18.117 11:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:18.117 11:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:18.117 11:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:18.117 11:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:18.117 11:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:18.117 11:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:18.117 11:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:18.117 11:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:18.117 11:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:18.117 11:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:18.117 11:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:18.117 11:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:18.117 11:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:18.117 11:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:18.117 11:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:18.117 11:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:18.117 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:18.117 11:56:43 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:18.117 11:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:18.117 11:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:18.117 11:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:18.117 11:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:18.117 11:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:18.117 11:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:18.117 11:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:18.117 11:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:18.117 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:18.117 11:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:18.117 11:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:18.117 11:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:18.117 11:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:28:18.117 11:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:28:18.117 11:56:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:18.117 11:56:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:18.117 11:56:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:18.117 ************************************ 00:28:18.117 START 
TEST nvmf_perf_adq 00:28:18.117 ************************************ 00:28:18.117 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:28:18.376 * Looking for test storage... 00:28:18.376 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:18.376 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:18.376 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lcov --version 00:28:18.376 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:18.376 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:18.376 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:18.376 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:18.376 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:18.377 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:28:18.377 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:28:18.377 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:28:18.377 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:28:18.377 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:28:18.377 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:28:18.377 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:28:18.377 11:56:44 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:18.377 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:28:18.377 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:28:18.377 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:18.377 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:18.377 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:28:18.377 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:28:18.377 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:18.377 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:28:18.377 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:28:18.377 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:28:18.377 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:28:18.377 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:18.377 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:28:18.377 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:28:18.377 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:18.377 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:18.377 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:28:18.377 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:18.377 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:18.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:18.377 --rc genhtml_branch_coverage=1 00:28:18.377 --rc genhtml_function_coverage=1 00:28:18.377 --rc genhtml_legend=1 00:28:18.377 --rc geninfo_all_blocks=1 00:28:18.377 --rc geninfo_unexecuted_blocks=1 00:28:18.377 00:28:18.377 ' 00:28:18.377 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:18.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:18.377 --rc genhtml_branch_coverage=1 00:28:18.377 --rc genhtml_function_coverage=1 00:28:18.377 --rc genhtml_legend=1 00:28:18.377 --rc geninfo_all_blocks=1 00:28:18.377 --rc geninfo_unexecuted_blocks=1 00:28:18.377 00:28:18.377 ' 00:28:18.377 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:18.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:18.377 --rc genhtml_branch_coverage=1 00:28:18.377 --rc genhtml_function_coverage=1 00:28:18.377 --rc genhtml_legend=1 00:28:18.377 --rc geninfo_all_blocks=1 00:28:18.377 --rc geninfo_unexecuted_blocks=1 00:28:18.377 00:28:18.377 ' 00:28:18.377 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:18.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:18.377 --rc genhtml_branch_coverage=1 00:28:18.377 --rc genhtml_function_coverage=1 00:28:18.377 --rc genhtml_legend=1 00:28:18.377 --rc geninfo_all_blocks=1 00:28:18.377 --rc geninfo_unexecuted_blocks=1 00:28:18.377 00:28:18.377 ' 00:28:18.377 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:18.377 
11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:28:18.377 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:18.377 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:18.377 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:18.377 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:18.377 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:18.377 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:18.377 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:18.377 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:18.377 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:18.377 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:18.377 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:18.377 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:18.377 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:18.377 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:18.377 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:18.377 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:18.377 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:18.377 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:28:18.377 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:18.377 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:18.377 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:18.377 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:18.377 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:18.377 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:18.377 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:28:18.377 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:18.377 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:28:18.377 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:18.377 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:18.377 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:18.377 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:18.377 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:18.377 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:18.377 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:18.377 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:18.377 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:18.377 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:18.377 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:28:18.377 11:56:44 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:28:18.377 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:20.321 11:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:20.321 11:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:28:20.321 11:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:20.321 11:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:20.321 11:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:20.321 11:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:20.321 11:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:20.321 11:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:28:20.321 11:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:20.321 11:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:28:20.321 11:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:28:20.321 11:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:28:20.321 11:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:28:20.321 11:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:28:20.321 11:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:28:20.321 11:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:20.321 11:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:20.321 11:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:20.321 11:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:20.321 11:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:20.321 11:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:20.321 11:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:20.321 11:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:20.321 11:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:20.321 11:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:20.321 11:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:20.321 11:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:20.321 11:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:20.321 11:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:20.321 11:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:20.321 11:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:20.321 11:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:20.321 11:56:46 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:20.321 11:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:20.321 11:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:20.321 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:20.322 11:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:20.322 11:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:20.322 11:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:20.322 11:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:20.322 11:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:20.322 11:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:20.322 11:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:20.322 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:20.322 11:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:20.322 11:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:20.322 11:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:20.322 11:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:20.322 11:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:20.322 11:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:20.322 11:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:20.322 11:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:20.322 11:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:20.322 11:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:20.322 11:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:20.322 11:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:20.322 11:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:20.322 11:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:20.322 11:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:20.322 11:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:20.322 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:20.322 11:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:20.322 11:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:20.322 11:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:20.322 11:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:20.322 11:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:20.322 11:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:20.322 11:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 
0 )) 00:28:20.322 11:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:20.322 11:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:20.322 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:20.322 11:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:20.322 11:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:20.322 11:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:20.322 11:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:28:20.322 11:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:28:20.322 11:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:28:20.322 11:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:28:20.322 11:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:28:20.911 11:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:28:23.457 11:56:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:28:28.790 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:28:28.790 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:28.790 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:28.790 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:28.790 11:56:54 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:28.790 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:28.790 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:28.790 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:28.790 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:28.790 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:28.790 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:28.790 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:28:28.790 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:28.790 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:28.790 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:28:28.790 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:28.790 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:28.790 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:28.790 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:28.790 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:28.790 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:28:28.790 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:28:28.790 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:28:28.790 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:28:28.790 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:28:28.790 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:28:28.790 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:28:28.790 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:28:28.790 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:28.790 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:28.790 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:28.790 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:28.790 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:28.790 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:28.790 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:28.790 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:28.790 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:28.790 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:28.790 11:56:54 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:28.790 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:28.790 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:28.790 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:28.790 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:28.790 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:28.790 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:28.790 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:28.790 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:28.790 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:28.790 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:28.790 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:28.790 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:28.790 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:28.790 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:28.790 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:28.790 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:28.790 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 
0000:0a:00.1 (0x8086 - 0x159b)' 00:28:28.790 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:28.790 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:28.790 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:28.790 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:28.790 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:28.790 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:28.790 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:28.790 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:28.790 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:28.790 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:28.790 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:28.790 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:28.790 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:28.790 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:28.790 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:28.790 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:28.790 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:28.790 Found net devices under 0000:0a:00.0: cvl_0_0 
00:28:28.790 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:28.790 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:28.790 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:28.790 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:28.790 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:28.790 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:28.790 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:28.790 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:28.790 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:28.790 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:28.790 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:28.790 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:28.790 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:28:28.790 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:28.790 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:28.790 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:28.790 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:28.790 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:28.791 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:28.791 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:28.791 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:28.791 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:28.791 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:28.791 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:28.791 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:28.791 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:28.791 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:28.791 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:28.791 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:28.791 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:28.791 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:28.791 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:28.791 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:28.791 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 
-- # ip link set cvl_0_1 up 00:28:28.791 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:28.791 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:28.791 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:28.791 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:28.791 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:28.791 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:28.791 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.330 ms 00:28:28.791 00:28:28.791 --- 10.0.0.2 ping statistics --- 00:28:28.791 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:28.791 rtt min/avg/max/mdev = 0.330/0.330/0.330/0.000 ms 00:28:28.791 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:28.791 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:28.791 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:28:28.791 00:28:28.791 --- 10.0.0.1 ping statistics --- 00:28:28.791 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:28.791 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:28:28.791 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:28.791 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:28:28.791 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:28.791 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:28.791 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:28.791 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:28.791 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:28.791 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:28.791 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:28.791 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:28:28.791 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:28.791 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:28.791 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:28.791 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=3046235 00:28:28.791 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:28:28.791 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 3046235 00:28:28.791 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 3046235 ']' 00:28:28.791 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:28.791 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:28.791 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:28.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:28.791 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:28.791 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:28.791 [2024-11-18 11:56:54.553971] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:28:28.791 [2024-11-18 11:56:54.554112] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:29.051 [2024-11-18 11:56:54.698205] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:29.051 [2024-11-18 11:56:54.822826] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:29.051 [2024-11-18 11:56:54.822902] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:28:29.051 [2024-11-18 11:56:54.822923] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:29.051 [2024-11-18 11:56:54.822944] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:29.051 [2024-11-18 11:56:54.822960] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:29.051 [2024-11-18 11:56:54.825512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:29.051 [2024-11-18 11:56:54.825571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:29.051 [2024-11-18 11:56:54.825614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:29.051 [2024-11-18 11:56:54.825620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:29.619 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:29.619 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:28:29.619 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:29.619 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:29.619 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:29.879 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:29.879 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:28:29.879 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:28:29.879 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:28:29.879 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.879 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:29.879 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.879 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:28:29.879 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:28:29.879 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.879 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:29.879 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.879 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:28:29.879 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.879 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:30.138 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.138 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:28:30.138 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.138 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:30.138 [2024-11-18 11:56:55.932047] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:30.138 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.138 
11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:30.138 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.138 11:56:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:30.395 Malloc1 00:28:30.395 11:56:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.396 11:56:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:30.396 11:56:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.396 11:56:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:30.396 11:56:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.396 11:56:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:30.396 11:56:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.396 11:56:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:30.396 11:56:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.396 11:56:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:30.396 11:56:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.396 11:56:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:30.396 [2024-11-18 11:56:56.054752] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:28:30.396 11:56:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.396 11:56:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=3046393 00:28:30.396 11:56:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:28:30.396 11:56:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:32.295 11:56:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:28:32.295 11:56:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.295 11:56:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:32.295 11:56:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.295 11:56:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:28:32.295 "tick_rate": 2700000000, 00:28:32.295 "poll_groups": [ 00:28:32.295 { 00:28:32.295 "name": "nvmf_tgt_poll_group_000", 00:28:32.295 "admin_qpairs": 1, 00:28:32.295 "io_qpairs": 1, 00:28:32.295 "current_admin_qpairs": 1, 00:28:32.295 "current_io_qpairs": 1, 00:28:32.295 "pending_bdev_io": 0, 00:28:32.295 "completed_nvme_io": 16755, 00:28:32.295 "transports": [ 00:28:32.295 { 00:28:32.295 "trtype": "TCP" 00:28:32.295 } 00:28:32.295 ] 00:28:32.295 }, 00:28:32.295 { 00:28:32.295 "name": "nvmf_tgt_poll_group_001", 00:28:32.295 "admin_qpairs": 0, 00:28:32.295 "io_qpairs": 1, 00:28:32.295 "current_admin_qpairs": 0, 00:28:32.295 "current_io_qpairs": 1, 00:28:32.295 "pending_bdev_io": 0, 00:28:32.295 "completed_nvme_io": 16425, 00:28:32.295 "transports": [ 
00:28:32.295 { 00:28:32.295 "trtype": "TCP" 00:28:32.295 } 00:28:32.295 ] 00:28:32.295 }, 00:28:32.295 { 00:28:32.295 "name": "nvmf_tgt_poll_group_002", 00:28:32.295 "admin_qpairs": 0, 00:28:32.295 "io_qpairs": 1, 00:28:32.295 "current_admin_qpairs": 0, 00:28:32.295 "current_io_qpairs": 1, 00:28:32.295 "pending_bdev_io": 0, 00:28:32.295 "completed_nvme_io": 16515, 00:28:32.295 "transports": [ 00:28:32.295 { 00:28:32.295 "trtype": "TCP" 00:28:32.295 } 00:28:32.295 ] 00:28:32.295 }, 00:28:32.295 { 00:28:32.295 "name": "nvmf_tgt_poll_group_003", 00:28:32.295 "admin_qpairs": 0, 00:28:32.295 "io_qpairs": 1, 00:28:32.295 "current_admin_qpairs": 0, 00:28:32.295 "current_io_qpairs": 1, 00:28:32.295 "pending_bdev_io": 0, 00:28:32.295 "completed_nvme_io": 17312, 00:28:32.295 "transports": [ 00:28:32.295 { 00:28:32.295 "trtype": "TCP" 00:28:32.295 } 00:28:32.295 ] 00:28:32.295 } 00:28:32.295 ] 00:28:32.295 }' 00:28:32.295 11:56:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:28:32.295 11:56:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:28:32.295 11:56:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:28:32.295 11:56:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:28:32.295 11:56:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 3046393 00:28:40.410 Initializing NVMe Controllers 00:28:40.410 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:40.410 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:28:40.410 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:28:40.410 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:28:40.410 Associating TCP (addr:10.0.0.2 
subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:28:40.410 Initialization complete. Launching workers. 00:28:40.410 ======================================================== 00:28:40.410 Latency(us) 00:28:40.410 Device Information : IOPS MiB/s Average min max 00:28:40.410 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 8854.81 34.59 7243.49 2681.48 45143.52 00:28:40.410 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 8603.53 33.61 7439.55 2659.63 13044.46 00:28:40.410 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 8793.22 34.35 7278.85 2749.25 11508.93 00:28:40.410 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 9163.69 35.80 6983.93 3189.13 13183.20 00:28:40.410 ======================================================== 00:28:40.410 Total : 35415.25 138.34 7232.74 2659.63 45143.52 00:28:40.410 00:28:40.410 11:57:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:28:40.410 11:57:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:40.410 11:57:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:28:40.411 11:57:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:40.411 11:57:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:28:40.411 11:57:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:40.411 11:57:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:40.411 rmmod nvme_tcp 00:28:40.669 rmmod nvme_fabrics 00:28:40.669 rmmod nvme_keyring 00:28:40.669 11:57:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:40.669 11:57:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:28:40.669 11:57:06 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:28:40.669 11:57:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 3046235 ']' 00:28:40.669 11:57:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 3046235 00:28:40.669 11:57:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 3046235 ']' 00:28:40.669 11:57:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 3046235 00:28:40.669 11:57:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:28:40.669 11:57:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:40.669 11:57:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3046235 00:28:40.669 11:57:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:40.669 11:57:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:40.669 11:57:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3046235' 00:28:40.669 killing process with pid 3046235 00:28:40.669 11:57:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 3046235 00:28:40.669 11:57:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 3046235 00:28:42.048 11:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:42.048 11:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:42.048 11:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:42.048 11:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:28:42.048 
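The teardown above uses the autotest `killprocess` helper: probe the target PID with `kill -0`, check the process name, send a signal, then `wait` for it to exit. A minimal sketch of that pattern (the function and variable names here are illustrative, not the actual `autotest_common.sh` helpers):

```shell
# Hedged sketch of the kill-and-wait teardown pattern seen in the log.
# Probe with `kill -0`, terminate, then reap the child. `wait` only works
# on children of the current shell, mirroring how the real helper runs in
# the same shell that launched nvmf_tgt.
killprocess_sketch() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 0   # nothing to do if already gone
    kill "$pid"                              # default SIGTERM
    wait "$pid" 2>/dev/null                  # reap; ignore the signal status
    return 0
}

sleep 60 &
bgpid=$!
killprocess_sketch "$bgpid"
kill -0 "$bgpid" 2>/dev/null && echo "still running" || echo "stopped"
```

The real helper additionally refuses to kill processes running as `sudo` and logs the PID before killing, as the `ps --no-headers -o comm=` and `echo 'killing process with pid ...'` records above show.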
11:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:28:42.048 11:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:42.048 11:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:28:42.048 11:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:42.048 11:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:42.048 11:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:42.048 11:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:42.048 11:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:43.953 11:57:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:43.953 11:57:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:28:43.953 11:57:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:28:43.953 11:57:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:28:44.893 11:57:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:28:47.436 11:57:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:28:52.788 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:28:52.788 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:52.788 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:52.788 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@476 -- # prepare_net_devs 00:28:52.788 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:52.788 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:52.788 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:52.788 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:52.788 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:52.788 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:52.788 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:52.788 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:28:52.788 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:52.788 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:52.788 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:28:52.789 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:52.789 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:52.789 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:52.789 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:52.789 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:52.789 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:28:52.789 11:57:17 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:52.789 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:28:52.789 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:28:52.789 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:28:52.789 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:28:52.789 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:28:52.789 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:28:52.789 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:52.789 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:52.789 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:52.789 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:52.789 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:52.789 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:52.789 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:52.789 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:52.789 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:52.789 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:52.789 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:52.789 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:52.789 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:52.789 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:52.789 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:52.789 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:52.789 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:52.789 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:52.789 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:52.789 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:52.789 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:52.789 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:52.789 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:52.789 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:52.789 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:52.789 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:52.789 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:52.789 11:57:17 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:52.789 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:52.789 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:52.789 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:52.789 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:52.789 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:52.789 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:52.789 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:52.789 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:52.789 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:52.789 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:52.789 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:52.789 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:52.789 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:52.789 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:52.789 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:52.789 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:52.789 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 
0000:0a:00.0: cvl_0_0' 00:28:52.789 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:52.789 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:52.789 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:52.789 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:52.789 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:52.789 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:52.789 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:52.789 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:52.789 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:52.789 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:52.789 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:52.789 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:52.789 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:52.789 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:28:52.789 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:52.789 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:52.789 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:52.789 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:52.789 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:52.789 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:52.789 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:52.789 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:52.789 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:52.789 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:52.789 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:52.789 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:52.789 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:52.789 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:52.789 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:52.789 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:52.789 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:52.789 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:52.789 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:52.789 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 
10.0.0.2/24 dev cvl_0_0 00:28:52.789 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:52.789 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:52.789 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:52.789 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:52.789 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:52.789 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:52.789 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:52.789 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:28:52.789 00:28:52.789 --- 10.0.0.2 ping statistics --- 00:28:52.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:52.789 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:28:52.789 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:52.789 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:52.789 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.087 ms 00:28:52.789 00:28:52.789 --- 10.0.0.1 ping statistics --- 00:28:52.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:52.789 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:28:52.789 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:52.789 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:28:52.790 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:52.790 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:52.790 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:52.790 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:52.790 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:52.790 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:52.790 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:52.790 11:57:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:28:52.790 11:57:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:28:52.790 11:57:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:28:52.790 11:57:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:28:52.790 net.core.busy_poll = 1 00:28:52.790 11:57:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:28:52.790 net.core.busy_read = 1 00:28:52.790 11:57:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:28:52.790 11:57:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:28:52.790 11:57:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:28:52.790 11:57:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:28:52.790 11:57:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:28:52.790 11:57:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:28:52.790 11:57:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:52.790 11:57:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:52.790 11:57:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:52.790 11:57:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=3049895 00:28:52.790 11:57:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:28:52.790 11:57:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 
3049895 00:28:52.790 11:57:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 3049895 ']' 00:28:52.790 11:57:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:52.790 11:57:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:52.790 11:57:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:52.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:52.790 11:57:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:52.790 11:57:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:52.790 [2024-11-18 11:57:18.308399] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:28:52.790 [2024-11-18 11:57:18.308574] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:52.790 [2024-11-18 11:57:18.455552] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:52.790 [2024-11-18 11:57:18.581410] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:52.790 [2024-11-18 11:57:18.581506] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:52.790 [2024-11-18 11:57:18.581531] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:52.790 [2024-11-18 11:57:18.581567] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:28:52.790 [2024-11-18 11:57:18.581585] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:52.790 [2024-11-18 11:57:18.584221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:52.790 [2024-11-18 11:57:18.584262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:52.790 [2024-11-18 11:57:18.584307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:52.790 [2024-11-18 11:57:18.584327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:53.729 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:53.729 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:28:53.729 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:53.729 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:53.729 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:53.729 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:53.729 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:28:53.729 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:28:53.729 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:28:53.729 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.729 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:53.729 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:28:53.729 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:28:53.729 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:28:53.729 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.729 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:53.729 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.729 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:28:53.729 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.729 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:53.988 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.988 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:28:53.988 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.988 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:53.988 [2024-11-18 11:57:19.743178] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:53.988 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.988 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:53.988 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.988 11:57:19 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:53.988 Malloc1 00:28:53.988 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.988 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:53.988 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.988 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:53.988 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.988 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:53.988 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.988 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:53.988 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.988 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:53.988 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.988 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:53.988 [2024-11-18 11:57:19.858737] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:53.988 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.988 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=3050065 
00:28:53.988 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:53.988 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:28:56.524 11:57:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:28:56.524 11:57:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.524 11:57:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:56.524 11:57:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.524 11:57:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:28:56.524 "tick_rate": 2700000000, 00:28:56.524 "poll_groups": [ 00:28:56.524 { 00:28:56.524 "name": "nvmf_tgt_poll_group_000", 00:28:56.524 "admin_qpairs": 1, 00:28:56.524 "io_qpairs": 2, 00:28:56.524 "current_admin_qpairs": 1, 00:28:56.524 "current_io_qpairs": 2, 00:28:56.524 "pending_bdev_io": 0, 00:28:56.524 "completed_nvme_io": 19716, 00:28:56.524 "transports": [ 00:28:56.524 { 00:28:56.524 "trtype": "TCP" 00:28:56.524 } 00:28:56.524 ] 00:28:56.524 }, 00:28:56.524 { 00:28:56.524 "name": "nvmf_tgt_poll_group_001", 00:28:56.524 "admin_qpairs": 0, 00:28:56.524 "io_qpairs": 2, 00:28:56.524 "current_admin_qpairs": 0, 00:28:56.524 "current_io_qpairs": 2, 00:28:56.524 "pending_bdev_io": 0, 00:28:56.524 "completed_nvme_io": 19523, 00:28:56.524 "transports": [ 00:28:56.524 { 00:28:56.524 "trtype": "TCP" 00:28:56.524 } 00:28:56.524 ] 00:28:56.524 }, 00:28:56.524 { 00:28:56.524 "name": "nvmf_tgt_poll_group_002", 00:28:56.524 "admin_qpairs": 0, 00:28:56.524 "io_qpairs": 0, 00:28:56.524 "current_admin_qpairs": 0, 
00:28:56.524 "current_io_qpairs": 0, 00:28:56.524 "pending_bdev_io": 0, 00:28:56.524 "completed_nvme_io": 0, 00:28:56.524 "transports": [ 00:28:56.524 { 00:28:56.524 "trtype": "TCP" 00:28:56.524 } 00:28:56.524 ] 00:28:56.524 }, 00:28:56.524 { 00:28:56.524 "name": "nvmf_tgt_poll_group_003", 00:28:56.524 "admin_qpairs": 0, 00:28:56.524 "io_qpairs": 0, 00:28:56.524 "current_admin_qpairs": 0, 00:28:56.524 "current_io_qpairs": 0, 00:28:56.524 "pending_bdev_io": 0, 00:28:56.524 "completed_nvme_io": 0, 00:28:56.524 "transports": [ 00:28:56.524 { 00:28:56.524 "trtype": "TCP" 00:28:56.524 } 00:28:56.524 ] 00:28:56.524 } 00:28:56.524 ] 00:28:56.524 }' 00:28:56.524 11:57:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:28:56.524 11:57:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:28:56.524 11:57:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:28:56.524 11:57:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:28:56.524 11:57:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 3050065 00:29:04.644 Initializing NVMe Controllers 00:29:04.644 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:04.644 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:29:04.644 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:29:04.644 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:29:04.644 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:29:04.644 Initialization complete. Launching workers. 
00:29:04.644 ======================================================== 00:29:04.644 Latency(us) 00:29:04.644 Device Information : IOPS MiB/s Average min max 00:29:04.644 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 5680.90 22.19 11293.35 1914.50 59107.67 00:29:04.644 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 5696.80 22.25 11242.84 2079.40 58536.93 00:29:04.644 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 4751.60 18.56 13477.48 2719.76 58076.91 00:29:04.644 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 4901.00 19.14 13067.73 2444.65 57955.78 00:29:04.644 ======================================================== 00:29:04.644 Total : 21030.30 82.15 12186.66 1914.50 59107.67 00:29:04.644 00:29:04.644 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:29:04.644 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:04.644 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:29:04.644 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:04.644 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:29:04.644 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:04.644 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:04.644 rmmod nvme_tcp 00:29:04.644 rmmod nvme_fabrics 00:29:04.644 rmmod nvme_keyring 00:29:04.644 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:04.644 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:29:04.644 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:29:04.644 11:57:30 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 3049895 ']' 00:29:04.644 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 3049895 00:29:04.644 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 3049895 ']' 00:29:04.644 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 3049895 00:29:04.644 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:29:04.644 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:04.644 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3049895 00:29:04.644 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:04.644 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:04.644 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3049895' 00:29:04.644 killing process with pid 3049895 00:29:04.644 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 3049895 00:29:04.644 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 3049895 00:29:05.581 11:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:05.581 11:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:05.581 11:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:05.581 11:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:29:05.839 11:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:29:05.839 
11:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:05.839 11:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:29:05.839 11:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:05.839 11:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:05.839 11:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:05.839 11:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:05.839 11:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:07.743 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:07.743 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:29:07.743 00:29:07.743 real 0m49.548s 00:29:07.743 user 2m52.282s 00:29:07.743 sys 0m10.476s 00:29:07.743 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:07.743 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:07.743 ************************************ 00:29:07.743 END TEST nvmf_perf_adq 00:29:07.743 ************************************ 00:29:07.743 11:57:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:29:07.743 11:57:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:07.743 11:57:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:07.743 11:57:33 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:29:07.743 ************************************ 00:29:07.743 START TEST nvmf_shutdown 00:29:07.743 ************************************ 00:29:07.743 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:29:07.743 * Looking for test storage... 00:29:07.743 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:07.743 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:07.743 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:29:07.743 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:08.002 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:08.002 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:08.002 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:08.002 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:08.002 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:29:08.002 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:29:08.002 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:29:08.002 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:29:08.002 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:29:08.002 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:29:08.002 11:57:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:29:08.002 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:08.002 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:29:08.002 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:29:08.002 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:08.002 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:08.002 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:29:08.002 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:29:08.002 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:08.002 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:29:08.002 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:29:08.002 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:29:08.002 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:29:08.002 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:08.002 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:29:08.002 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:29:08.002 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:08.002 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:08.002 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
scripts/common.sh@368 -- # return 0 00:29:08.002 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:08.002 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:08.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:08.002 --rc genhtml_branch_coverage=1 00:29:08.002 --rc genhtml_function_coverage=1 00:29:08.002 --rc genhtml_legend=1 00:29:08.002 --rc geninfo_all_blocks=1 00:29:08.002 --rc geninfo_unexecuted_blocks=1 00:29:08.002 00:29:08.002 ' 00:29:08.002 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:08.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:08.002 --rc genhtml_branch_coverage=1 00:29:08.002 --rc genhtml_function_coverage=1 00:29:08.002 --rc genhtml_legend=1 00:29:08.002 --rc geninfo_all_blocks=1 00:29:08.002 --rc geninfo_unexecuted_blocks=1 00:29:08.002 00:29:08.002 ' 00:29:08.002 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:08.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:08.002 --rc genhtml_branch_coverage=1 00:29:08.002 --rc genhtml_function_coverage=1 00:29:08.002 --rc genhtml_legend=1 00:29:08.002 --rc geninfo_all_blocks=1 00:29:08.002 --rc geninfo_unexecuted_blocks=1 00:29:08.002 00:29:08.002 ' 00:29:08.002 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:08.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:08.002 --rc genhtml_branch_coverage=1 00:29:08.002 --rc genhtml_function_coverage=1 00:29:08.002 --rc genhtml_legend=1 00:29:08.002 --rc geninfo_all_blocks=1 00:29:08.002 --rc geninfo_unexecuted_blocks=1 00:29:08.002 00:29:08.002 ' 00:29:08.002 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:08.002 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:29:08.002 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:08.002 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:08.002 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:08.002 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:08.002 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:08.002 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:08.002 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:08.002 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:08.002 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:08.002 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:08.002 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:08.002 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:08.002 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:08.002 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:08.002 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:29:08.002 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:08.002 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:08.002 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:29:08.002 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:08.002 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:08.002 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:08.002 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:08.002 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:08.003 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:08.003 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:29:08.003 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:08.003 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:29:08.003 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:08.003 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:08.003 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:08.003 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:08.003 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:08.003 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:08.003 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:08.003 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:08.003 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:08.003 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:08.003 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:29:08.003 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:29:08.003 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:29:08.003 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:08.003 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:08.003 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:08.003 ************************************ 00:29:08.003 START TEST nvmf_shutdown_tc1 00:29:08.003 ************************************ 00:29:08.003 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:29:08.003 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:29:08.003 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:29:08.003 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:08.003 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:08.003 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:08.003 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:08.003 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:08.003 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:08.003 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:29:08.003 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:08.003 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:08.003 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:08.003 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:08.003 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:10.532 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:10.532 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:10.532 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:10.532 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:10.532 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:10.532 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:10.532 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:10.532 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:29:10.532 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:10.532 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:29:10.532 11:57:35 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:29:10.532 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:29:10.532 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:29:10.532 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:29:10.532 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:10.532 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:10.532 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:10.532 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:10.533 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:10.533 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:10.533 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:10.533 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:10.533 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:10.533 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:10.533 11:57:35 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:10.533 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:10.533 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:10.533 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:10.533 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:10.533 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:10.533 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:10.533 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:10.533 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:10.533 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:10.533 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:10.533 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:10.533 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:10.533 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:10.533 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:10.533 11:57:35 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:10.533 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:10.533 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:10.533 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:10.533 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:10.533 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:10.533 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:10.533 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:10.533 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:10.533 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:10.533 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:10.533 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:10.533 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:10.533 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:10.533 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:10.533 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:10.533 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:10.533 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:10.533 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:10.533 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:10.533 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:10.533 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:10.533 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:10.533 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:10.533 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:10.533 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:10.533 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:10.533 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:10.533 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:10.533 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:10.533 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- 
# echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:10.533 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:10.533 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:10.533 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:10.533 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:29:10.533 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:10.533 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:10.533 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:10.533 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:10.533 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:10.533 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:10.533 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:10.533 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:10.533 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:10.533 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:10.533 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:10.533 11:57:35 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:10.533 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:10.533 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:10.533 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:10.533 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:10.533 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:10.533 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:10.533 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:10.533 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:10.533 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:10.533 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:10.533 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:10.533 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:10.533 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:10.533 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:10.533 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:10.533 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.299 ms 00:29:10.533 00:29:10.533 --- 10.0.0.2 ping statistics --- 00:29:10.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:10.533 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:29:10.533 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:10.533 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:10.533 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.092 ms 00:29:10.533 00:29:10.533 --- 10.0.0.1 ping statistics --- 00:29:10.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:10.533 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:29:10.533 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:10.533 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:29:10.533 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:10.533 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:10.533 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:10.534 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:10.534 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:10.534 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:10.534 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:10.534 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:10.534 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:10.534 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:10.534 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:10.534 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=3053361 00:29:10.534 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:10.534 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 3053361 00:29:10.534 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 3053361 ']' 00:29:10.534 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:10.534 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:10.534 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:29:10.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:10.534 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:10.534 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:10.534 [2024-11-18 11:57:36.067739] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:29:10.534 [2024-11-18 11:57:36.067894] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:10.534 [2024-11-18 11:57:36.235932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:10.534 [2024-11-18 11:57:36.380809] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:10.534 [2024-11-18 11:57:36.380884] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:10.534 [2024-11-18 11:57:36.380910] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:10.534 [2024-11-18 11:57:36.380934] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:10.534 [2024-11-18 11:57:36.380952] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
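The "Waiting for process to start up and listen on UNIX domain socket" line above comes from `waitforlisten`, which gates the test until the target's RPC socket (`/var/tmp/spdk.sock`) appears. A minimal sketch of that polling pattern — hedged: the real helper in `autotest_common.sh` also tracks the target pid and probes the socket over RPC; the function name and retry count here are assumptions:

```shell
# Sketch of a waitforlisten-style poll: succeed once the UNIX socket exists.
# NOTE: simplified assumption -- SPDK's actual helper additionally checks
# that the pid is still alive and probes the socket with rpc.py; this
# version only waits for the filesystem node to appear.
waitforlisten_sketch() {
  local sock=$1 retries=${2:-100}
  while (( retries-- > 0 )); do
    [[ -S $sock ]] && return 0   # -S: path exists and is a socket
    sleep 0.1
  done
  return 1                        # timed out waiting for the listener
}
```

In the log this gate is what separates the `nvmfappstart` step from the first `rpc_cmd` call against the freshly started target.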
00:29:10.534 [2024-11-18 11:57:36.383895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:10.534 [2024-11-18 11:57:36.384003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:10.534 [2024-11-18 11:57:36.384104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:10.534 [2024-11-18 11:57:36.384109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:11.471 11:57:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:11.471 11:57:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:29:11.471 11:57:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:11.471 11:57:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:11.471 11:57:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:11.471 11:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:11.471 11:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:11.471 11:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.471 11:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:11.471 [2024-11-18 11:57:37.019329] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:11.471 11:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.471 11:57:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:11.471 11:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:11.471 11:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:11.471 11:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:11.471 11:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:11.471 11:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:11.471 11:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:11.471 11:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:11.471 11:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:11.471 11:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:11.471 11:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:11.471 11:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:11.471 11:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:11.471 11:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:11.471 11:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
00:29:11.471 11:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:11.471 11:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:11.471 11:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:11.471 11:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:11.471 11:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:11.471 11:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:11.471 11:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:11.471 11:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:11.471 11:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:11.471 11:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:11.471 11:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:29:11.471 11:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.471 11:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:11.471 Malloc1 00:29:11.471 [2024-11-18 11:57:37.170184] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:11.471 Malloc2 00:29:11.471 Malloc3 00:29:11.731 Malloc4 00:29:11.731 Malloc5 00:29:11.990 Malloc6 00:29:11.990 Malloc7 00:29:11.990 Malloc8 00:29:12.249 Malloc9 
00:29:12.249 Malloc10 00:29:12.249 11:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.249 11:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:12.249 11:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:12.249 11:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:12.249 11:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=3053670 00:29:12.249 11:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 3053670 /var/tmp/bdevperf.sock 00:29:12.249 11:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:12.249 11:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:29:12.249 11:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 3053670 ']' 00:29:12.249 11:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:29:12.249 11:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:29:12.249 11:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:12.249 11:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:12.249 11:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:29:12.249 11:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:12.249 { 00:29:12.249 "params": { 00:29:12.249 "name": "Nvme$subsystem", 00:29:12.249 "trtype": "$TEST_TRANSPORT", 00:29:12.249 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:12.249 "adrfam": "ipv4", 00:29:12.249 "trsvcid": "$NVMF_PORT", 00:29:12.249 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:12.249 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:12.249 "hdgst": ${hdgst:-false}, 00:29:12.249 "ddgst": ${ddgst:-false} 00:29:12.249 }, 00:29:12.249 "method": "bdev_nvme_attach_controller" 00:29:12.249 } 00:29:12.249 EOF 00:29:12.249 )") 00:29:12.249 11:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:12.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:29:12.249 11:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:12.249 11:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:12.249 11:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:12.249 11:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:12.249 11:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:12.249 { 00:29:12.249 "params": { 00:29:12.249 "name": "Nvme$subsystem", 00:29:12.249 "trtype": "$TEST_TRANSPORT", 00:29:12.249 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:12.249 "adrfam": "ipv4", 00:29:12.249 "trsvcid": "$NVMF_PORT", 00:29:12.249 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:12.249 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:12.249 "hdgst": ${hdgst:-false}, 00:29:12.249 "ddgst": ${ddgst:-false} 00:29:12.249 }, 00:29:12.249 "method": "bdev_nvme_attach_controller" 00:29:12.249 } 00:29:12.249 EOF 00:29:12.249 )") 00:29:12.249 11:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:12.249 11:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:12.249 11:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:12.249 { 00:29:12.249 "params": { 00:29:12.249 "name": "Nvme$subsystem", 00:29:12.249 "trtype": "$TEST_TRANSPORT", 00:29:12.249 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:12.249 "adrfam": "ipv4", 00:29:12.249 "trsvcid": "$NVMF_PORT", 00:29:12.249 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:12.249 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:12.249 "hdgst": ${hdgst:-false}, 00:29:12.249 "ddgst": 
${ddgst:-false} 00:29:12.249 }, 00:29:12.249 "method": "bdev_nvme_attach_controller" 00:29:12.249 } 00:29:12.249 EOF 00:29:12.249 )") 00:29:12.249 11:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:12.249 11:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:12.249 11:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:12.249 { 00:29:12.249 "params": { 00:29:12.249 "name": "Nvme$subsystem", 00:29:12.249 "trtype": "$TEST_TRANSPORT", 00:29:12.249 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:12.249 "adrfam": "ipv4", 00:29:12.249 "trsvcid": "$NVMF_PORT", 00:29:12.249 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:12.249 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:12.249 "hdgst": ${hdgst:-false}, 00:29:12.249 "ddgst": ${ddgst:-false} 00:29:12.249 }, 00:29:12.249 "method": "bdev_nvme_attach_controller" 00:29:12.249 } 00:29:12.249 EOF 00:29:12.249 )") 00:29:12.249 11:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:12.249 11:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:12.249 11:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:12.249 { 00:29:12.249 "params": { 00:29:12.249 "name": "Nvme$subsystem", 00:29:12.249 "trtype": "$TEST_TRANSPORT", 00:29:12.249 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:12.249 "adrfam": "ipv4", 00:29:12.249 "trsvcid": "$NVMF_PORT", 00:29:12.249 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:12.250 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:12.250 "hdgst": ${hdgst:-false}, 00:29:12.250 "ddgst": ${ddgst:-false} 00:29:12.250 }, 00:29:12.250 "method": "bdev_nvme_attach_controller" 00:29:12.250 } 00:29:12.250 EOF 00:29:12.250 
)") 00:29:12.250 11:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:12.250 11:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:12.250 11:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:12.250 { 00:29:12.250 "params": { 00:29:12.250 "name": "Nvme$subsystem", 00:29:12.250 "trtype": "$TEST_TRANSPORT", 00:29:12.250 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:12.250 "adrfam": "ipv4", 00:29:12.250 "trsvcid": "$NVMF_PORT", 00:29:12.250 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:12.250 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:12.250 "hdgst": ${hdgst:-false}, 00:29:12.250 "ddgst": ${ddgst:-false} 00:29:12.250 }, 00:29:12.250 "method": "bdev_nvme_attach_controller" 00:29:12.250 } 00:29:12.250 EOF 00:29:12.250 )") 00:29:12.250 11:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:12.250 11:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:12.250 11:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:12.250 { 00:29:12.250 "params": { 00:29:12.250 "name": "Nvme$subsystem", 00:29:12.250 "trtype": "$TEST_TRANSPORT", 00:29:12.250 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:12.250 "adrfam": "ipv4", 00:29:12.250 "trsvcid": "$NVMF_PORT", 00:29:12.250 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:12.250 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:12.250 "hdgst": ${hdgst:-false}, 00:29:12.250 "ddgst": ${ddgst:-false} 00:29:12.250 }, 00:29:12.250 "method": "bdev_nvme_attach_controller" 00:29:12.250 } 00:29:12.250 EOF 00:29:12.250 )") 00:29:12.250 11:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:12.250 
11:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:12.250 11:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:12.250 { 00:29:12.250 "params": { 00:29:12.250 "name": "Nvme$subsystem", 00:29:12.250 "trtype": "$TEST_TRANSPORT", 00:29:12.250 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:12.250 "adrfam": "ipv4", 00:29:12.250 "trsvcid": "$NVMF_PORT", 00:29:12.250 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:12.250 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:12.250 "hdgst": ${hdgst:-false}, 00:29:12.250 "ddgst": ${ddgst:-false} 00:29:12.250 }, 00:29:12.250 "method": "bdev_nvme_attach_controller" 00:29:12.250 } 00:29:12.250 EOF 00:29:12.250 )") 00:29:12.250 11:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:12.250 11:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:12.250 11:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:12.250 { 00:29:12.250 "params": { 00:29:12.250 "name": "Nvme$subsystem", 00:29:12.250 "trtype": "$TEST_TRANSPORT", 00:29:12.250 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:12.250 "adrfam": "ipv4", 00:29:12.250 "trsvcid": "$NVMF_PORT", 00:29:12.250 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:12.250 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:12.250 "hdgst": ${hdgst:-false}, 00:29:12.250 "ddgst": ${ddgst:-false} 00:29:12.250 }, 00:29:12.250 "method": "bdev_nvme_attach_controller" 00:29:12.250 } 00:29:12.250 EOF 00:29:12.250 )") 00:29:12.250 11:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:12.250 11:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 
00:29:12.250 11:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:12.250 { 00:29:12.250 "params": { 00:29:12.250 "name": "Nvme$subsystem", 00:29:12.250 "trtype": "$TEST_TRANSPORT", 00:29:12.250 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:12.250 "adrfam": "ipv4", 00:29:12.250 "trsvcid": "$NVMF_PORT", 00:29:12.250 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:12.250 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:12.250 "hdgst": ${hdgst:-false}, 00:29:12.250 "ddgst": ${ddgst:-false} 00:29:12.250 }, 00:29:12.250 "method": "bdev_nvme_attach_controller" 00:29:12.250 } 00:29:12.250 EOF 00:29:12.250 )") 00:29:12.250 11:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:12.250 11:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:29:12.250 11:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:29:12.250 11:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:12.250 "params": { 00:29:12.250 "name": "Nvme1", 00:29:12.250 "trtype": "tcp", 00:29:12.250 "traddr": "10.0.0.2", 00:29:12.250 "adrfam": "ipv4", 00:29:12.250 "trsvcid": "4420", 00:29:12.250 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:12.250 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:12.250 "hdgst": false, 00:29:12.250 "ddgst": false 00:29:12.250 }, 00:29:12.250 "method": "bdev_nvme_attach_controller" 00:29:12.250 },{ 00:29:12.250 "params": { 00:29:12.250 "name": "Nvme2", 00:29:12.250 "trtype": "tcp", 00:29:12.250 "traddr": "10.0.0.2", 00:29:12.250 "adrfam": "ipv4", 00:29:12.250 "trsvcid": "4420", 00:29:12.250 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:12.250 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:12.250 "hdgst": false, 00:29:12.250 "ddgst": false 00:29:12.250 }, 00:29:12.250 "method": "bdev_nvme_attach_controller" 
00:29:12.250 },{ 00:29:12.250 "params": { 00:29:12.250 "name": "Nvme3", 00:29:12.250 "trtype": "tcp", 00:29:12.250 "traddr": "10.0.0.2", 00:29:12.250 "adrfam": "ipv4", 00:29:12.250 "trsvcid": "4420", 00:29:12.250 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:12.250 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:12.250 "hdgst": false, 00:29:12.250 "ddgst": false 00:29:12.250 }, 00:29:12.250 "method": "bdev_nvme_attach_controller" 00:29:12.250 },{ 00:29:12.250 "params": { 00:29:12.250 "name": "Nvme4", 00:29:12.250 "trtype": "tcp", 00:29:12.250 "traddr": "10.0.0.2", 00:29:12.250 "adrfam": "ipv4", 00:29:12.250 "trsvcid": "4420", 00:29:12.250 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:12.250 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:12.250 "hdgst": false, 00:29:12.250 "ddgst": false 00:29:12.250 }, 00:29:12.250 "method": "bdev_nvme_attach_controller" 00:29:12.250 },{ 00:29:12.250 "params": { 00:29:12.250 "name": "Nvme5", 00:29:12.250 "trtype": "tcp", 00:29:12.250 "traddr": "10.0.0.2", 00:29:12.250 "adrfam": "ipv4", 00:29:12.250 "trsvcid": "4420", 00:29:12.250 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:12.250 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:12.250 "hdgst": false, 00:29:12.250 "ddgst": false 00:29:12.250 }, 00:29:12.250 "method": "bdev_nvme_attach_controller" 00:29:12.250 },{ 00:29:12.250 "params": { 00:29:12.250 "name": "Nvme6", 00:29:12.250 "trtype": "tcp", 00:29:12.250 "traddr": "10.0.0.2", 00:29:12.250 "adrfam": "ipv4", 00:29:12.250 "trsvcid": "4420", 00:29:12.250 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:12.250 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:12.250 "hdgst": false, 00:29:12.250 "ddgst": false 00:29:12.250 }, 00:29:12.250 "method": "bdev_nvme_attach_controller" 00:29:12.250 },{ 00:29:12.250 "params": { 00:29:12.250 "name": "Nvme7", 00:29:12.250 "trtype": "tcp", 00:29:12.250 "traddr": "10.0.0.2", 00:29:12.250 "adrfam": "ipv4", 00:29:12.250 "trsvcid": "4420", 00:29:12.250 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:12.250 
"hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:12.250 "hdgst": false, 00:29:12.250 "ddgst": false 00:29:12.250 }, 00:29:12.250 "method": "bdev_nvme_attach_controller" 00:29:12.250 },{ 00:29:12.250 "params": { 00:29:12.250 "name": "Nvme8", 00:29:12.250 "trtype": "tcp", 00:29:12.250 "traddr": "10.0.0.2", 00:29:12.250 "adrfam": "ipv4", 00:29:12.250 "trsvcid": "4420", 00:29:12.250 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:12.250 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:12.250 "hdgst": false, 00:29:12.250 "ddgst": false 00:29:12.250 }, 00:29:12.250 "method": "bdev_nvme_attach_controller" 00:29:12.250 },{ 00:29:12.250 "params": { 00:29:12.250 "name": "Nvme9", 00:29:12.251 "trtype": "tcp", 00:29:12.251 "traddr": "10.0.0.2", 00:29:12.251 "adrfam": "ipv4", 00:29:12.251 "trsvcid": "4420", 00:29:12.251 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:12.251 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:12.251 "hdgst": false, 00:29:12.251 "ddgst": false 00:29:12.251 }, 00:29:12.251 "method": "bdev_nvme_attach_controller" 00:29:12.251 },{ 00:29:12.251 "params": { 00:29:12.251 "name": "Nvme10", 00:29:12.251 "trtype": "tcp", 00:29:12.251 "traddr": "10.0.0.2", 00:29:12.251 "adrfam": "ipv4", 00:29:12.251 "trsvcid": "4420", 00:29:12.251 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:12.251 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:12.251 "hdgst": false, 00:29:12.251 "ddgst": false 00:29:12.251 }, 00:29:12.251 "method": "bdev_nvme_attach_controller" 00:29:12.251 }' 00:29:12.509 [2024-11-18 11:57:38.178115] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:29:12.509 [2024-11-18 11:57:38.178255] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:29:12.509 [2024-11-18 11:57:38.322615] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:12.768 [2024-11-18 11:57:38.452343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:15.299 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:15.300 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:29:15.300 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:15.300 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.300 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:15.300 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.300 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 3053670 00:29:15.300 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:29:15.300 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:29:16.234 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 3053670 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:29:16.234 11:57:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 3053361 00:29:16.234 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:29:16.234 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:16.234 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:29:16.234 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:29:16.234 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:16.234 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:16.234 { 00:29:16.234 "params": { 00:29:16.234 "name": "Nvme$subsystem", 00:29:16.234 "trtype": "$TEST_TRANSPORT", 00:29:16.234 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:16.234 "adrfam": "ipv4", 00:29:16.234 "trsvcid": "$NVMF_PORT", 00:29:16.234 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:16.234 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:16.234 "hdgst": ${hdgst:-false}, 00:29:16.234 "ddgst": ${ddgst:-false} 00:29:16.234 }, 00:29:16.234 "method": "bdev_nvme_attach_controller" 00:29:16.234 } 00:29:16.234 EOF 00:29:16.234 )") 00:29:16.235 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:16.235 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:16.235 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:16.235 { 00:29:16.235 "params": { 00:29:16.235 
"name": "Nvme$subsystem", 00:29:16.235 "trtype": "$TEST_TRANSPORT", 00:29:16.235 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:16.235 "adrfam": "ipv4", 00:29:16.235 "trsvcid": "$NVMF_PORT", 00:29:16.235 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:16.235 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:16.235 "hdgst": ${hdgst:-false}, 00:29:16.235 "ddgst": ${ddgst:-false} 00:29:16.235 }, 00:29:16.235 "method": "bdev_nvme_attach_controller" 00:29:16.235 } 00:29:16.235 EOF 00:29:16.235 )") 00:29:16.235 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:16.235 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:16.235 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:16.235 { 00:29:16.235 "params": { 00:29:16.235 "name": "Nvme$subsystem", 00:29:16.235 "trtype": "$TEST_TRANSPORT", 00:29:16.235 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:16.235 "adrfam": "ipv4", 00:29:16.235 "trsvcid": "$NVMF_PORT", 00:29:16.235 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:16.235 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:16.235 "hdgst": ${hdgst:-false}, 00:29:16.235 "ddgst": ${ddgst:-false} 00:29:16.235 }, 00:29:16.235 "method": "bdev_nvme_attach_controller" 00:29:16.235 } 00:29:16.235 EOF 00:29:16.235 )") 00:29:16.235 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:16.235 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:16.235 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:16.235 { 00:29:16.235 "params": { 00:29:16.235 "name": "Nvme$subsystem", 00:29:16.235 "trtype": "$TEST_TRANSPORT", 00:29:16.235 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:16.235 
"adrfam": "ipv4", 00:29:16.235 "trsvcid": "$NVMF_PORT", 00:29:16.235 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:16.235 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:16.235 "hdgst": ${hdgst:-false}, 00:29:16.235 "ddgst": ${ddgst:-false} 00:29:16.235 }, 00:29:16.235 "method": "bdev_nvme_attach_controller" 00:29:16.235 } 00:29:16.235 EOF 00:29:16.235 )") 00:29:16.235 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:16.235 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:16.235 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:16.235 { 00:29:16.235 "params": { 00:29:16.235 "name": "Nvme$subsystem", 00:29:16.235 "trtype": "$TEST_TRANSPORT", 00:29:16.235 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:16.235 "adrfam": "ipv4", 00:29:16.235 "trsvcid": "$NVMF_PORT", 00:29:16.235 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:16.235 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:16.235 "hdgst": ${hdgst:-false}, 00:29:16.235 "ddgst": ${ddgst:-false} 00:29:16.235 }, 00:29:16.235 "method": "bdev_nvme_attach_controller" 00:29:16.235 } 00:29:16.235 EOF 00:29:16.235 )") 00:29:16.235 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:16.235 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:16.235 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:16.235 { 00:29:16.235 "params": { 00:29:16.235 "name": "Nvme$subsystem", 00:29:16.235 "trtype": "$TEST_TRANSPORT", 00:29:16.235 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:16.235 "adrfam": "ipv4", 00:29:16.235 "trsvcid": "$NVMF_PORT", 00:29:16.235 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:29:16.235 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:16.235 "hdgst": ${hdgst:-false}, 00:29:16.235 "ddgst": ${ddgst:-false} 00:29:16.235 }, 00:29:16.235 "method": "bdev_nvme_attach_controller" 00:29:16.235 } 00:29:16.235 EOF 00:29:16.235 )") 00:29:16.235 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:16.235 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:16.235 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:16.235 { 00:29:16.235 "params": { 00:29:16.235 "name": "Nvme$subsystem", 00:29:16.235 "trtype": "$TEST_TRANSPORT", 00:29:16.235 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:16.235 "adrfam": "ipv4", 00:29:16.235 "trsvcid": "$NVMF_PORT", 00:29:16.235 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:16.235 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:16.235 "hdgst": ${hdgst:-false}, 00:29:16.235 "ddgst": ${ddgst:-false} 00:29:16.235 }, 00:29:16.235 "method": "bdev_nvme_attach_controller" 00:29:16.235 } 00:29:16.235 EOF 00:29:16.235 )") 00:29:16.235 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:16.235 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:16.235 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:16.235 { 00:29:16.235 "params": { 00:29:16.235 "name": "Nvme$subsystem", 00:29:16.235 "trtype": "$TEST_TRANSPORT", 00:29:16.235 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:16.235 "adrfam": "ipv4", 00:29:16.235 "trsvcid": "$NVMF_PORT", 00:29:16.235 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:16.235 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:16.235 "hdgst": ${hdgst:-false}, 00:29:16.235 "ddgst": 
${ddgst:-false} 00:29:16.235 }, 00:29:16.235 "method": "bdev_nvme_attach_controller" 00:29:16.235 } 00:29:16.235 EOF 00:29:16.235 )") 00:29:16.235 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:16.235 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:16.235 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:16.235 { 00:29:16.235 "params": { 00:29:16.235 "name": "Nvme$subsystem", 00:29:16.235 "trtype": "$TEST_TRANSPORT", 00:29:16.235 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:16.235 "adrfam": "ipv4", 00:29:16.235 "trsvcid": "$NVMF_PORT", 00:29:16.235 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:16.235 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:16.235 "hdgst": ${hdgst:-false}, 00:29:16.235 "ddgst": ${ddgst:-false} 00:29:16.235 }, 00:29:16.235 "method": "bdev_nvme_attach_controller" 00:29:16.235 } 00:29:16.235 EOF 00:29:16.235 )") 00:29:16.235 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:16.235 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:16.235 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:16.235 { 00:29:16.235 "params": { 00:29:16.235 "name": "Nvme$subsystem", 00:29:16.235 "trtype": "$TEST_TRANSPORT", 00:29:16.235 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:16.235 "adrfam": "ipv4", 00:29:16.235 "trsvcid": "$NVMF_PORT", 00:29:16.235 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:16.235 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:16.235 "hdgst": ${hdgst:-false}, 00:29:16.235 "ddgst": ${ddgst:-false} 00:29:16.235 }, 00:29:16.235 "method": "bdev_nvme_attach_controller" 00:29:16.235 } 00:29:16.235 EOF 00:29:16.235 
)") 00:29:16.235 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:16.235 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:29:16.235 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:29:16.235 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:16.235 "params": { 00:29:16.235 "name": "Nvme1", 00:29:16.235 "trtype": "tcp", 00:29:16.235 "traddr": "10.0.0.2", 00:29:16.235 "adrfam": "ipv4", 00:29:16.235 "trsvcid": "4420", 00:29:16.235 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:16.235 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:16.235 "hdgst": false, 00:29:16.235 "ddgst": false 00:29:16.235 }, 00:29:16.235 "method": "bdev_nvme_attach_controller" 00:29:16.235 },{ 00:29:16.235 "params": { 00:29:16.235 "name": "Nvme2", 00:29:16.235 "trtype": "tcp", 00:29:16.235 "traddr": "10.0.0.2", 00:29:16.235 "adrfam": "ipv4", 00:29:16.235 "trsvcid": "4420", 00:29:16.235 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:16.235 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:16.235 "hdgst": false, 00:29:16.235 "ddgst": false 00:29:16.235 }, 00:29:16.235 "method": "bdev_nvme_attach_controller" 00:29:16.235 },{ 00:29:16.235 "params": { 00:29:16.235 "name": "Nvme3", 00:29:16.235 "trtype": "tcp", 00:29:16.235 "traddr": "10.0.0.2", 00:29:16.235 "adrfam": "ipv4", 00:29:16.236 "trsvcid": "4420", 00:29:16.236 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:16.236 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:16.236 "hdgst": false, 00:29:16.236 "ddgst": false 00:29:16.236 }, 00:29:16.236 "method": "bdev_nvme_attach_controller" 00:29:16.236 },{ 00:29:16.236 "params": { 00:29:16.236 "name": "Nvme4", 00:29:16.236 "trtype": "tcp", 00:29:16.236 "traddr": "10.0.0.2", 00:29:16.236 "adrfam": "ipv4", 00:29:16.236 "trsvcid": "4420", 00:29:16.236 "subnqn": "nqn.2016-06.io.spdk:cnode4", 
00:29:16.236 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:16.236 "hdgst": false, 00:29:16.236 "ddgst": false 00:29:16.236 }, 00:29:16.236 "method": "bdev_nvme_attach_controller" 00:29:16.236 },{ 00:29:16.236 "params": { 00:29:16.236 "name": "Nvme5", 00:29:16.236 "trtype": "tcp", 00:29:16.236 "traddr": "10.0.0.2", 00:29:16.236 "adrfam": "ipv4", 00:29:16.236 "trsvcid": "4420", 00:29:16.236 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:16.236 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:16.236 "hdgst": false, 00:29:16.236 "ddgst": false 00:29:16.236 }, 00:29:16.236 "method": "bdev_nvme_attach_controller" 00:29:16.236 },{ 00:29:16.236 "params": { 00:29:16.236 "name": "Nvme6", 00:29:16.236 "trtype": "tcp", 00:29:16.236 "traddr": "10.0.0.2", 00:29:16.236 "adrfam": "ipv4", 00:29:16.236 "trsvcid": "4420", 00:29:16.236 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:16.236 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:16.236 "hdgst": false, 00:29:16.236 "ddgst": false 00:29:16.236 }, 00:29:16.236 "method": "bdev_nvme_attach_controller" 00:29:16.236 },{ 00:29:16.236 "params": { 00:29:16.236 "name": "Nvme7", 00:29:16.236 "trtype": "tcp", 00:29:16.236 "traddr": "10.0.0.2", 00:29:16.236 "adrfam": "ipv4", 00:29:16.236 "trsvcid": "4420", 00:29:16.236 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:16.236 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:16.236 "hdgst": false, 00:29:16.236 "ddgst": false 00:29:16.236 }, 00:29:16.236 "method": "bdev_nvme_attach_controller" 00:29:16.236 },{ 00:29:16.236 "params": { 00:29:16.236 "name": "Nvme8", 00:29:16.236 "trtype": "tcp", 00:29:16.236 "traddr": "10.0.0.2", 00:29:16.236 "adrfam": "ipv4", 00:29:16.236 "trsvcid": "4420", 00:29:16.236 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:16.236 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:16.236 "hdgst": false, 00:29:16.236 "ddgst": false 00:29:16.236 }, 00:29:16.236 "method": "bdev_nvme_attach_controller" 00:29:16.236 },{ 00:29:16.236 "params": { 00:29:16.236 "name": "Nvme9", 00:29:16.236 
"trtype": "tcp", 00:29:16.236 "traddr": "10.0.0.2", 00:29:16.236 "adrfam": "ipv4", 00:29:16.236 "trsvcid": "4420", 00:29:16.236 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:16.236 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:16.236 "hdgst": false, 00:29:16.236 "ddgst": false 00:29:16.236 }, 00:29:16.236 "method": "bdev_nvme_attach_controller" 00:29:16.236 },{ 00:29:16.236 "params": { 00:29:16.236 "name": "Nvme10", 00:29:16.236 "trtype": "tcp", 00:29:16.236 "traddr": "10.0.0.2", 00:29:16.236 "adrfam": "ipv4", 00:29:16.236 "trsvcid": "4420", 00:29:16.236 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:16.236 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:16.236 "hdgst": false, 00:29:16.236 "ddgst": false 00:29:16.236 }, 00:29:16.236 "method": "bdev_nvme_attach_controller" 00:29:16.236 }' 00:29:16.236 [2024-11-18 11:57:42.011148] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:29:16.236 [2024-11-18 11:57:42.011282] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3054140 ] 00:29:16.494 [2024-11-18 11:57:42.152528] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:16.494 [2024-11-18 11:57:42.280660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:18.398 Running I/O for 1 seconds... 
00:29:19.336 1349.00 IOPS, 84.31 MiB/s 00:29:19.336 Latency(us) 00:29:19.336 [2024-11-18T10:57:45.221Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:19.336 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:19.336 Verification LBA range: start 0x0 length 0x400 00:29:19.336 Nvme1n1 : 1.13 169.31 10.58 0.00 0.00 373842.43 24466.77 310689.19 00:29:19.336 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:19.336 Verification LBA range: start 0x0 length 0x400 00:29:19.336 Nvme2n1 : 1.15 166.94 10.43 0.00 0.00 372897.06 23398.78 318456.41 00:29:19.336 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:19.336 Verification LBA range: start 0x0 length 0x400 00:29:19.336 Nvme3n1 : 1.18 216.04 13.50 0.00 0.00 283295.67 27962.03 307582.29 00:29:19.336 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:19.336 Verification LBA range: start 0x0 length 0x400 00:29:19.336 Nvme4n1 : 1.19 214.93 13.43 0.00 0.00 279924.62 22427.88 330883.98 00:29:19.336 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:19.336 Verification LBA range: start 0x0 length 0x400 00:29:19.336 Nvme5n1 : 1.20 212.91 13.31 0.00 0.00 277762.65 21554.06 324670.20 00:29:19.336 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:19.336 Verification LBA range: start 0x0 length 0x400 00:29:19.336 Nvme6n1 : 1.20 213.62 13.35 0.00 0.00 270567.73 39418.69 285834.05 00:29:19.336 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:19.336 Verification LBA range: start 0x0 length 0x400 00:29:19.336 Nvme7n1 : 1.12 170.74 10.67 0.00 0.00 331566.14 22524.97 335544.32 00:29:19.336 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:19.336 Verification LBA range: start 0x0 length 0x400 00:29:19.336 Nvme8n1 : 1.14 167.75 10.48 0.00 0.00 331739.34 19320.98 295154.73 
00:29:19.336 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:19.336 Verification LBA range: start 0x0 length 0x400 00:29:19.336 Nvme9n1 : 1.17 164.64 10.29 0.00 0.00 332517.33 23884.23 324670.20 00:29:19.336 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:19.336 Verification LBA range: start 0x0 length 0x400 00:29:19.336 Nvme10n1 : 1.19 167.38 10.46 0.00 0.00 318059.44 6602.15 347971.89 00:29:19.336 [2024-11-18T10:57:45.221Z] =================================================================================================================== 00:29:19.336 [2024-11-18T10:57:45.221Z] Total : 1864.26 116.52 0.00 0.00 312610.27 6602.15 347971.89 00:29:20.274 11:57:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:29:20.274 11:57:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:29:20.274 11:57:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:20.274 11:57:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:20.274 11:57:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:29:20.274 11:57:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:20.274 11:57:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:29:20.274 11:57:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:20.274 11:57:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:29:20.274 11:57:46 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:20.274 11:57:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:20.274 rmmod nvme_tcp 00:29:20.274 rmmod nvme_fabrics 00:29:20.274 rmmod nvme_keyring 00:29:20.274 11:57:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:20.274 11:57:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:29:20.274 11:57:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:29:20.274 11:57:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 3053361 ']' 00:29:20.274 11:57:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 3053361 00:29:20.274 11:57:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 3053361 ']' 00:29:20.274 11:57:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 3053361 00:29:20.274 11:57:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:29:20.532 11:57:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:20.532 11:57:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3053361 00:29:20.532 11:57:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:20.532 11:57:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:20.532 11:57:46 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3053361' 00:29:20.532 killing process with pid 3053361 00:29:20.532 11:57:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 3053361 00:29:20.532 11:57:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 3053361 00:29:23.069 11:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:23.069 11:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:23.069 11:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:23.069 11:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:29:23.069 11:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:29:23.069 11:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:23.069 11:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:29:23.069 11:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:23.069 11:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:23.069 11:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:23.069 11:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:23.069 11:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:25.603 11:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:25.603 00:29:25.603 real 0m17.208s 00:29:25.603 user 0m55.669s 00:29:25.603 sys 0m3.845s 00:29:25.603 11:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:25.603 11:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:25.603 ************************************ 00:29:25.603 END TEST nvmf_shutdown_tc1 00:29:25.603 ************************************ 00:29:25.603 11:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:29:25.603 11:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:25.603 11:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:25.603 11:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:25.603 ************************************ 00:29:25.603 START TEST nvmf_shutdown_tc2 00:29:25.603 ************************************ 00:29:25.603 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:29:25.603 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:29:25.603 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:29:25.603 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:25.603 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:25.603 11:57:51 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:25.603 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:25.603 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:25.603 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:25.603 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:25.604 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:25.604 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:25.604 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:25.604 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:25.604 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:25.604 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:25.604 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:25.604 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:25.604 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:25.604 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:25.604 11:57:51 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:25.604 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:25.604 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:29:25.604 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:25.604 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:29:25.604 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:29:25.604 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:29:25.604 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:29:25.604 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:29:25.604 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:25.604 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:25.604 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:25.604 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:25.604 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:25.604 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:25.604 11:57:51 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:25.604 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:25.604 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:25.604 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:25.604 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:25.604 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:25.604 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:25.604 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:25.604 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:25.604 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:25.604 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:25.604 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:25.604 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:25.604 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:25.604 11:57:51 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:25.604 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:25.604 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:25.604 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:25.604 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:25.604 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:25.604 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:25.604 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:25.604 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:25.604 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:25.604 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:25.604 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:25.604 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:25.604 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:25.604 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:25.604 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:25.604 11:57:51 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:25.604 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:25.604 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:25.604 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:25.604 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:25.604 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:25.604 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:25.604 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:25.604 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:25.604 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:25.604 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:25.604 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:25.604 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:25.604 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:25.604 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:25.604 11:57:51 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:25.604 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:25.604 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:25.604 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:25.604 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:25.604 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:25.604 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:25.604 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:25.604 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:29:25.604 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:25.604 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:25.604 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:25.604 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:25.604 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:25.604 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:25.604 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:25.604 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:25.604 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:25.604 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:25.604 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:25.604 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:25.604 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:25.604 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:25.604 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:25.604 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:25.604 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:25.604 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:25.605 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:25.605 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:25.605 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:25.605 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:25.605 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:25.605 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:25.605 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:25.605 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:25.605 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:25.605 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.244 ms 00:29:25.605 00:29:25.605 --- 10.0.0.2 ping statistics --- 00:29:25.605 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:25.605 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:29:25.605 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:25.605 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:25.605 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.094 ms 00:29:25.605 00:29:25.605 --- 10.0.0.1 ping statistics --- 00:29:25.605 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:25.605 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:29:25.605 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:25.605 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:29:25.605 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:25.605 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:25.605 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:25.605 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:25.605 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:25.605 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:25.605 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:25.605 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:25.605 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:25.605 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:25.605 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:25.605 
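The `nvmf_tcp_init` sequence above wires a two-port NIC into a loopback test bed: one port (`cvl_0_0`) is moved into a network namespace to act as the target at 10.0.0.2, while its peer (`cvl_0_1`) stays in the root namespace as the initiator at 10.0.0.1, with an iptables accept rule for the NVMe/TCP port and a ping in each direction to verify the link. The sketch below replays those steps from the log; it defaults to a dry run (printing the commands) since the real thing needs root and the `cvl_0_*` devices, and the `nvmf_tcp_wire` function name is ours, not from nvmf/common.sh.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the nvmf_tcp_init wiring shown in the log above.
# All ip/iptables commands are copied from the transcript; DRY_RUN=1
# (the default) prints them instead of executing.
nvmf_tcp_wire() {
	local target_if=cvl_0_0 initiator_if=cvl_0_1 ns=cvl_0_0_ns_spdk
	run() { if [ "${DRY_RUN:-1}" = 1 ]; then echo "+ $*"; else "$@"; fi; }

	# Start from a clean slate on both ports
	run ip -4 addr flush "$target_if"
	run ip -4 addr flush "$initiator_if"

	# Target port lives in its own namespace; initiator stays in the root ns
	run ip netns add "$ns"
	run ip link set "$target_if" netns "$ns"
	run ip addr add 10.0.0.1/24 dev "$initiator_if"
	run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
	run ip link set "$initiator_if" up
	run ip netns exec "$ns" ip link set "$target_if" up
	run ip netns exec "$ns" ip link set lo up

	# Let NVMe/TCP traffic in (the log's ipts wrapper also tags the rule
	# with an SPDK_NVMF comment for later cleanup)
	run iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT

	# Verify connectivity both ways, as the log does
	run ping -c 1 10.0.0.2                      # initiator -> target
	run ip netns exec "$ns" ping -c 1 10.0.0.1  # target -> initiator
}

nvmf_tcp_wire
```

With `DRY_RUN=0` and root privileges this would perform the same setup; subsequent target processes are then launched under `ip netns exec cvl_0_0_ns_spdk`, which is exactly what the `NVMF_TARGET_NS_CMD` array captured above is for.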
11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3055371 00:29:25.605 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:25.605 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3055371 00:29:25.605 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3055371 ']' 00:29:25.605 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:25.605 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:25.605 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:25.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:25.605 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:25.605 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:25.605 [2024-11-18 11:57:51.419432] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:29:25.605 [2024-11-18 11:57:51.419608] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:25.863 [2024-11-18 11:57:51.568045] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:25.863 [2024-11-18 11:57:51.709313] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:25.863 [2024-11-18 11:57:51.709405] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:25.863 [2024-11-18 11:57:51.709431] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:25.863 [2024-11-18 11:57:51.709455] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:25.863 [2024-11-18 11:57:51.709475] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:25.863 [2024-11-18 11:57:51.712546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:25.863 [2024-11-18 11:57:51.712647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:25.863 [2024-11-18 11:57:51.712693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:25.863 [2024-11-18 11:57:51.712700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:26.797 11:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:26.797 11:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:29:26.797 11:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:26.797 11:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:26.797 11:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:26.797 11:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:26.797 11:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:26.797 11:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.797 11:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:26.797 [2024-11-18 11:57:52.423730] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:26.797 11:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.797 11:57:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:26.797 11:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:26.797 11:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:26.797 11:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:26.797 11:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:26.797 11:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:26.797 11:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:26.797 11:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:26.797 11:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:26.797 11:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:26.797 11:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:26.797 11:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:26.797 11:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:26.797 11:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:26.797 11:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
00:29:26.797 11:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:26.797 11:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:26.797 11:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:26.797 11:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:26.797 11:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:26.797 11:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:26.797 11:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:26.797 11:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:26.797 11:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:26.797 11:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:26.797 11:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:29:26.797 11:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.797 11:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:26.797 Malloc1 00:29:26.797 [2024-11-18 11:57:52.558702] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:26.797 Malloc2 00:29:27.057 Malloc3 00:29:27.057 Malloc4 00:29:27.057 Malloc5 00:29:27.317 Malloc6 00:29:27.317 Malloc7 00:29:27.577 Malloc8 00:29:27.577 Malloc9 
00:29:27.577 Malloc10 00:29:27.577 11:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.577 11:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:27.577 11:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:27.577 11:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:27.835 11:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=3055680 00:29:27.835 11:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 3055680 /var/tmp/bdevperf.sock 00:29:27.835 11:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3055680 ']' 00:29:27.835 11:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:27.835 11:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:27.835 11:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:29:27.836 11:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:27.836 11:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:29:27.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:27.836 11:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:29:27.836 11:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:27.836 11:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:29:27.836 11:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:27.836 11:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:27.836 11:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:27.836 { 00:29:27.836 "params": { 00:29:27.836 "name": "Nvme$subsystem", 00:29:27.836 "trtype": "$TEST_TRANSPORT", 00:29:27.836 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:27.836 "adrfam": "ipv4", 00:29:27.836 "trsvcid": "$NVMF_PORT", 00:29:27.836 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:27.836 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:27.836 "hdgst": ${hdgst:-false}, 00:29:27.836 "ddgst": ${ddgst:-false} 00:29:27.836 }, 00:29:27.836 "method": "bdev_nvme_attach_controller" 00:29:27.836 } 00:29:27.836 EOF 00:29:27.836 )") 00:29:27.836 11:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:27.836 11:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:27.836 11:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:27.836 { 00:29:27.836 "params": { 00:29:27.836 "name": "Nvme$subsystem", 00:29:27.836 "trtype": "$TEST_TRANSPORT", 00:29:27.836 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:27.836 
"adrfam": "ipv4", 00:29:27.836 "trsvcid": "$NVMF_PORT", 00:29:27.836 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:27.836 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:27.836 "hdgst": ${hdgst:-false}, 00:29:27.836 "ddgst": ${ddgst:-false} 00:29:27.836 }, 00:29:27.836 "method": "bdev_nvme_attach_controller" 00:29:27.836 } 00:29:27.836 EOF 00:29:27.836 )") 00:29:27.836 11:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:27.836 11:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:27.836 11:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:27.836 { 00:29:27.836 "params": { 00:29:27.836 "name": "Nvme$subsystem", 00:29:27.836 "trtype": "$TEST_TRANSPORT", 00:29:27.836 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:27.836 "adrfam": "ipv4", 00:29:27.836 "trsvcid": "$NVMF_PORT", 00:29:27.836 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:27.836 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:27.836 "hdgst": ${hdgst:-false}, 00:29:27.836 "ddgst": ${ddgst:-false} 00:29:27.836 }, 00:29:27.836 "method": "bdev_nvme_attach_controller" 00:29:27.836 } 00:29:27.836 EOF 00:29:27.836 )") 00:29:27.836 11:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:27.836 11:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:27.836 11:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:27.836 { 00:29:27.836 "params": { 00:29:27.836 "name": "Nvme$subsystem", 00:29:27.836 "trtype": "$TEST_TRANSPORT", 00:29:27.836 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:27.836 "adrfam": "ipv4", 00:29:27.836 "trsvcid": "$NVMF_PORT", 00:29:27.836 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:29:27.836 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:27.836 "hdgst": ${hdgst:-false}, 00:29:27.836 "ddgst": ${ddgst:-false} 00:29:27.836 }, 00:29:27.836 "method": "bdev_nvme_attach_controller" 00:29:27.836 } 00:29:27.836 EOF 00:29:27.836 )") 00:29:27.836 11:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:27.836 11:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:27.836 11:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:27.836 { 00:29:27.836 "params": { 00:29:27.836 "name": "Nvme$subsystem", 00:29:27.836 "trtype": "$TEST_TRANSPORT", 00:29:27.836 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:27.836 "adrfam": "ipv4", 00:29:27.836 "trsvcid": "$NVMF_PORT", 00:29:27.836 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:27.836 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:27.836 "hdgst": ${hdgst:-false}, 00:29:27.836 "ddgst": ${ddgst:-false} 00:29:27.836 }, 00:29:27.836 "method": "bdev_nvme_attach_controller" 00:29:27.836 } 00:29:27.836 EOF 00:29:27.836 )") 00:29:27.836 11:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:27.836 11:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:27.836 11:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:27.836 { 00:29:27.836 "params": { 00:29:27.836 "name": "Nvme$subsystem", 00:29:27.836 "trtype": "$TEST_TRANSPORT", 00:29:27.836 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:27.836 "adrfam": "ipv4", 00:29:27.836 "trsvcid": "$NVMF_PORT", 00:29:27.836 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:27.836 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:27.836 "hdgst": ${hdgst:-false}, 00:29:27.836 "ddgst": 
${ddgst:-false} 00:29:27.836 }, 00:29:27.836 "method": "bdev_nvme_attach_controller" 00:29:27.836 } 00:29:27.836 EOF 00:29:27.836 )") 00:29:27.836 11:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:27.836 11:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:27.836 11:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:27.836 { 00:29:27.836 "params": { 00:29:27.836 "name": "Nvme$subsystem", 00:29:27.836 "trtype": "$TEST_TRANSPORT", 00:29:27.836 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:27.836 "adrfam": "ipv4", 00:29:27.836 "trsvcid": "$NVMF_PORT", 00:29:27.836 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:27.836 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:27.836 "hdgst": ${hdgst:-false}, 00:29:27.836 "ddgst": ${ddgst:-false} 00:29:27.836 }, 00:29:27.836 "method": "bdev_nvme_attach_controller" 00:29:27.836 } 00:29:27.836 EOF 00:29:27.836 )") 00:29:27.836 11:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:27.836 11:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:27.836 11:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:27.836 { 00:29:27.836 "params": { 00:29:27.836 "name": "Nvme$subsystem", 00:29:27.836 "trtype": "$TEST_TRANSPORT", 00:29:27.836 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:27.836 "adrfam": "ipv4", 00:29:27.836 "trsvcid": "$NVMF_PORT", 00:29:27.836 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:27.836 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:27.836 "hdgst": ${hdgst:-false}, 00:29:27.836 "ddgst": ${ddgst:-false} 00:29:27.836 }, 00:29:27.836 "method": "bdev_nvme_attach_controller" 00:29:27.836 } 00:29:27.836 EOF 00:29:27.836 
)") 00:29:27.836 11:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:27.836 11:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:27.836 11:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:27.836 { 00:29:27.836 "params": { 00:29:27.836 "name": "Nvme$subsystem", 00:29:27.836 "trtype": "$TEST_TRANSPORT", 00:29:27.836 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:27.836 "adrfam": "ipv4", 00:29:27.836 "trsvcid": "$NVMF_PORT", 00:29:27.836 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:27.836 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:27.836 "hdgst": ${hdgst:-false}, 00:29:27.836 "ddgst": ${ddgst:-false} 00:29:27.836 }, 00:29:27.836 "method": "bdev_nvme_attach_controller" 00:29:27.836 } 00:29:27.836 EOF 00:29:27.836 )") 00:29:27.836 11:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:27.836 11:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:27.836 11:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:27.836 { 00:29:27.836 "params": { 00:29:27.836 "name": "Nvme$subsystem", 00:29:27.836 "trtype": "$TEST_TRANSPORT", 00:29:27.836 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:27.836 "adrfam": "ipv4", 00:29:27.836 "trsvcid": "$NVMF_PORT", 00:29:27.836 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:27.836 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:27.836 "hdgst": ${hdgst:-false}, 00:29:27.836 "ddgst": ${ddgst:-false} 00:29:27.836 }, 00:29:27.836 "method": "bdev_nvme_attach_controller" 00:29:27.836 } 00:29:27.836 EOF 00:29:27.836 )") 00:29:27.837 11:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:27.837 
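The repeated `config+=("$(cat <<-EOF ...)")` blocks above are `gen_nvmf_target_json` building one `bdev_nvme_attach_controller` entry per subsystem (1..10), which are then comma-joined via `IFS=,` and `jq` into the `--json` payload fed to bdevperf. A condensed sketch of that assembly, using the same field values the log's expanded output shows (10.0.0.2:4420, cnode/host NQNs, digests off); the `gen_config` helper name is ours and the real script templates the values from the environment rather than hard-coding them:

```shell
#!/usr/bin/env bash
# Sketch of the per-subsystem config assembly done by gen_nvmf_target_json.
# Each argument yields one attach-controller block; blocks are joined with
# commas, mirroring the log's IFS=, / printf step.
gen_config() {
	local blocks=() i
	for i in "$@"; do
		blocks+=("$(printf '{"params":{"name":"Nvme%s","trtype":"tcp","traddr":"10.0.0.2","adrfam":"ipv4","trsvcid":"4420","subnqn":"nqn.2016-06.io.spdk:cnode%s","hostnqn":"nqn.2016-06.io.spdk:host%s","hdgst":false,"ddgst":false},"method":"bdev_nvme_attach_controller"}' "$i" "$i" "$i")")
	done
	local IFS=,
	echo "${blocks[*]}"
}

# Emits a comma-joined list of controller configs for subsystems 1 and 2
gen_config 1 2
```

In the real flow this joined list is wrapped by `jq` into a full JSON document and passed to bdevperf over `/dev/fd/63`, so each Nvme1..Nvme10 controller attaches to its matching cnode over the listener set up earlier.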
11:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 00:29:27.837 11:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:29:27.837 11:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:27.837 "params": { 00:29:27.837 "name": "Nvme1", 00:29:27.837 "trtype": "tcp", 00:29:27.837 "traddr": "10.0.0.2", 00:29:27.837 "adrfam": "ipv4", 00:29:27.837 "trsvcid": "4420", 00:29:27.837 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:27.837 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:27.837 "hdgst": false, 00:29:27.837 "ddgst": false 00:29:27.837 }, 00:29:27.837 "method": "bdev_nvme_attach_controller" 00:29:27.837 },{ 00:29:27.837 "params": { 00:29:27.837 "name": "Nvme2", 00:29:27.837 "trtype": "tcp", 00:29:27.837 "traddr": "10.0.0.2", 00:29:27.837 "adrfam": "ipv4", 00:29:27.837 "trsvcid": "4420", 00:29:27.837 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:27.837 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:27.837 "hdgst": false, 00:29:27.837 "ddgst": false 00:29:27.837 }, 00:29:27.837 "method": "bdev_nvme_attach_controller" 00:29:27.837 },{ 00:29:27.837 "params": { 00:29:27.837 "name": "Nvme3", 00:29:27.837 "trtype": "tcp", 00:29:27.837 "traddr": "10.0.0.2", 00:29:27.837 "adrfam": "ipv4", 00:29:27.837 "trsvcid": "4420", 00:29:27.837 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:27.837 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:27.837 "hdgst": false, 00:29:27.837 "ddgst": false 00:29:27.837 }, 00:29:27.837 "method": "bdev_nvme_attach_controller" 00:29:27.837 },{ 00:29:27.837 "params": { 00:29:27.837 "name": "Nvme4", 00:29:27.837 "trtype": "tcp", 00:29:27.837 "traddr": "10.0.0.2", 00:29:27.837 "adrfam": "ipv4", 00:29:27.837 "trsvcid": "4420", 00:29:27.837 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:27.837 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:27.837 "hdgst": false, 00:29:27.837 "ddgst": false 00:29:27.837 }, 
00:29:27.837 "method": "bdev_nvme_attach_controller" 00:29:27.837 },{ 00:29:27.837 "params": { 00:29:27.837 "name": "Nvme5", 00:29:27.837 "trtype": "tcp", 00:29:27.837 "traddr": "10.0.0.2", 00:29:27.837 "adrfam": "ipv4", 00:29:27.837 "trsvcid": "4420", 00:29:27.837 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:27.837 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:27.837 "hdgst": false, 00:29:27.837 "ddgst": false 00:29:27.837 }, 00:29:27.837 "method": "bdev_nvme_attach_controller" 00:29:27.837 },{ 00:29:27.837 "params": { 00:29:27.837 "name": "Nvme6", 00:29:27.837 "trtype": "tcp", 00:29:27.837 "traddr": "10.0.0.2", 00:29:27.837 "adrfam": "ipv4", 00:29:27.837 "trsvcid": "4420", 00:29:27.837 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:27.837 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:27.837 "hdgst": false, 00:29:27.837 "ddgst": false 00:29:27.837 }, 00:29:27.837 "method": "bdev_nvme_attach_controller" 00:29:27.837 },{ 00:29:27.837 "params": { 00:29:27.837 "name": "Nvme7", 00:29:27.837 "trtype": "tcp", 00:29:27.837 "traddr": "10.0.0.2", 00:29:27.837 "adrfam": "ipv4", 00:29:27.837 "trsvcid": "4420", 00:29:27.837 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:27.837 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:27.837 "hdgst": false, 00:29:27.837 "ddgst": false 00:29:27.837 }, 00:29:27.837 "method": "bdev_nvme_attach_controller" 00:29:27.837 },{ 00:29:27.837 "params": { 00:29:27.837 "name": "Nvme8", 00:29:27.837 "trtype": "tcp", 00:29:27.837 "traddr": "10.0.0.2", 00:29:27.837 "adrfam": "ipv4", 00:29:27.837 "trsvcid": "4420", 00:29:27.837 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:27.837 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:27.837 "hdgst": false, 00:29:27.837 "ddgst": false 00:29:27.837 }, 00:29:27.837 "method": "bdev_nvme_attach_controller" 00:29:27.837 },{ 00:29:27.837 "params": { 00:29:27.837 "name": "Nvme9", 00:29:27.837 "trtype": "tcp", 00:29:27.837 "traddr": "10.0.0.2", 00:29:27.837 "adrfam": "ipv4", 00:29:27.837 "trsvcid": "4420", 00:29:27.837 
"subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:27.837 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:27.837 "hdgst": false, 00:29:27.837 "ddgst": false 00:29:27.837 }, 00:29:27.837 "method": "bdev_nvme_attach_controller" 00:29:27.837 },{ 00:29:27.837 "params": { 00:29:27.837 "name": "Nvme10", 00:29:27.837 "trtype": "tcp", 00:29:27.837 "traddr": "10.0.0.2", 00:29:27.837 "adrfam": "ipv4", 00:29:27.837 "trsvcid": "4420", 00:29:27.837 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:27.837 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:27.837 "hdgst": false, 00:29:27.837 "ddgst": false 00:29:27.837 }, 00:29:27.837 "method": "bdev_nvme_attach_controller" 00:29:27.837 }' 00:29:27.837 [2024-11-18 11:57:53.573509] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:29:27.837 [2024-11-18 11:57:53.573650] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3055680 ] 00:29:27.837 [2024-11-18 11:57:53.714698] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:28.095 [2024-11-18 11:57:53.844609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:30.628 Running I/O for 10 seconds... 
00:29:30.628 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:30.628 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:29:30.628 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:30.628 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.628 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:30.628 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.628 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:29:30.628 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:29:30.628 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:29:30.628 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:29:30.628 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:29:30.628 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:29:30.628 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:30.628 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:30.628 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:30.628 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.628 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:30.628 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.628 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:29:30.628 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:29:30.629 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:29:30.887 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:29:30.887 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:30.887 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:30.887 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:30.887 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.887 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:30.887 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.887 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:29:30.887 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:29:30.887 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:29:31.146 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:29:31.146 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:31.146 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:31.146 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:31.146 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.146 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:31.146 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.146 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:29:31.146 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:29:31.146 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:29:31.146 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:29:31.146 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:29:31.146 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 3055680 00:29:31.146 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 3055680 
']' 00:29:31.146 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 3055680 00:29:31.146 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:29:31.146 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:31.146 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3055680 00:29:31.146 11:57:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:31.146 11:57:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:31.146 11:57:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3055680' 00:29:31.146 killing process with pid 3055680 00:29:31.146 11:57:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 3055680 00:29:31.146 11:57:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 3055680 00:29:31.406 1412.00 IOPS, 88.25 MiB/s [2024-11-18T10:57:57.291Z] Received shutdown signal, test time was about 1.098254 seconds 00:29:31.406 00:29:31.406 Latency(us) 00:29:31.406 [2024-11-18T10:57:57.291Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:31.406 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:31.406 Verification LBA range: start 0x0 length 0x400 00:29:31.406 Nvme1n1 : 1.05 182.83 11.43 0.00 0.00 345643.49 21651.15 304475.40 00:29:31.406 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:31.406 Verification LBA range: start 0x0 length 0x400 00:29:31.406 Nvme2n1 : 1.02 187.64 
11.73 0.00 0.00 330407.63 40195.41 296708.17 00:29:31.406 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:31.406 Verification LBA range: start 0x0 length 0x400 00:29:31.406 Nvme3n1 : 1.10 233.28 14.58 0.00 0.00 261174.61 23884.23 302921.96 00:29:31.406 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:31.406 Verification LBA range: start 0x0 length 0x400 00:29:31.406 Nvme4n1 : 1.09 234.32 14.65 0.00 0.00 255097.74 23010.42 296708.17 00:29:31.406 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:31.406 Verification LBA range: start 0x0 length 0x400 00:29:31.406 Nvme5n1 : 1.07 183.86 11.49 0.00 0.00 317104.89 1686.95 307582.29 00:29:31.406 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:31.406 Verification LBA range: start 0x0 length 0x400 00:29:31.406 Nvme6n1 : 1.07 179.00 11.19 0.00 0.00 320495.82 26991.12 307582.29 00:29:31.406 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:31.406 Verification LBA range: start 0x0 length 0x400 00:29:31.406 Nvme7n1 : 1.04 184.35 11.52 0.00 0.00 303410.19 23204.60 301368.51 00:29:31.406 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:31.406 Verification LBA range: start 0x0 length 0x400 00:29:31.406 Nvme8n1 : 1.03 186.67 11.67 0.00 0.00 292497.13 23204.60 298261.62 00:29:31.406 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:31.406 Verification LBA range: start 0x0 length 0x400 00:29:31.406 Nvme9n1 : 1.08 178.02 11.13 0.00 0.00 302713.11 26991.12 313796.08 00:29:31.406 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:31.406 Verification LBA range: start 0x0 length 0x400 00:29:31.406 Nvme10n1 : 1.08 177.40 11.09 0.00 0.00 297548.10 25049.32 341758.10 00:29:31.406 [2024-11-18T10:57:57.291Z] 
=================================================================================================================== 00:29:31.406 [2024-11-18T10:57:57.291Z] Total : 1927.38 120.46 0.00 0.00 299863.38 1686.95 341758.10 00:29:32.339 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:29:33.284 11:57:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 3055371 00:29:33.284 11:57:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:29:33.284 11:57:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:29:33.284 11:57:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:33.284 11:57:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:33.284 11:57:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:29:33.284 11:57:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:33.284 11:57:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:29:33.284 11:57:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:33.284 11:57:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:29:33.284 11:57:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:33.284 11:57:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:33.284 
rmmod nvme_tcp 00:29:33.284 rmmod nvme_fabrics 00:29:33.284 rmmod nvme_keyring 00:29:33.284 11:57:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:33.284 11:57:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:29:33.284 11:57:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:29:33.284 11:57:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 3055371 ']' 00:29:33.284 11:57:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 3055371 00:29:33.284 11:57:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 3055371 ']' 00:29:33.284 11:57:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 3055371 00:29:33.284 11:57:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:29:33.284 11:57:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:33.284 11:57:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3055371 00:29:33.543 11:57:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:33.543 11:57:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:33.543 11:57:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3055371' 00:29:33.543 killing process with pid 3055371 00:29:33.543 11:57:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 
-- # kill 3055371 00:29:33.543 11:57:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 3055371 00:29:36.073 11:58:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:36.073 11:58:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:36.073 11:58:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:36.073 11:58:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:29:36.073 11:58:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:29:36.073 11:58:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:36.073 11:58:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:29:36.073 11:58:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:36.073 11:58:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:36.073 11:58:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:36.073 11:58:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:36.073 11:58:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:38.611 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:38.611 00:29:38.611 real 0m12.971s 00:29:38.611 user 0m43.801s 00:29:38.611 sys 0m2.140s 00:29:38.611 11:58:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:38.611 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:38.611 ************************************ 00:29:38.611 END TEST nvmf_shutdown_tc2 00:29:38.611 ************************************ 00:29:38.611 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:29:38.611 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:38.611 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:38.611 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:38.611 ************************************ 00:29:38.611 START TEST nvmf_shutdown_tc3 00:29:38.611 ************************************ 00:29:38.611 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:29:38.611 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:29:38.611 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:29:38.611 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:38.611 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:38.611 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:38.611 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:38.611 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@440 -- # remove_spdk_ns 00:29:38.611 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:38.611 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:38.611 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:38.611 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:38.611 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:38.611 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:38.611 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:38.611 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:38.611 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:38.611 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:38.611 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:38.611 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:38.611 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:38.611 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:38.611 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 
-- # net_devs=() 00:29:38.611 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:38.611 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:29:38.611 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:29:38.611 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:29:38.611 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:29:38.611 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:29:38.611 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:38.611 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:38.611 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:38.611 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:38.611 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:38.611 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:38.611 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:38.611 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:38.611 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 
-- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:38.611 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:38.611 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:38.611 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:38.611 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:38.611 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:38.611 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:38.611 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:38.611 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:38.611 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:38.611 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:38.611 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:38.611 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:38.611 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:38.611 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:38.611 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:38.611 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:38.611 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:38.611 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:38.611 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:38.611 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:38.611 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:38.611 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:38.611 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:38.611 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:38.611 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:38.611 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:38.611 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:38.611 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:38.611 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:38.611 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:38.611 11:58:04 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:38.611 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:38.611 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:38.611 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:38.611 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:38.611 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:38.611 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:38.611 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:38.611 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:38.611 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:38.611 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:38.611 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:38.611 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:38.611 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:38.611 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:38.611 11:58:04 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:38.611 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:38.611 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:38.611 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:38.611 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:38.611 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:29:38.611 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:38.611 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:38.611 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:38.611 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:38.611 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:38.611 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:38.611 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:38.611 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:38.611 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:38.611 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:38.611 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:38.611 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:38.611 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:38.611 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:38.611 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:38.611 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:38.611 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:38.611 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:38.611 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:38.612 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:38.612 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:38.612 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:38.612 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:38.612 11:58:04 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:29:38.612 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:29:38.612 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:29:38.612 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:29:38.612 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.313 ms
00:29:38.612
00:29:38.612 --- 10.0.0.2 ping statistics ---
00:29:38.612 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:29:38.612 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms
00:29:38.612 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:29:38.612 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:29:38.612 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.073 ms
00:29:38.612
00:29:38.612 --- 10.0.0.1 ping statistics ---
00:29:38.612 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:29:38.612 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms
00:29:38.612 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:29:38.612 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0
00:29:38.612 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:29:38.612 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:29:38.612 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:29:38.612 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:29:38.612 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:29:38.612 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:29:38.612 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:29:38.612 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E
00:29:38.612 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:29:38.612 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable
00:29:38.612 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:29:38.612 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=3056996
00:29:38.612 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:29:38.612 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 3056996
00:29:38.612 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 3056996 ']'
00:29:38.612 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:38.612 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100
00:29:38.612 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:38.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:38.612 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable
00:29:38.612 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:29:38.612 [2024-11-18 11:58:04.400652] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization...
00:29:38.612 [2024-11-18 11:58:04.400810] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:29:38.870 [2024-11-18 11:58:04.545833] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:29:38.870 [2024-11-18 11:58:04.674308] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:29:38.870 [2024-11-18 11:58:04.674397] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:29:38.870 [2024-11-18 11:58:04.674419] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:29:38.870 [2024-11-18 11:58:04.674439] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:29:38.870 [2024-11-18 11:58:04.674455] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:29:38.870 [2024-11-18 11:58:04.677069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:29:38.870 [2024-11-18 11:58:04.677131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:29:38.870 [2024-11-18 11:58:04.677244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:29:38.870 [2024-11-18 11:58:04.677252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:29:39.810 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:29:39.810 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0
00:29:39.810 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:29:39.810 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable
00:29:39.810 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:29:39.810 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:29:39.810 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:29:39.810 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:39.810 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:29:39.810 [2024-11-18 11:58:05.437041] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:29:39.810 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:39.810 11:58:05
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:39.810 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:39.810 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:39.810 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:39.810 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:39.810 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:39.810 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:39.810 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:39.810 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:39.810 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:39.810 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:39.810 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:39.810 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:39.810 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:39.810 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 
00:29:39.810 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:39.810 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:39.810 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:39.810 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:39.810 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:39.810 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:39.810 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:39.810 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:39.810 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:39.810 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:39.810 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:29:39.810 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:39.810 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:39.810 Malloc1 00:29:39.810 [2024-11-18 11:58:05.587098] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:39.810 Malloc2 00:29:40.069 Malloc3 00:29:40.069 Malloc4 00:29:40.327 Malloc5 00:29:40.327 Malloc6 00:29:40.327 Malloc7 00:29:40.585 Malloc8 00:29:40.585 Malloc9 
00:29:40.845 Malloc10 00:29:40.845 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:40.845 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:40.845 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:40.845 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:40.845 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=3057308 00:29:40.845 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 3057308 /var/tmp/bdevperf.sock 00:29:40.845 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 3057308 ']' 00:29:40.845 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:40.845 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:29:40.845 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:40.845 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:40.845 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:29:40.845 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:29:40.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:40.845 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:29:40.845 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:40.845 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:40.845 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:40.845 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:40.845 { 00:29:40.845 "params": { 00:29:40.845 "name": "Nvme$subsystem", 00:29:40.845 "trtype": "$TEST_TRANSPORT", 00:29:40.845 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:40.845 "adrfam": "ipv4", 00:29:40.845 "trsvcid": "$NVMF_PORT", 00:29:40.845 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:40.845 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:40.845 "hdgst": ${hdgst:-false}, 00:29:40.845 "ddgst": ${ddgst:-false} 00:29:40.845 }, 00:29:40.845 "method": "bdev_nvme_attach_controller" 00:29:40.845 } 00:29:40.845 EOF 00:29:40.845 )") 00:29:40.845 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:40.845 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:40.845 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:40.845 { 00:29:40.845 "params": { 00:29:40.845 "name": "Nvme$subsystem", 00:29:40.845 "trtype": "$TEST_TRANSPORT", 00:29:40.845 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:40.845 "adrfam": "ipv4", 00:29:40.845 "trsvcid": "$NVMF_PORT", 00:29:40.845 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:29:40.845 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:40.845 "hdgst": ${hdgst:-false}, 00:29:40.845 "ddgst": ${ddgst:-false} 00:29:40.845 }, 00:29:40.845 "method": "bdev_nvme_attach_controller" 00:29:40.845 } 00:29:40.845 EOF 00:29:40.845 )") 00:29:40.845 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:40.845 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:40.845 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:40.845 { 00:29:40.845 "params": { 00:29:40.845 "name": "Nvme$subsystem", 00:29:40.845 "trtype": "$TEST_TRANSPORT", 00:29:40.845 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:40.845 "adrfam": "ipv4", 00:29:40.845 "trsvcid": "$NVMF_PORT", 00:29:40.845 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:40.845 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:40.845 "hdgst": ${hdgst:-false}, 00:29:40.845 "ddgst": ${ddgst:-false} 00:29:40.845 }, 00:29:40.845 "method": "bdev_nvme_attach_controller" 00:29:40.845 } 00:29:40.845 EOF 00:29:40.845 )") 00:29:40.845 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:40.845 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:40.845 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:40.845 { 00:29:40.845 "params": { 00:29:40.845 "name": "Nvme$subsystem", 00:29:40.845 "trtype": "$TEST_TRANSPORT", 00:29:40.845 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:40.845 "adrfam": "ipv4", 00:29:40.845 "trsvcid": "$NVMF_PORT", 00:29:40.845 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:40.845 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:40.845 "hdgst": 
${hdgst:-false}, 00:29:40.845 "ddgst": ${ddgst:-false} 00:29:40.845 }, 00:29:40.845 "method": "bdev_nvme_attach_controller" 00:29:40.845 } 00:29:40.845 EOF 00:29:40.845 )") 00:29:40.845 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:40.845 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:40.845 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:40.845 { 00:29:40.845 "params": { 00:29:40.845 "name": "Nvme$subsystem", 00:29:40.845 "trtype": "$TEST_TRANSPORT", 00:29:40.845 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:40.845 "adrfam": "ipv4", 00:29:40.845 "trsvcid": "$NVMF_PORT", 00:29:40.845 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:40.845 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:40.845 "hdgst": ${hdgst:-false}, 00:29:40.845 "ddgst": ${ddgst:-false} 00:29:40.845 }, 00:29:40.845 "method": "bdev_nvme_attach_controller" 00:29:40.845 } 00:29:40.845 EOF 00:29:40.845 )") 00:29:40.845 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:40.845 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:40.845 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:40.845 { 00:29:40.845 "params": { 00:29:40.845 "name": "Nvme$subsystem", 00:29:40.845 "trtype": "$TEST_TRANSPORT", 00:29:40.845 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:40.845 "adrfam": "ipv4", 00:29:40.845 "trsvcid": "$NVMF_PORT", 00:29:40.845 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:40.845 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:40.845 "hdgst": ${hdgst:-false}, 00:29:40.845 "ddgst": ${ddgst:-false} 00:29:40.845 }, 00:29:40.845 "method": "bdev_nvme_attach_controller" 
00:29:40.845 } 00:29:40.845 EOF 00:29:40.845 )") 00:29:40.845 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:40.845 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:40.845 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:40.845 { 00:29:40.845 "params": { 00:29:40.845 "name": "Nvme$subsystem", 00:29:40.845 "trtype": "$TEST_TRANSPORT", 00:29:40.845 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:40.845 "adrfam": "ipv4", 00:29:40.845 "trsvcid": "$NVMF_PORT", 00:29:40.845 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:40.845 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:40.845 "hdgst": ${hdgst:-false}, 00:29:40.845 "ddgst": ${ddgst:-false} 00:29:40.845 }, 00:29:40.845 "method": "bdev_nvme_attach_controller" 00:29:40.845 } 00:29:40.845 EOF 00:29:40.845 )") 00:29:40.845 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:40.845 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:40.845 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:40.845 { 00:29:40.845 "params": { 00:29:40.845 "name": "Nvme$subsystem", 00:29:40.845 "trtype": "$TEST_TRANSPORT", 00:29:40.845 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:40.845 "adrfam": "ipv4", 00:29:40.845 "trsvcid": "$NVMF_PORT", 00:29:40.845 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:40.845 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:40.846 "hdgst": ${hdgst:-false}, 00:29:40.846 "ddgst": ${ddgst:-false} 00:29:40.846 }, 00:29:40.846 "method": "bdev_nvme_attach_controller" 00:29:40.846 } 00:29:40.846 EOF 00:29:40.846 )") 00:29:40.846 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 
-- nvmf/common.sh@582 -- # cat 00:29:40.846 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:40.846 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:40.846 { 00:29:40.846 "params": { 00:29:40.846 "name": "Nvme$subsystem", 00:29:40.846 "trtype": "$TEST_TRANSPORT", 00:29:40.846 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:40.846 "adrfam": "ipv4", 00:29:40.846 "trsvcid": "$NVMF_PORT", 00:29:40.846 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:40.846 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:40.846 "hdgst": ${hdgst:-false}, 00:29:40.846 "ddgst": ${ddgst:-false} 00:29:40.846 }, 00:29:40.846 "method": "bdev_nvme_attach_controller" 00:29:40.846 } 00:29:40.846 EOF 00:29:40.846 )") 00:29:40.846 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:40.846 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:40.846 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:40.846 { 00:29:40.846 "params": { 00:29:40.846 "name": "Nvme$subsystem", 00:29:40.846 "trtype": "$TEST_TRANSPORT", 00:29:40.846 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:40.846 "adrfam": "ipv4", 00:29:40.846 "trsvcid": "$NVMF_PORT", 00:29:40.846 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:40.846 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:40.846 "hdgst": ${hdgst:-false}, 00:29:40.846 "ddgst": ${ddgst:-false} 00:29:40.846 }, 00:29:40.846 "method": "bdev_nvme_attach_controller" 00:29:40.846 } 00:29:40.846 EOF 00:29:40.846 )") 00:29:40.846 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:40.846 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@584 -- # jq . 00:29:40.846 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:29:40.846 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:40.846 "params": { 00:29:40.846 "name": "Nvme1", 00:29:40.846 "trtype": "tcp", 00:29:40.846 "traddr": "10.0.0.2", 00:29:40.846 "adrfam": "ipv4", 00:29:40.846 "trsvcid": "4420", 00:29:40.846 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:40.846 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:40.846 "hdgst": false, 00:29:40.846 "ddgst": false 00:29:40.846 }, 00:29:40.846 "method": "bdev_nvme_attach_controller" 00:29:40.846 },{ 00:29:40.846 "params": { 00:29:40.846 "name": "Nvme2", 00:29:40.846 "trtype": "tcp", 00:29:40.846 "traddr": "10.0.0.2", 00:29:40.846 "adrfam": "ipv4", 00:29:40.846 "trsvcid": "4420", 00:29:40.846 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:40.846 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:40.846 "hdgst": false, 00:29:40.846 "ddgst": false 00:29:40.846 }, 00:29:40.846 "method": "bdev_nvme_attach_controller" 00:29:40.846 },{ 00:29:40.846 "params": { 00:29:40.846 "name": "Nvme3", 00:29:40.846 "trtype": "tcp", 00:29:40.846 "traddr": "10.0.0.2", 00:29:40.846 "adrfam": "ipv4", 00:29:40.846 "trsvcid": "4420", 00:29:40.846 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:40.846 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:40.846 "hdgst": false, 00:29:40.846 "ddgst": false 00:29:40.846 }, 00:29:40.846 "method": "bdev_nvme_attach_controller" 00:29:40.846 },{ 00:29:40.846 "params": { 00:29:40.846 "name": "Nvme4", 00:29:40.846 "trtype": "tcp", 00:29:40.846 "traddr": "10.0.0.2", 00:29:40.846 "adrfam": "ipv4", 00:29:40.846 "trsvcid": "4420", 00:29:40.846 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:40.846 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:40.846 "hdgst": false, 00:29:40.846 "ddgst": false 00:29:40.846 }, 00:29:40.846 "method": "bdev_nvme_attach_controller" 00:29:40.846 },{ 
00:29:40.846 "params": { 00:29:40.846 "name": "Nvme5", 00:29:40.846 "trtype": "tcp", 00:29:40.846 "traddr": "10.0.0.2", 00:29:40.846 "adrfam": "ipv4", 00:29:40.846 "trsvcid": "4420", 00:29:40.846 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:40.846 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:40.846 "hdgst": false, 00:29:40.846 "ddgst": false 00:29:40.846 }, 00:29:40.846 "method": "bdev_nvme_attach_controller" 00:29:40.846 },{ 00:29:40.846 "params": { 00:29:40.846 "name": "Nvme6", 00:29:40.846 "trtype": "tcp", 00:29:40.846 "traddr": "10.0.0.2", 00:29:40.846 "adrfam": "ipv4", 00:29:40.846 "trsvcid": "4420", 00:29:40.846 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:40.846 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:40.846 "hdgst": false, 00:29:40.846 "ddgst": false 00:29:40.846 }, 00:29:40.846 "method": "bdev_nvme_attach_controller" 00:29:40.846 },{ 00:29:40.846 "params": { 00:29:40.846 "name": "Nvme7", 00:29:40.846 "trtype": "tcp", 00:29:40.846 "traddr": "10.0.0.2", 00:29:40.846 "adrfam": "ipv4", 00:29:40.846 "trsvcid": "4420", 00:29:40.846 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:40.846 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:40.846 "hdgst": false, 00:29:40.846 "ddgst": false 00:29:40.846 }, 00:29:40.846 "method": "bdev_nvme_attach_controller" 00:29:40.846 },{ 00:29:40.846 "params": { 00:29:40.846 "name": "Nvme8", 00:29:40.846 "trtype": "tcp", 00:29:40.846 "traddr": "10.0.0.2", 00:29:40.846 "adrfam": "ipv4", 00:29:40.846 "trsvcid": "4420", 00:29:40.846 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:40.846 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:40.846 "hdgst": false, 00:29:40.846 "ddgst": false 00:29:40.846 }, 00:29:40.846 "method": "bdev_nvme_attach_controller" 00:29:40.846 },{ 00:29:40.846 "params": { 00:29:40.846 "name": "Nvme9", 00:29:40.846 "trtype": "tcp", 00:29:40.846 "traddr": "10.0.0.2", 00:29:40.846 "adrfam": "ipv4", 00:29:40.846 "trsvcid": "4420", 00:29:40.846 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:40.846 "hostnqn": 
"nqn.2016-06.io.spdk:host9",
00:29:40.846 "hdgst": false,
00:29:40.846 "ddgst": false
00:29:40.846 },
00:29:40.846 "method": "bdev_nvme_attach_controller"
00:29:40.846 },{
00:29:40.846 "params": {
00:29:40.846 "name": "Nvme10",
00:29:40.846 "trtype": "tcp",
00:29:40.846 "traddr": "10.0.0.2",
00:29:40.846 "adrfam": "ipv4",
00:29:40.846 "trsvcid": "4420",
00:29:40.846 "subnqn": "nqn.2016-06.io.spdk:cnode10",
00:29:40.846 "hostnqn": "nqn.2016-06.io.spdk:host10",
00:29:40.846 "hdgst": false,
00:29:40.846 "ddgst": false
00:29:40.846 },
00:29:40.846 "method": "bdev_nvme_attach_controller"
00:29:40.846 }'
00:29:40.846 [2024-11-18 11:58:06.612629] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization...
00:29:40.846 [2024-11-18 11:58:06.612784] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3057308 ]
00:29:41.105 [2024-11-18 11:58:06.758826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:41.105 [2024-11-18 11:58:06.887026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:29:43.012 Running I/O for 10 seconds...
00:29:43.578 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:43.578 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:29:43.578 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:43.578 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.578 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:43.578 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.578 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:43.578 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:29:43.578 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:29:43.578 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:29:43.578 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:29:43.578 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:29:43.578 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:29:43.578 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:43.578 11:58:09 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:43.578 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:43.578 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.578 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:43.578 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.578 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=77 00:29:43.578 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 77 -ge 100 ']' 00:29:43.578 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:29:43.836 1408.00 IOPS, 88.00 MiB/s [2024-11-18T10:58:09.721Z] 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:29:43.836 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:43.836 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:43.836 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:43.836 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.836 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:43.836 11:58:09 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.836 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:29:43.836 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:29:43.836 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:29:43.836 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:29:43.836 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:29:43.836 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 3056996 00:29:43.836 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 3056996 ']' 00:29:43.836 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 3056996 00:29:43.836 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:29:43.836 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:43.836 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3056996 00:29:44.113 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:44.113 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:44.113 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3056996' 
00:29:44.113 killing process with pid 3056996 00:29:44.113 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 3056996 00:29:44.113 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 3056996
00:29:44.113 [2024-11-18 11:58:09.728667] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set
00:29:44.113 [2024-11-18 11:58:09.733387] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set
00:29:44.114 [... identical recv-state message for tqpair=0x618000009880 repeated through 11:58:09.734634; duplicates elided ...]
00:29:44.114 [2024-11-18 11:58:09.737500] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set
00:29:44.115 [... identical recv-state message for tqpair=0x618000007880 repeated through 11:58:09.738614; duplicates elided ...]
00:29:44.115 [2024-11-18 11:58:09.742287] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set
00:29:44.116 [... identical recv-state message for tqpair=0x618000007c80 repeated through 11:58:09.743395; duplicates elided ...]
00:29:44.116 [2024-11-18 11:58:09.743412]
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:44.116 [2024-11-18 11:58:09.743430] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:44.116 [2024-11-18 11:58:09.743447] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:44.116 [2024-11-18 11:58:09.743464] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:44.116 [2024-11-18 11:58:09.744974] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.116 [2024-11-18 11:58:09.745035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.116 [2024-11-18 11:58:09.745066] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.116 [2024-11-18 11:58:09.745087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.116 [2024-11-18 11:58:09.745109] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.116 [2024-11-18 11:58:09.745131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.116 [2024-11-18 11:58:09.745153] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.116 [2024-11-18 11:58:09.745173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:29:44.116 [2024-11-18 11:58:09.745193] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7f00 is same with the state(6) to be set 00:29:44.116 [2024-11-18 11:58:09.745331] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.116 [2024-11-18 11:58:09.745361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.116 [2024-11-18 11:58:09.745391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.116 [2024-11-18 11:58:09.745412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.116 [2024-11-18 11:58:09.745433] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.116 [2024-11-18 11:58:09.745453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.116 [2024-11-18 11:58:09.745474] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.116 [2024-11-18 11:58:09.745503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.116 [2024-11-18 11:58:09.745525] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:29:44.116 [2024-11-18 11:58:09.745602] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.116 [2024-11-18 11:58:09.745632] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.116 [2024-11-18 11:58:09.745654] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.116 [2024-11-18 11:58:09.745675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.116 [2024-11-18 11:58:09.745697] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.116 [2024-11-18 11:58:09.745717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.116 [2024-11-18 11:58:09.745739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.116 [2024-11-18 11:58:09.745760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.116 [2024-11-18 11:58:09.745779] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2f00 is same with the state(6) to be set 00:29:44.116 [2024-11-18 11:58:09.745848] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.116 [2024-11-18 11:58:09.745876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.116 [2024-11-18 11:58:09.745917] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.116 [2024-11-18 11:58:09.745939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:29:44.116 [2024-11-18 11:58:09.745961] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.116 [2024-11-18 11:58:09.745981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.116 [2024-11-18 11:58:09.746002] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.117 [2024-11-18 11:58:09.746022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.117 [2024-11-18 11:58:09.746041] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f3900 is same with the state(6) to be set 00:29:44.117 [2024-11-18 11:58:09.746128] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.117 [2024-11-18 11:58:09.746150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.117 [2024-11-18 11:58:09.746169] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.117 [2024-11-18 11:58:09.746181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.117 [2024-11-18 11:58:09.746191] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.117 [2024-11-18 11:58:09.746211] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.117 [2024-11-18 11:58:09.746225] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.117
[2024-11-18 11:58:09.746229] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.117
[2024-11-18 11:58:09.746249] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.117
[2024-11-18 11:58:09.746250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.117
[2024-11-18 11:58:09.746267] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.117
[2024-11-18 11:58:09.746277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.117
[2024-11-18 11:58:09.746287] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.117
[2024-11-18 11:58:09.746299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.117
[2024-11-18 11:58:09.746305] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.117
[2024-11-18 11:58:09.746323] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.117
[2024-11-18 11:58:09.746324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.117
[2024-11-18 11:58:09.746340] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.117
[2024-11-18 11:58:09.746346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.117
[2024-11-18 11:58:09.746358] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.117
[2024-11-18 11:58:09.746371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.117
[2024-11-18 11:58:09.746377] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.117
[2024-11-18 11:58:09.746393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.117
[2024-11-18 11:58:09.746395] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.117
[2024-11-18 11:58:09.746415] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.117
[2024-11-18 11:58:09.746420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.117
[2024-11-18 11:58:09.746433] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.117
[2024-11-18 11:58:09.746447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.117
[2024-11-18 11:58:09.746452] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.117
[2024-11-18 11:58:09.746470] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.117
[2024-11-18 11:58:09.746472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.117
[2024-11-18 11:58:09.746488] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.117
[2024-11-18 11:58:09.746512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.117
[2024-11-18 11:58:09.746520] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.117
[2024-11-18 11:58:09.746540] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.117
[2024-11-18 11:58:09.746540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.117
[2024-11-18 11:58:09.746570] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.117
[2024-11-18 11:58:09.746573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.117
[2024-11-18 11:58:09.746588] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.117
[2024-11-18 11:58:09.746597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.117
[2024-11-18 11:58:09.746607] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.117
[2024-11-18 11:58:09.746623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.117
[2024-11-18 11:58:09.746633] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.117
[2024-11-18 11:58:09.746649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.117
[2024-11-18 11:58:09.746651] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.117
[2024-11-18 11:58:09.746672] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.117
[2024-11-18 11:58:09.746673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.117
[2024-11-18 11:58:09.746692] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.117
[2024-11-18 11:58:09.746699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.117
[2024-11-18 11:58:09.746711] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.117
[2024-11-18 11:58:09.746721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.117
[2024-11-18 11:58:09.746729] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.117
[2024-11-18 11:58:09.746746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.117
[2024-11-18 11:58:09.746754] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.117
[2024-11-18 11:58:09.746769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.117
[2024-11-18 11:58:09.746773] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.117
[2024-11-18 11:58:09.746792] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.117
[2024-11-18 11:58:09.746794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.117
[2024-11-18 11:58:09.746809] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.117
[2024-11-18 11:58:09.746816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.117
[2024-11-18 11:58:09.746827] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.117
[2024-11-18 11:58:09.746840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.117
[2024-11-18 11:58:09.746845] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.117
[2024-11-18 11:58:09.746863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.117
[2024-11-18 11:58:09.746863] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.117
[2024-11-18 11:58:09.746884] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.117
[2024-11-18 11:58:09.746888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.117
[2024-11-18 11:58:09.746910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.118
[2024-11-18 11:58:09.746901] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.118
[2024-11-18 11:58:09.746934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.118
[2024-11-18 11:58:09.746940] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.118
[2024-11-18 11:58:09.746955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.118
[2024-11-18 11:58:09.746959] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.118
[2024-11-18 11:58:09.746978] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.118
[2024-11-18 11:58:09.746980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.118
[2024-11-18 11:58:09.746995] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.118
[2024-11-18 11:58:09.747001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.118
[2024-11-18 11:58:09.747014] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.118
[2024-11-18 11:58:09.747031] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.118
[2024-11-18 11:58:09.747031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.118
[2024-11-18 11:58:09.747049] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.118
[2024-11-18 11:58:09.747054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.118
[2024-11-18 11:58:09.747067] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.118
[2024-11-18 11:58:09.747080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.118
[2024-11-18 11:58:09.747085] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.118
[2024-11-18 11:58:09.747102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.118
[2024-11-18 11:58:09.747103] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.118
[2024-11-18 11:58:09.747123] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.118
[2024-11-18 11:58:09.747128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.118
[2024-11-18 11:58:09.747142] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.118
[2024-11-18 11:58:09.747150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.118
[2024-11-18 11:58:09.747160] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.118
[2024-11-18 11:58:09.747176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.118
[2024-11-18 11:58:09.747178] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.118
[2024-11-18 11:58:09.747199] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.118
[2024-11-18 11:58:09.747199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.118
[2024-11-18 11:58:09.747218] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.118
[2024-11-18 11:58:09.747226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.118
[2024-11-18 11:58:09.747236] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.118
[2024-11-18 11:58:09.747247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.118
[2024-11-18 11:58:09.747255] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.118
[2024-11-18 11:58:09.747273] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.118
[2024-11-18 11:58:09.747273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.118
[2024-11-18 11:58:09.747296] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.118
[2024-11-18 11:58:09.747299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.118
[2024-11-18 11:58:09.747314] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.118
[2024-11-18 11:58:09.747325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.118
[2024-11-18 11:58:09.747332] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.118
[2024-11-18 11:58:09.747347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.118
[2024-11-18 11:58:09.747350] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.118
[2024-11-18 11:58:09.747368]
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.118 [2024-11-18 11:58:09.747371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.118 [2024-11-18 11:58:09.747393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.118 [2024-11-18 11:58:09.747418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.118 [2024-11-18 11:58:09.747439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.118 [2024-11-18 11:58:09.747464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.118 [2024-11-18 11:58:09.747485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.118 [2024-11-18 11:58:09.747520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.118 [2024-11-18 11:58:09.747550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.118 [2024-11-18 11:58:09.747574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.118 [2024-11-18 11:58:09.747595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.118 [2024-11-18 11:58:09.747620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.118 [2024-11-18 11:58:09.747643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.118 [2024-11-18 11:58:09.747668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.119 [2024-11-18 11:58:09.747689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.119 [2024-11-18 11:58:09.747713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.119 [2024-11-18 11:58:09.747734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.119 [2024-11-18 11:58:09.747758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.119 [2024-11-18 11:58:09.747787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.119 [2024-11-18 11:58:09.747813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.119 [2024-11-18 11:58:09.747835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.119 [2024-11-18 11:58:09.747860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.119 [2024-11-18 11:58:09.747882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.119 [2024-11-18 11:58:09.747906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.119 [2024-11-18 11:58:09.747928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.119 [2024-11-18 11:58:09.747951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.119 [2024-11-18 11:58:09.747974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.119 [2024-11-18 11:58:09.747999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.119 [2024-11-18 11:58:09.748020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.119 [2024-11-18 11:58:09.748044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.119 [2024-11-18 11:58:09.748066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.119 [2024-11-18 11:58:09.748090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.119 [2024-11-18 11:58:09.748111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.119 [2024-11-18 11:58:09.748135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.119 
[2024-11-18 11:58:09.748157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.119 [2024-11-18 11:58:09.748182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.119 [2024-11-18 11:58:09.748205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.119 [2024-11-18 11:58:09.748230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.119 [2024-11-18 11:58:09.748252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.119 [2024-11-18 11:58:09.748276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.119 [2024-11-18 11:58:09.748298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.119 [2024-11-18 11:58:09.748322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.119 [2024-11-18 11:58:09.748344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.119 [2024-11-18 11:58:09.748372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.119 [2024-11-18 11:58:09.748395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.119 [2024-11-18 11:58:09.748419] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.119 [2024-11-18 11:58:09.748442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.119 [2024-11-18 11:58:09.748467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.119 [2024-11-18 11:58:09.748488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.119 [2024-11-18 11:58:09.748523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.119 [2024-11-18 11:58:09.748549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.119 [2024-11-18 11:58:09.748585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.119 [2024-11-18 11:58:09.748617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.119 [2024-11-18 11:58:09.748641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.119 [2024-11-18 11:58:09.748663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.119 [2024-11-18 11:58:09.748688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.119 [2024-11-18 11:58:09.748710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.119 [2024-11-18 11:58:09.748734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.119 [2024-11-18 11:58:09.748755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.119 [2024-11-18 11:58:09.748780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.119 [2024-11-18 11:58:09.748803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.119 [2024-11-18 11:58:09.748827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.119 [2024-11-18 11:58:09.748849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.119 [2024-11-18 11:58:09.748873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.119 [2024-11-18 11:58:09.748895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.119 [2024-11-18 11:58:09.748919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.119 [2024-11-18 11:58:09.748941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.119 [2024-11-18 11:58:09.748966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.119 [2024-11-18 11:58:09.748994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.119 [2024-11-18 11:58:09.748997] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.119 [2024-11-18 11:58:09.749019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.119 [2024-11-18 11:58:09.749033] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.119 [2024-11-18 11:58:09.749043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.119 [2024-11-18 11:58:09.749055] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.119 [2024-11-18 11:58:09.749068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.119 [2024-11-18 11:58:09.749074] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.119 [2024-11-18 11:58:09.749091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.119 [2024-11-18 11:58:09.749093] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.119 [2024-11-18 11:58:09.749111] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.119 [2024-11-18 11:58:09.749115] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.119 [2024-11-18 11:58:09.749131] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.119 [2024-11-18 11:58:09.749150] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.119 [2024-11-18 11:58:09.749151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.119 [2024-11-18 11:58:09.749167] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.119 [2024-11-18 11:58:09.749179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.119 [2024-11-18 11:58:09.749185] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.119 [2024-11-18 11:58:09.749203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.120 [2024-11-18 11:58:09.749203] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.120 [2024-11-18 11:58:09.749224] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.120 [2024-11-18 11:58:09.749229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.120 [2024-11-18 11:58:09.749241] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the
state(6) to be set 00:29:44.120 [2024-11-18 11:58:09.749252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.120 [2024-11-18 11:58:09.749260] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.120 [2024-11-18 11:58:09.749276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.120 [2024-11-18 11:58:09.749278] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.120 [2024-11-18 11:58:09.749302] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.120 [2024-11-18 11:58:09.749304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.120 [2024-11-18 11:58:09.749320] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.120 [2024-11-18 11:58:09.749339] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.120 [2024-11-18 11:58:09.749356] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.120 [2024-11-18 11:58:09.749374] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.120 [2024-11-18 11:58:09.749392] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.120 [2024-11-18 11:58:09.749409] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv
state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.120 [2024-11-18 11:58:09.749427] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.120 [2024-11-18 11:58:09.749445] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.120 [2024-11-18 11:58:09.749462] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.120 [2024-11-18 11:58:09.749480] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.120 [2024-11-18 11:58:09.749508] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.120 [2024-11-18 11:58:09.749528] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.120 [2024-11-18 11:58:09.749556] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.120 [2024-11-18 11:58:09.749574] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.120 [2024-11-18 11:58:09.749592] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.120 [2024-11-18 11:58:09.749610] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.120 [2024-11-18 11:58:09.749634] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.120 [2024-11-18 11:58:09.749653] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.120 [2024-11-18 11:58:09.749700] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.120 [2024-11-18 11:58:09.749738] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.120 [2024-11-18 11:58:09.749759] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.120 [2024-11-18 11:58:09.749777] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.120 [2024-11-18 11:58:09.749796] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.120 [2024-11-18 11:58:09.749821] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.120 [2024-11-18 11:58:09.749840] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.120 [2024-11-18 11:58:09.749858] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.120 [2024-11-18 11:58:09.749876] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.120 [2024-11-18 11:58:09.749894] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.120 [2024-11-18 11:58:09.749911] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.120 
[2024-11-18 11:58:09.749929] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.120 [2024-11-18 11:58:09.749947] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.120 [2024-11-18 11:58:09.749965] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.120 [2024-11-18 11:58:09.749982] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.120 [2024-11-18 11:58:09.750006] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.120 [2024-11-18 11:58:09.750027] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.120 [2024-11-18 11:58:09.750046] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.120 [2024-11-18 11:58:09.750064] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.120 [2024-11-18 11:58:09.750082] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.120 [2024-11-18 11:58:09.750099] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.120 [2024-11-18 11:58:09.750117] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.120 [2024-11-18 11:58:09.750134] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the 
state(6) to be set 00:29:44.120 [2024-11-18 11:58:09.750158] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.120 [2024-11-18 11:58:09.750178] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.120 [2024-11-18 11:58:09.750195] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.120 [2024-11-18 11:58:09.750213] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.120 [2024-11-18 11:58:09.750231] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.120 [2024-11-18 11:58:09.750248] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.120 [2024-11-18 11:58:09.752314] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:29:44.120 [2024-11-18 11:58:09.752378] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:29:44.120 [2024-11-18 11:58:09.752821] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.120 [2024-11-18 11:58:09.752861] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.120 [2024-11-18 11:58:09.752883] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.120 [2024-11-18 11:58:09.752916] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with 
the state(6) to be set 00:29:44.120 [2024-11-18 11:58:09.752939] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.120 [2024-11-18 11:58:09.752958] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.120 [2024-11-18 11:58:09.752975] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.120 [2024-11-18 11:58:09.752993] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.120 [2024-11-18 11:58:09.753012] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.120 [2024-11-18 11:58:09.753029] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.120 [2024-11-18 11:58:09.753046] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.120 [2024-11-18 11:58:09.753063] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.120 [2024-11-18 11:58:09.753081] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.120 [2024-11-18 11:58:09.753098] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.120 [2024-11-18 11:58:09.753116] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.120 [2024-11-18 11:58:09.753133] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.121 [2024-11-18 11:58:09.753150] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.121 [2024-11-18 11:58:09.753168] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.121 [2024-11-18 11:58:09.753186] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.121 [2024-11-18 11:58:09.753203] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.121 [2024-11-18 11:58:09.753220] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.121 [2024-11-18 11:58:09.753237] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.121 [2024-11-18 11:58:09.753254] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.121 [2024-11-18 11:58:09.753272] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.121 [2024-11-18 11:58:09.753290] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.121 [2024-11-18 11:58:09.753307] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.121 [2024-11-18 11:58:09.753325] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.121 [2024-11-18 11:58:09.753349] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.121 [2024-11-18 11:58:09.753367] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.121 [2024-11-18 11:58:09.753385] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.121 [2024-11-18 11:58:09.753403] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.121 [2024-11-18 11:58:09.753421] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.121 [2024-11-18 11:58:09.753438] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.121 [2024-11-18 11:58:09.753456] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.121 [2024-11-18 11:58:09.753474] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.121 [2024-11-18 11:58:09.753498] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.121 [2024-11-18 11:58:09.753519] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.121 [2024-11-18 11:58:09.753546] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.121 [2024-11-18 11:58:09.753564] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.121 
[2024-11-18 11:58:09.753582] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.121 [2024-11-18 11:58:09.753600] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.121 [2024-11-18 11:58:09.753617] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.121 [2024-11-18 11:58:09.753640] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.121 [2024-11-18 11:58:09.753657] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.121 [2024-11-18 11:58:09.753675] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.121 [2024-11-18 11:58:09.753692] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.121 [2024-11-18 11:58:09.753710] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.121 [2024-11-18 11:58:09.753728] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.121 [2024-11-18 11:58:09.753745] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.121 [2024-11-18 11:58:09.753763] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.121 [2024-11-18 11:58:09.753780] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the 
state(6) to be set 00:29:44.121 [2024-11-18 11:58:09.753798] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.121 [2024-11-18 11:58:09.753816] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.121 [2024-11-18 11:58:09.753838] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.121 [2024-11-18 11:58:09.753876] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.121 [2024-11-18 11:58:09.753870] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:44.121 [2024-11-18 11:58:09.753899] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.121 [2024-11-18 11:58:09.753937] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.121 [2024-11-18 11:58:09.753958] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.121 [2024-11-18 11:58:09.753976] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.121 [2024-11-18 11:58:09.753993] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.121 [2024-11-18 11:58:09.754011] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.121 [2024-11-18 11:58:09.754029] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 
00:29:44.121 [2024-11-18 11:58:09.754046] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.121 [2024-11-18 11:58:09.754422] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:44.121 [2024-11-18 11:58:09.754626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.121 [2024-11-18 11:58:09.754668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:29:44.121 [2024-11-18 11:58:09.754703] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:29:44.121 [2024-11-18 11:58:09.754974] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:44.121 [2024-11-18 11:58:09.755407] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:29:44.121 [2024-11-18 11:58:09.755480] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7f00 (9): Bad file descriptor 00:29:44.121 [2024-11-18 11:58:09.755586] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.121 [2024-11-18 11:58:09.755629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.121 [2024-11-18 11:58:09.755654] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.121 [2024-11-18 11:58:09.755676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.121 [2024-11-18 11:58:09.755697] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC 
EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.121 [2024-11-18 11:58:09.755718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.121 [2024-11-18 11:58:09.755739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.121 [2024-11-18 11:58:09.755761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.121 [2024-11-18 11:58:09.755780] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f4300 is same with the state(6) to be set 00:29:44.121 [2024-11-18 11:58:09.755856] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.121 [2024-11-18 11:58:09.755884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.121 [2024-11-18 11:58:09.755907] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.121 [2024-11-18 11:58:09.755927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.121 [2024-11-18 11:58:09.755949] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.121 [2024-11-18 11:58:09.755969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.121 [2024-11-18 11:58:09.755990] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.121 
[2024-11-18 11:58:09.756010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.121 [2024-11-18 11:58:09.756029] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f4d00 is same with the state(6) to be set 00:29:44.121 [2024-11-18 11:58:09.756093] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.121 [2024-11-18 11:58:09.756120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.122 [2024-11-18 11:58:09.756143] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.122 [2024-11-18 11:58:09.756163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.122 [2024-11-18 11:58:09.756190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.122 [2024-11-18 11:58:09.756225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.122 [2024-11-18 11:58:09.756251] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.122 [2024-11-18 11:58:09.756271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.122 [2024-11-18 11:58:09.756290] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f5700 is same with the state(6) to be set 00:29:44.122 [2024-11-18 11:58:09.756335] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x6150001f2f00 (9): Bad file descriptor 00:29:44.122 [2024-11-18 11:58:09.756381] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f3900 (9): Bad file descriptor 00:29:44.122 [2024-11-18 11:58:09.756531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.122 [2024-11-18 11:58:09.756564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.122 [2024-11-18 11:58:09.756599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.122 [2024-11-18 11:58:09.756623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.122 [2024-11-18 11:58:09.756647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.122 [2024-11-18 11:58:09.756669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.122 [2024-11-18 11:58:09.756700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.122 [2024-11-18 11:58:09.756722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.122 [2024-11-18 11:58:09.756746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.122 [2024-11-18 11:58:09.756768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:44.122 [2024-11-18 11:58:09.756793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.122 [2024-11-18 11:58:09.756815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.122 [2024-11-18 11:58:09.756839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.122 [2024-11-18 11:58:09.756861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.122 [2024-11-18 11:58:09.756885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.122 [2024-11-18 11:58:09.756906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.122 [2024-11-18 11:58:09.756932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.122 [2024-11-18 11:58:09.756954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.122 [2024-11-18 11:58:09.756979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.122 [2024-11-18 11:58:09.756999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.122 [2024-11-18 11:58:09.757024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.122 [2024-11-18 11:58:09.757045] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.122 [2024-11-18 11:58:09.757070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.122 [2024-11-18 11:58:09.757091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.122 [2024-11-18 11:58:09.757115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.122 [2024-11-18 11:58:09.757137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.122 [2024-11-18 11:58:09.757162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.122 [2024-11-18 11:58:09.757183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.122 [2024-11-18 11:58:09.757207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.122 [2024-11-18 11:58:09.757229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.122 [2024-11-18 11:58:09.757253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.122 [2024-11-18 11:58:09.757278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.122 [2024-11-18 11:58:09.757304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 
nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.122 [2024-11-18 11:58:09.757325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.122 [2024-11-18 11:58:09.757349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.122 [2024-11-18 11:58:09.757371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.122 [2024-11-18 11:58:09.757396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.122 [2024-11-18 11:58:09.757418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.122 [2024-11-18 11:58:09.757442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.122 [2024-11-18 11:58:09.757463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.122 [2024-11-18 11:58:09.757487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.122 [2024-11-18 11:58:09.757518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.122 [2024-11-18 11:58:09.757513] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.122 [2024-11-18 11:58:09.757543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:44.122 [2024-11-18 11:58:09.757550] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.122 [2024-11-18 11:58:09.757565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.122 [2024-11-18 11:58:09.757571] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.122 [2024-11-18 11:58:09.757590] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.122 [2024-11-18 11:58:09.757589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.122 [2024-11-18 11:58:09.757611] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.122 [2024-11-18 11:58:09.757614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.122 [2024-11-18 11:58:09.757630] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.122 [2024-11-18 11:58:09.757638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.122 [2024-11-18 11:58:09.757648] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.122 [2024-11-18 11:58:09.757660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.122 [2024-11-18 11:58:09.757666] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state 
of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.122 [2024-11-18 11:58:09.757684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.122 [2024-11-18 11:58:09.757690] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.122 [2024-11-18 11:58:09.757706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.122 [2024-11-18 11:58:09.757709] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.123 [2024-11-18 11:58:09.757727] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.123 [2024-11-18 11:58:09.757730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.123 [2024-11-18 11:58:09.757745] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.123 [2024-11-18 11:58:09.757751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.123 [2024-11-18 11:58:09.757763] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.123 [2024-11-18 11:58:09.757777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.123 [2024-11-18 11:58:09.757781] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.123 [2024-11-18 
11:58:09.757799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.123 [2024-11-18 11:58:09.757800] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.123 [2024-11-18 11:58:09.757820] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.123 [2024-11-18 11:58:09.757825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.123 [2024-11-18 11:58:09.757837] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.123 [2024-11-18 11:58:09.757846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.123 [2024-11-18 11:58:09.757855] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.123 [2024-11-18 11:58:09.757870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.123 [2024-11-18 11:58:09.757873] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.123 [2024-11-18 11:58:09.757892] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.123 [2024-11-18 11:58:09.757893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.123 [2024-11-18 11:58:09.757910] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 
is same with the state(6) to be set 00:29:44.123 [2024-11-18 11:58:09.757918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.123 [2024-11-18 11:58:09.757927] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.123 [2024-11-18 11:58:09.757939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.123 [2024-11-18 11:58:09.757945] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.123 [2024-11-18 11:58:09.757964] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.123 [2024-11-18 11:58:09.757969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.123 [2024-11-18 11:58:09.757981] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.123 [2024-11-18 11:58:09.757992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.123 [2024-11-18 11:58:09.758000] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.123 [2024-11-18 11:58:09.758016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.123 [2024-11-18 11:58:09.758018] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.123 [2024-11-18 11:58:09.758039] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.123 [2024-11-18 11:58:09.758040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.123 [2024-11-18 11:58:09.758058] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.123 [2024-11-18 11:58:09.758065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.123 [2024-11-18 11:58:09.758076] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.123 [2024-11-18 11:58:09.758086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.123 [2024-11-18 11:58:09.758093] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.123 [2024-11-18 11:58:09.758110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.123 [2024-11-18 11:58:09.758112] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.123 [2024-11-18 11:58:09.758133] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.123 [2024-11-18 11:58:09.758134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.123 [2024-11-18 11:58:09.758152] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the 
state(6) to be set 00:29:44.123 [2024-11-18 11:58:09.758160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.123 [2024-11-18 11:58:09.758170] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.123 [2024-11-18 11:58:09.758182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.123 [2024-11-18 11:58:09.758188] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.123 [2024-11-18 11:58:09.758207] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.123 [2024-11-18 11:58:09.758206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.123 [2024-11-18 11:58:09.758231] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.123 [2024-11-18 11:58:09.758234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.123 [2024-11-18 11:58:09.758249] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.123 [2024-11-18 11:58:09.758259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.123 [2024-11-18 11:58:09.758267] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.123 [2024-11-18 11:58:09.758281] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.123 [2024-11-18 11:58:09.758285] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.123 [2024-11-18 11:58:09.758303] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.123 [2024-11-18 11:58:09.758305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.123 [2024-11-18 11:58:09.758320] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.123 [2024-11-18 11:58:09.758326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.123 [2024-11-18 11:58:09.758337] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.123 [2024-11-18 11:58:09.758351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.124 [2024-11-18 11:58:09.758355] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.124 [2024-11-18 11:58:09.758372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.124 [2024-11-18 11:58:09.758373] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.124 [2024-11-18 11:58:09.758394] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be 
set 00:29:44.124 [2024-11-18 11:58:09.758399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.124 [2024-11-18 11:58:09.758411] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.124 [2024-11-18 11:58:09.758421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.124 [2024-11-18 11:58:09.758429] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.124 [2024-11-18 11:58:09.758447] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.124 [2024-11-18 11:58:09.758446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.124 [2024-11-18 11:58:09.758466] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.124 [2024-11-18 11:58:09.758469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.124 [2024-11-18 11:58:09.758484] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.124 [2024-11-18 11:58:09.758514] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.124 [2024-11-18 11:58:09.758521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.124 [2024-11-18 11:58:09.758533] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.124 [2024-11-18 11:58:09.758545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.124 [2024-11-18 11:58:09.758551] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.124 [2024-11-18 11:58:09.758569] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.124 [2024-11-18 11:58:09.758570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.124 [2024-11-18 11:58:09.758586] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.124 [2024-11-18 11:58:09.758591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.124 [2024-11-18 11:58:09.758605] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.124 [2024-11-18 11:58:09.758616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.124 [2024-11-18 11:58:09.758622] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.124 [2024-11-18 11:58:09.758637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.124 [2024-11-18 11:58:09.758640] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.124 [2024-11-18 
11:58:09.758658] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.124 [2024-11-18 11:58:09.758661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.124 [2024-11-18 11:58:09.758675] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.124 [2024-11-18 11:58:09.758683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.124 [2024-11-18 11:58:09.758708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.124 [2024-11-18 11:58:09.758729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.124 [2024-11-18 11:58:09.758693] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.124 [2024-11-18 11:58:09.758754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.124 [2024-11-18 11:58:09.758776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.124 [2024-11-18 11:58:09.758800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.124 [2024-11-18 11:58:09.758822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.124 [2024-11-18 11:58:09.758851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.124 [2024-11-18 11:58:09.758873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.124 [2024-11-18 11:58:09.758897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.124 [2024-11-18 11:58:09.758920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.124 [2024-11-18 11:58:09.758945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.124 [2024-11-18 11:58:09.758967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.124 [2024-11-18 11:58:09.758991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.124 [2024-11-18 11:58:09.759012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.124 [2024-11-18 11:58:09.759038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.124 [2024-11-18 11:58:09.759060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.124 [2024-11-18 11:58:09.759085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.124 [2024-11-18 11:58:09.759106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:44.124 [2024-11-18 11:58:09.759130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.124 [2024-11-18 11:58:09.759151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.124 [2024-11-18 11:58:09.759177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.124 [2024-11-18 11:58:09.759199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.124 [2024-11-18 11:58:09.759224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.124 [2024-11-18 11:58:09.759246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.124 [2024-11-18 11:58:09.759271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.124 [2024-11-18 11:58:09.759293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.124 [2024-11-18 11:58:09.759318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.124 [2024-11-18 11:58:09.759339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.124 [2024-11-18 11:58:09.759364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.124 [2024-11-18 
11:58:09.759385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.124 [2024-11-18 11:58:09.759410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.124 [2024-11-18 11:58:09.759436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.124 [2024-11-18 11:58:09.759461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.124 [2024-11-18 11:58:09.759483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.124 [2024-11-18 11:58:09.759516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.124 [2024-11-18 11:58:09.759539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.124 [2024-11-18 11:58:09.759563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.124 [2024-11-18 11:58:09.759584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.124 [2024-11-18 11:58:09.759606] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001fa200 is same with the state(6) to be set 00:29:44.125 [2024-11-18 11:58:09.760022] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:44.125 [2024-11-18 11:58:09.760393] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 
00:29:44.125 [2024-11-18 11:58:09.760424] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:29:44.125 [2024-11-18 11:58:09.760448] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:29:44.125 [2024-11-18 11:58:09.760472] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:29:44.125 [2024-11-18 11:58:09.761124] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:44.125 [2024-11-18 11:58:09.761161] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:44.125 [2024-11-18 11:58:09.761181] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:44.125 [2024-11-18 11:58:09.761199] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:44.125 [2024-11-18 11:58:09.761217] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:44.125 [2024-11-18 11:58:09.761235] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:44.125 [2024-11-18 11:58:09.761252] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:44.125 [2024-11-18 11:58:09.761270] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:44.125 [2024-11-18 11:58:09.761287] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 
00:29:44.125 [2024-11-18 11:58:09.761304] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:44.125 [2024-11-18 11:58:09.761321] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:44.125 [2024-11-18 11:58:09.761338] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:44.125 [2024-11-18 11:58:09.761356] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:44.125 [2024-11-18 11:58:09.761373] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:44.125 [2024-11-18 11:58:09.761400] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:44.125 [2024-11-18 11:58:09.761418] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:44.125 [2024-11-18 11:58:09.761435] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:44.125 [2024-11-18 11:58:09.761452] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:44.125 [2024-11-18 11:58:09.761469] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:44.125 [2024-11-18 11:58:09.761486] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:44.125 [2024-11-18 11:58:09.761513] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same 
with the state(6) to be set 00:29:44.125 [2024-11-18 11:58:09.761531] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:44.125 [2024-11-18 11:58:09.761549] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:44.125 [2024-11-18 11:58:09.761566] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:44.125 [2024-11-18 11:58:09.761583] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:44.125 [2024-11-18 11:58:09.761601] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:44.125 [2024-11-18 11:58:09.761618] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:44.125 [2024-11-18 11:58:09.761636] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:44.125 [2024-11-18 11:58:09.761653] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:44.125 [2024-11-18 11:58:09.761670] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:44.125 [2024-11-18 11:58:09.761687] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:44.125 [2024-11-18 11:58:09.761704] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:44.125 [2024-11-18 11:58:09.761721] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x618000009080 is same with the state(6) to be set 00:29:44.125 [2024-11-18 11:58:09.761738] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:44.125 [2024-11-18 11:58:09.761756] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:44.125 [2024-11-18 11:58:09.761773] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:44.125 [2024-11-18 11:58:09.761791] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:44.125 [2024-11-18 11:58:09.761808] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:44.125 [2024-11-18 11:58:09.761825] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:44.125 [2024-11-18 11:58:09.761841] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:44.125 [2024-11-18 11:58:09.761863] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:44.125 [2024-11-18 11:58:09.761881] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:44.125 [2024-11-18 11:58:09.761899] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:44.125 [2024-11-18 11:58:09.761916] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:44.125 [2024-11-18 11:58:09.761934] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:44.125 [2024-11-18 11:58:09.761951] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:44.125 [2024-11-18 11:58:09.761969] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:44.125 [2024-11-18 11:58:09.761987] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:44.125 [2024-11-18 11:58:09.762004] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:44.125 [2024-11-18 11:58:09.762021] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:44.125 [2024-11-18 11:58:09.762038] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:44.125 [2024-11-18 11:58:09.762056] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:44.125 [2024-11-18 11:58:09.762073] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:44.125 [2024-11-18 11:58:09.762090] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:44.125 [2024-11-18 11:58:09.762087] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:29:44.125 [2024-11-18 11:58:09.762108] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:44.125 [2024-11-18 
11:58:09.762125] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:44.125 [2024-11-18 11:58:09.762143] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:44.125 [2024-11-18 11:58:09.762160] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:44.125 [2024-11-18 11:58:09.762176] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:44.125 [2024-11-18 11:58:09.762193] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:44.125 [2024-11-18 11:58:09.762210] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:44.125 [2024-11-18 11:58:09.762227] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:44.125 [2024-11-18 11:58:09.762244] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:44.125 [2024-11-18 11:58:09.762602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.125 [2024-11-18 11:58:09.762641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:29:44.125 [2024-11-18 11:58:09.762665] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2f00 is same with the state(6) to be set 00:29:44.125 [2024-11-18 11:58:09.763304] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:44.126 [2024-11-18 11:58:09.763575] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: 
*ERROR*: Failed to flush tqpair=0x6150001f2f00 (9): Bad file descriptor 00:29:44.126 [2024-11-18 11:58:09.763699] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:44.126 [2024-11-18 11:58:09.763736] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:44.126 [2024-11-18 11:58:09.763758] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:44.126 [2024-11-18 11:58:09.763776] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:44.126 [2024-11-18 11:58:09.763794] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:44.126 [2024-11-18 11:58:09.763812] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:44.126 [2024-11-18 11:58:09.763828] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:44.126 [2024-11-18 11:58:09.763830] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:44.126 [2024-11-18 11:58:09.763857] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:44.126 [2024-11-18 11:58:09.763876] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:44.126 [2024-11-18 11:58:09.763893] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:44.126 [2024-11-18 11:58:09.763911] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x618000009480 is same with the state(6) to be set 00:29:44.126 [2024-11-18 11:58:09.763929] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:44.126 [2024-11-18 11:58:09.763947] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:44.126 [2024-11-18 11:58:09.763965] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:44.126 [2024-11-18 11:58:09.763982] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:44.126 [2024-11-18 11:58:09.764000] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:44.126 [2024-11-18 11:58:09.764018] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:44.126 [2024-11-18 11:58:09.764036] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:44.126 [2024-11-18 11:58:09.764032] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:29:44.126 [2024-11-18 11:58:09.764054] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:44.126 [2024-11-18 11:58:09.764062] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:29:44.126 [2024-11-18 11:58:09.764072] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:44.126 [2024-11-18 11:58:09.764083] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: 
[nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:29:44.126 [2024-11-18 11:58:09.764089] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:44.126 [2024-11-18 11:58:09.764105] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:29:44.126 [2024-11-18 11:58:09.764113] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:44.126 [2024-11-18 11:58:09.764133] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:44.126 [2024-11-18 11:58:09.764150] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:44.126 [2024-11-18 11:58:09.764168] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:44.126 [2024-11-18 11:58:09.764185] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:44.126 [2024-11-18 11:58:09.764202] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:44.126 [2024-11-18 11:58:09.764224] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:44.126 [2024-11-18 11:58:09.764243] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:44.126 [2024-11-18 11:58:09.764260] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:44.126 [2024-11-18 11:58:09.764277] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:44.126 [2024-11-18 11:58:09.764294] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:44.126 [2024-11-18 11:58:09.764313] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:44.126 [2024-11-18 11:58:09.764330] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:44.126 [2024-11-18 11:58:09.764348] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:44.126 [2024-11-18 11:58:09.764366] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:44.126 [2024-11-18 11:58:09.764359] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:29:44.126 [2024-11-18 11:58:09.764387] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:44.126 [2024-11-18 11:58:09.764405] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:44.126 [2024-11-18 11:58:09.764423] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:44.126 [2024-11-18 11:58:09.764441] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:44.126 [2024-11-18 11:58:09.764459] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:44.126 [2024-11-18 11:58:09.764477] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:44.126 [2024-11-18 11:58:09.764507] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:44.126 [2024-11-18 11:58:09.764528] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:44.126 [2024-11-18 11:58:09.764546] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:44.126 [2024-11-18 11:58:09.764571] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:44.126 [2024-11-18 11:58:09.764590] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:44.126 [2024-11-18 11:58:09.764608] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:44.126 [2024-11-18 11:58:09.764626] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:44.126 [2024-11-18 11:58:09.764644] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:44.126 [2024-11-18 11:58:09.764662] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:44.126 [2024-11-18 11:58:09.764681] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:44.126 [2024-11-18 11:58:09.764699] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:44.126 [2024-11-18 11:58:09.764717] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:44.126 [2024-11-18 11:58:09.764712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.126 [2024-11-18 11:58:09.764735] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:44.126 [2024-11-18 11:58:09.764750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:29:44.126 [2024-11-18 11:58:09.764753] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:44.126 [2024-11-18 11:58:09.764773] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:44.126 [2024-11-18 11:58:09.764775] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:29:44.126 [2024-11-18 11:58:09.764791] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:44.126 [2024-11-18 11:58:09.764809] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:44.126 [2024-11-18 11:58:09.764827] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:44.126 [2024-11-18 11:58:09.764844] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:44.126 [2024-11-18 11:58:09.764862] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:44.127 [2024-11-18 11:58:09.764898] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:44.127 [2024-11-18 11:58:09.764906] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:44.127 [2024-11-18 11:58:09.765035] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:29:44.127 [2024-11-18 11:58:09.765214] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:29:44.127 [2024-11-18 11:58:09.765243] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:29:44.127 [2024-11-18 11:58:09.765263] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:29:44.127 [2024-11-18 11:58:09.765283] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
00:29:44.127 [2024-11-18 11:58:09.765472] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:29:44.127 [2024-11-18 11:58:09.765516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:44.127 [... same ASYNC EVENT REQUEST / ABORTED - SQ DELETION pair repeats for qid:0 cid:1-3 ...]
00:29:44.127 [2024-11-18 11:58:09.765664] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f6100 is same with the state(6) to be set
00:29:44.127 [... same abort sequence (qid:0 cid:0-3) and recv-state *ERROR* repeat for tqpair=0x6150001f7500 ...]
00:29:44.127 [2024-11-18 11:58:09.765962] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f4300 (9): Bad file descriptor
00:29:44.127 [2024-11-18 11:58:09.766010] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f4d00 (9): Bad file descriptor
00:29:44.127 [2024-11-18 11:58:09.766055] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f5700 (9): Bad file descriptor
00:29:44.127 [... same abort sequence (qid:0 cid:0-3) and recv-state *ERROR* repeat for tqpair=0x6150001f6b00 ...]
00:29:44.127 [2024-11-18 11:58:09.766615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:44.127 [2024-11-18 11:58:09.766648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:44.127 [... same READ / ABORTED - SQ DELETION pair repeats for sqid:1 cid:1-63, lba:24704-32640 in steps of 128 ...]
00:29:44.129 [2024-11-18 11:58:09.769646] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001fa480 is same with the state(6) to be set
00:29:44.129 [2024-11-18 11:58:09.771324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:44.129 [2024-11-18 11:58:09.771370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:44.130 [... same READ / ABORTED - SQ DELETION pair repeats for sqid:1 cid:1-38, lba:16512-21248 in steps of 128 ...]
00:29:44.130 [2024-11-18 11:58:09.773190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 
nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.130 [2024-11-18 11:58:09.773211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.130 [2024-11-18 11:58:09.773235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.130 [2024-11-18 11:58:09.773257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.130 [2024-11-18 11:58:09.773281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.130 [2024-11-18 11:58:09.773302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.130 [2024-11-18 11:58:09.773325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.130 [2024-11-18 11:58:09.773347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.130 [2024-11-18 11:58:09.773371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.130 [2024-11-18 11:58:09.773393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.130 [2024-11-18 11:58:09.773417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.130 [2024-11-18 11:58:09.773438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:44.131 [2024-11-18 11:58:09.773462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.131 [2024-11-18 11:58:09.773483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.131 [2024-11-18 11:58:09.773519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.131 [2024-11-18 11:58:09.773547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.131 [2024-11-18 11:58:09.773572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.131 [2024-11-18 11:58:09.773593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.131 [2024-11-18 11:58:09.773618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.131 [2024-11-18 11:58:09.773639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.131 [2024-11-18 11:58:09.773663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.131 [2024-11-18 11:58:09.773684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.131 [2024-11-18 11:58:09.773708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.131 [2024-11-18 11:58:09.773730] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.131 [2024-11-18 11:58:09.773753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.131 [2024-11-18 11:58:09.773774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.131 [2024-11-18 11:58:09.773799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.131 [2024-11-18 11:58:09.773820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.131 [2024-11-18 11:58:09.773844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.131 [2024-11-18 11:58:09.773865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.131 [2024-11-18 11:58:09.773889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.131 [2024-11-18 11:58:09.773911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.131 [2024-11-18 11:58:09.773935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.131 [2024-11-18 11:58:09.773956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.131 [2024-11-18 11:58:09.773980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.131 [2024-11-18 11:58:09.774001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.131 [2024-11-18 11:58:09.774026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.131 [2024-11-18 11:58:09.774047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.131 [2024-11-18 11:58:09.774070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.131 [2024-11-18 11:58:09.774092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.131 [2024-11-18 11:58:09.774120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.131 [2024-11-18 11:58:09.774142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.131 [2024-11-18 11:58:09.774166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.131 [2024-11-18 11:58:09.774187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.131 [2024-11-18 11:58:09.774212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.131 [2024-11-18 11:58:09.774233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:44.131 [2024-11-18 11:58:09.774257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.131 [2024-11-18 11:58:09.774279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.131 [2024-11-18 11:58:09.774302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.131 [2024-11-18 11:58:09.774323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.131 [2024-11-18 11:58:09.774345] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001fb600 is same with the state(6) to be set 00:29:44.131 [2024-11-18 11:58:09.775935] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:29:44.131 [2024-11-18 11:58:09.775982] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:29:44.131 [2024-11-18 11:58:09.776148] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f6100 (9): Bad file descriptor 00:29:44.131 [2024-11-18 11:58:09.776214] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:29:44.131 [2024-11-18 11:58:09.776295] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f6b00 (9): Bad file descriptor 00:29:44.131 [2024-11-18 11:58:09.776874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.131 [2024-11-18 11:58:09.776919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f3900 with addr=10.0.0.2, port=4420 00:29:44.131 [2024-11-18 
11:58:09.776945] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f3900 is same with the state(6) to be set
00:29:44.131 [2024-11-18 11:58:09.777063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:44.131 [2024-11-18 11:58:09.777098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7f00 with addr=10.0.0.2, port=4420
00:29:44.131 [2024-11-18 11:58:09.777121] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7f00 is same with the state(6) to be set
00:29:44.131 [2024-11-18 11:58:09.777784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:44.131 [2024-11-18 11:58:09.777817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION pairs repeat for cid:1-63 (lba:24704-32640, len:128) through 11:58:09.780869 ...]
00:29:44.133 [2024-11-18 11:58:09.780891] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001fa700 is same with the state(6) to be set
00:29:44.133 [2024-11-18 11:58:09.782561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:44.133 [2024-11-18 11:58:09.782601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:44.133 [2024-11-18 11:58:09.782637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:44.133 [2024-11-18
11:58:09.782661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.133 [2024-11-18 11:58:09.782686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.133 [2024-11-18 11:58:09.782708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.133 [2024-11-18 11:58:09.782733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.133 [2024-11-18 11:58:09.782755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.133 [2024-11-18 11:58:09.782779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.133 [2024-11-18 11:58:09.782801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.133 [2024-11-18 11:58:09.782826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.133 [2024-11-18 11:58:09.782865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.133 [2024-11-18 11:58:09.782892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.133 [2024-11-18 11:58:09.782913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.133 [2024-11-18 11:58:09.782938] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.133 [2024-11-18 11:58:09.782960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.133 [2024-11-18 11:58:09.782984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.133 [2024-11-18 11:58:09.783005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.133 [2024-11-18 11:58:09.783030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.133 [2024-11-18 11:58:09.783051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.133 [2024-11-18 11:58:09.783075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.133 [2024-11-18 11:58:09.783097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.133 [2024-11-18 11:58:09.783121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.133 [2024-11-18 11:58:09.783142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.133 [2024-11-18 11:58:09.783167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.134 [2024-11-18 11:58:09.783188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.134 [2024-11-18 11:58:09.783217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.134 [2024-11-18 11:58:09.783240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.134 [2024-11-18 11:58:09.783265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.134 [2024-11-18 11:58:09.783286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.134 [2024-11-18 11:58:09.783311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.134 [2024-11-18 11:58:09.783332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.134 [2024-11-18 11:58:09.783356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.134 [2024-11-18 11:58:09.783377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.134 [2024-11-18 11:58:09.783403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.134 [2024-11-18 11:58:09.783424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.134 [2024-11-18 11:58:09.783448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.134 
[2024-11-18 11:58:09.783470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.134 [2024-11-18 11:58:09.783501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.134 [2024-11-18 11:58:09.783525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.134 [2024-11-18 11:58:09.783550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.134 [2024-11-18 11:58:09.783571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.134 [2024-11-18 11:58:09.783595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.134 [2024-11-18 11:58:09.783617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.134 [2024-11-18 11:58:09.783642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.134 [2024-11-18 11:58:09.783662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.134 [2024-11-18 11:58:09.783688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.134 [2024-11-18 11:58:09.783709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.134 [2024-11-18 11:58:09.783733] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.134 [2024-11-18 11:58:09.783754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.134 [2024-11-18 11:58:09.783778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.134 [2024-11-18 11:58:09.783803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.134 [2024-11-18 11:58:09.783830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.134 [2024-11-18 11:58:09.783851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.134 [2024-11-18 11:58:09.783876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.134 [2024-11-18 11:58:09.783896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.134 [2024-11-18 11:58:09.783921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.134 [2024-11-18 11:58:09.783942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.134 [2024-11-18 11:58:09.783965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.134 [2024-11-18 11:58:09.783986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.134 [2024-11-18 11:58:09.784010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.134 [2024-11-18 11:58:09.784031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.134 [2024-11-18 11:58:09.784056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.134 [2024-11-18 11:58:09.784077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.134 [2024-11-18 11:58:09.784101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.134 [2024-11-18 11:58:09.784121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.134 [2024-11-18 11:58:09.784146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.134 [2024-11-18 11:58:09.784167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.134 [2024-11-18 11:58:09.784192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.134 [2024-11-18 11:58:09.784212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.134 [2024-11-18 11:58:09.784237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:44.134 [2024-11-18 11:58:09.784257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.134 [2024-11-18 11:58:09.784282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.134 [2024-11-18 11:58:09.784303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.134 [2024-11-18 11:58:09.784328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.134 [2024-11-18 11:58:09.784348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.134 [2024-11-18 11:58:09.784377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.134 [2024-11-18 11:58:09.784399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.134 [2024-11-18 11:58:09.784424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.134 [2024-11-18 11:58:09.784445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.134 [2024-11-18 11:58:09.784469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.134 [2024-11-18 11:58:09.784496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.134 [2024-11-18 11:58:09.784523] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.134 [2024-11-18 11:58:09.784545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.134 [2024-11-18 11:58:09.784571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.134 [2024-11-18 11:58:09.784591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.135 [2024-11-18 11:58:09.784615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.135 [2024-11-18 11:58:09.784637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.135 [2024-11-18 11:58:09.784661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.135 [2024-11-18 11:58:09.784682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.135 [2024-11-18 11:58:09.784705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.135 [2024-11-18 11:58:09.784726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.135 [2024-11-18 11:58:09.784750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.135 [2024-11-18 11:58:09.784771] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.135 [2024-11-18 11:58:09.784796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.135 [2024-11-18 11:58:09.784817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.135 [2024-11-18 11:58:09.784841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.135 [2024-11-18 11:58:09.784862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.135 [2024-11-18 11:58:09.784887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.135 [2024-11-18 11:58:09.784907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.135 [2024-11-18 11:58:09.784932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.135 [2024-11-18 11:58:09.784957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.135 [2024-11-18 11:58:09.784983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.135 [2024-11-18 11:58:09.785003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.135 [2024-11-18 11:58:09.785028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.135 [2024-11-18 11:58:09.785049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.135 [2024-11-18 11:58:09.785074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.135 [2024-11-18 11:58:09.785095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.135 [2024-11-18 11:58:09.785119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.135 [2024-11-18 11:58:09.785140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.135 [2024-11-18 11:58:09.785165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.135 [2024-11-18 11:58:09.785185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.135 [2024-11-18 11:58:09.785209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.135 [2024-11-18 11:58:09.785230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.135 [2024-11-18 11:58:09.785256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.135 [2024-11-18 11:58:09.785277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.135 [2024-11-18 
11:58:09.785302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.135 [2024-11-18 11:58:09.785323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.135 [2024-11-18 11:58:09.785347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.135 [2024-11-18 11:58:09.785368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.135 [2024-11-18 11:58:09.785393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.135 [2024-11-18 11:58:09.785414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.135 [2024-11-18 11:58:09.785438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.135 [2024-11-18 11:58:09.785459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.135 [2024-11-18 11:58:09.785483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.135 [2024-11-18 11:58:09.785512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.135 [2024-11-18 11:58:09.785542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.135 [2024-11-18 11:58:09.785563] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.135 [2024-11-18 11:58:09.785585] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001fa980 is same with the state(6) to be set 00:29:44.135 [2024-11-18 11:58:09.787181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.135 [2024-11-18 11:58:09.787212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.135 [2024-11-18 11:58:09.787260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.135 [2024-11-18 11:58:09.787282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.135 [2024-11-18 11:58:09.787308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.135 [2024-11-18 11:58:09.787329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.135 [2024-11-18 11:58:09.787354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.135 [2024-11-18 11:58:09.787375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.135 [2024-11-18 11:58:09.787400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.135 [2024-11-18 11:58:09.787422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.135 [2024-11-18 11:58:09.787469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.135 [2024-11-18 11:58:09.787499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.135 [2024-11-18 11:58:09.787526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.135 [2024-11-18 11:58:09.787548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.135 [2024-11-18 11:58:09.787572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.135 [2024-11-18 11:58:09.787594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.135 [2024-11-18 11:58:09.787619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.135 [2024-11-18 11:58:09.787640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.135 [2024-11-18 11:58:09.787664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.135 [2024-11-18 11:58:09.787685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.135 [2024-11-18 11:58:09.787710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:44.135 [2024-11-18 11:58:09.787731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.135 [2024-11-18 11:58:09.787764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.135 [2024-11-18 11:58:09.787786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.135 [2024-11-18 11:58:09.787810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.136 [2024-11-18 11:58:09.787832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.136 [2024-11-18 11:58:09.787856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.136 [2024-11-18 11:58:09.787878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.136 [2024-11-18 11:58:09.787902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.136 [2024-11-18 11:58:09.787924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.136 [2024-11-18 11:58:09.787949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.136 [2024-11-18 11:58:09.787970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.136 [2024-11-18 11:58:09.787995] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.136 [2024-11-18 11:58:09.788016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.136 [2024-11-18 11:58:09.788041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.136 [2024-11-18 11:58:09.788062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.136 [2024-11-18 11:58:09.788086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.136 [2024-11-18 11:58:09.788107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.136 [2024-11-18 11:58:09.788132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.136 [2024-11-18 11:58:09.788152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.136 [2024-11-18 11:58:09.788177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.136 [2024-11-18 11:58:09.788198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.136 [2024-11-18 11:58:09.788221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.136 [2024-11-18 11:58:09.788242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.136 [2024-11-18 11:58:09.788266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.136 [2024-11-18 11:58:09.788288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.136 [2024-11-18 11:58:09.788312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.136 [2024-11-18 11:58:09.788337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.136 [2024-11-18 11:58:09.788363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.136 [2024-11-18 11:58:09.788385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.136 [2024-11-18 11:58:09.788409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.136 [2024-11-18 11:58:09.788430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.136 [2024-11-18 11:58:09.788454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.136 [2024-11-18 11:58:09.788475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.136 [2024-11-18 11:58:09.788507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:44.136 [2024-11-18 11:58:09.788531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.136 [2024-11-18 11:58:09.788555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.136 [2024-11-18 11:58:09.788576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.136 [2024-11-18 11:58:09.788600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.136 [2024-11-18 11:58:09.788621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.136 [2024-11-18 11:58:09.788646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.136 [2024-11-18 11:58:09.788668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.136 [2024-11-18 11:58:09.788693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.136 [2024-11-18 11:58:09.788714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.136 [2024-11-18 11:58:09.788737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.136 [2024-11-18 11:58:09.788758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.136 [2024-11-18 11:58:09.788783] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.136 [2024-11-18 11:58:09.788804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.136 [2024-11-18 11:58:09.788830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.136 [2024-11-18 11:58:09.788851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.136 [2024-11-18 11:58:09.788875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.136 [2024-11-18 11:58:09.788896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.136 [2024-11-18 11:58:09.788924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.136 [2024-11-18 11:58:09.788946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.136 [2024-11-18 11:58:09.788970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.136 [2024-11-18 11:58:09.788991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.136 [2024-11-18 11:58:09.789017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.136 [2024-11-18 11:58:09.789038] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.136 [2024-11-18 11:58:09.789062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.136 [2024-11-18 11:58:09.789083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.136 [2024-11-18 11:58:09.789107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.136 [2024-11-18 11:58:09.789128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.136 [2024-11-18 11:58:09.789155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.136 [2024-11-18 11:58:09.789176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.136 [2024-11-18 11:58:09.789201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.136 [2024-11-18 11:58:09.789222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.136 [2024-11-18 11:58:09.789246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.136 [2024-11-18 11:58:09.789267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.136 [2024-11-18 11:58:09.789291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.136 [2024-11-18 11:58:09.789312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.136 [2024-11-18 11:58:09.789336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.136 [2024-11-18 11:58:09.789357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.137 [2024-11-18 11:58:09.789382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.137 [2024-11-18 11:58:09.789403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.137 [2024-11-18 11:58:09.789427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.137 [2024-11-18 11:58:09.789448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.137 [2024-11-18 11:58:09.789473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.137 [2024-11-18 11:58:09.789509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.137 [2024-11-18 11:58:09.789536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.137 [2024-11-18 11:58:09.789558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.137 [2024-11-18 
11:58:09.789581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.137 [2024-11-18 11:58:09.789603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.137 [2024-11-18 11:58:09.789627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.137 [2024-11-18 11:58:09.789648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.137 [2024-11-18 11:58:09.789672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.137 [2024-11-18 11:58:09.789694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.137 [2024-11-18 11:58:09.789719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.137 [2024-11-18 11:58:09.789739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.137 [2024-11-18 11:58:09.789763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.137 [2024-11-18 11:58:09.789784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.137 [2024-11-18 11:58:09.789808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.137 [2024-11-18 11:58:09.789830] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.137 [2024-11-18 11:58:09.789855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.137 [2024-11-18 11:58:09.789876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.137 [2024-11-18 11:58:09.789900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.137 [2024-11-18 11:58:09.789921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.137 [2024-11-18 11:58:09.789946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.137 [2024-11-18 11:58:09.789967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.137 [2024-11-18 11:58:09.789992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.137 [2024-11-18 11:58:09.790013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.137 [2024-11-18 11:58:09.790036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.137 [2024-11-18 11:58:09.790057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.137 [2024-11-18 11:58:09.790087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 
nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.137 [2024-11-18 11:58:09.790109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.137 [2024-11-18 11:58:09.790133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.137 [2024-11-18 11:58:09.790154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.137 [2024-11-18 11:58:09.790178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.137 [2024-11-18 11:58:09.790200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.137 [2024-11-18 11:58:09.790223] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001fac00 is same with the state(6) to be set 00:29:44.137 [2024-11-18 11:58:09.792444] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:29:44.137 [2024-11-18 11:58:09.792521] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:29:44.137 [2024-11-18 11:58:09.792554] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:29:44.137 [2024-11-18 11:58:09.792580] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:29:44.137 [2024-11-18 11:58:09.792605] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:29:44.137 [2024-11-18 11:58:09.792780] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f3900 (9): Bad file 
descriptor 00:29:44.137 [2024-11-18 11:58:09.792821] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7f00 (9): Bad file descriptor 00:29:44.137 [2024-11-18 11:58:09.792964] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:29:44.137 [2024-11-18 11:58:09.793007] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 00:29:44.137 [2024-11-18 11:58:09.793352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.137 [2024-11-18 11:58:09.793384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.137 [2024-11-18 11:58:09.793429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.137 [2024-11-18 11:58:09.793456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.137 [2024-11-18 11:58:09.793498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.137 [2024-11-18 11:58:09.793525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.137 [2024-11-18 11:58:09.793559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.137 [2024-11-18 11:58:09.793584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.137 [2024-11-18 11:58:09.793618] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.137 [2024-11-18 11:58:09.793642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.137 [2024-11-18 11:58:09.793681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.137 [2024-11-18 11:58:09.793707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.137 [2024-11-18 11:58:09.793740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.137 [2024-11-18 11:58:09.793765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.137 [2024-11-18 11:58:09.793798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.137 [2024-11-18 11:58:09.793823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.137 [2024-11-18 11:58:09.793856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.137 [2024-11-18 11:58:09.793881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.137 [2024-11-18 11:58:09.793914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.137 [2024-11-18 11:58:09.793939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.137 [2024-11-18 11:58:09.793972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.137 [2024-11-18 11:58:09.793997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.137 [2024-11-18 11:58:09.794030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.137 [2024-11-18 11:58:09.794055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.137 [2024-11-18 11:58:09.794088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.137 [2024-11-18 11:58:09.794114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.137 [2024-11-18 11:58:09.794148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.138 [2024-11-18 11:58:09.794173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.138 [2024-11-18 11:58:09.794206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.138 [2024-11-18 11:58:09.794232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.138 [2024-11-18 11:58:09.794265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:29:44.138 [2024-11-18 11:58:09.794289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.138 [2024-11-18 11:58:09.794324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.138 [2024-11-18 11:58:09.794348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.138 [2024-11-18 11:58:09.794381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.138 [2024-11-18 11:58:09.794405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.138 [2024-11-18 11:58:09.794442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.138 [2024-11-18 11:58:09.794469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.138 [2024-11-18 11:58:09.794509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.138 [2024-11-18 11:58:09.794536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.138 [2024-11-18 11:58:09.794570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.138 [2024-11-18 11:58:09.794599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.138 [2024-11-18 11:58:09.794633] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.138 [2024-11-18 11:58:09.794658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.138 [2024-11-18 11:58:09.794691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.138 [2024-11-18 11:58:09.794715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.138 [2024-11-18 11:58:09.794748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.138 [2024-11-18 11:58:09.794775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.138 [2024-11-18 11:58:09.794808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.138 [2024-11-18 11:58:09.794833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.138 [2024-11-18 11:58:09.794865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.138 [2024-11-18 11:58:09.794890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.138 [2024-11-18 11:58:09.794923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.138 [2024-11-18 11:58:09.794949] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.138 [2024-11-18 11:58:09.794982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.138 [2024-11-18 11:58:09.795006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.138 [2024-11-18 11:58:09.795040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.138 [2024-11-18 11:58:09.795064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.138 [2024-11-18 11:58:09.795098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.138 [2024-11-18 11:58:09.795123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.138 [2024-11-18 11:58:09.795156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.138 [2024-11-18 11:58:09.795185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.138 [2024-11-18 11:58:09.795218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.138 [2024-11-18 11:58:09.795244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.138 [2024-11-18 11:58:09.795282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.138 [2024-11-18 11:58:09.795308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.138 [2024-11-18 11:58:09.795342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.138 [2024-11-18 11:58:09.795367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.138 [2024-11-18 11:58:09.795399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.138 [2024-11-18 11:58:09.795425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.138 [2024-11-18 11:58:09.795457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.138 [2024-11-18 11:58:09.795482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.138 [2024-11-18 11:58:09.795522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.138 [2024-11-18 11:58:09.795548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.138 [2024-11-18 11:58:09.795580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.138 [2024-11-18 11:58:09.795606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.138 [2024-11-18 
11:58:09.795640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.138 [2024-11-18 11:58:09.795664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.138 [2024-11-18 11:58:09.795698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.138 [2024-11-18 11:58:09.795724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.138 [2024-11-18 11:58:09.795757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.138 [2024-11-18 11:58:09.795781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.138 [2024-11-18 11:58:09.795814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.138 [2024-11-18 11:58:09.795841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.138 [2024-11-18 11:58:09.795873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.138 [2024-11-18 11:58:09.795898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.138 [2024-11-18 11:58:09.795935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.138 [2024-11-18 11:58:09.795961] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.138 [2024-11-18 11:58:09.795995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.138 [2024-11-18 11:58:09.796022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.138 [2024-11-18 11:58:09.796055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.138 [2024-11-18 11:58:09.796080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.138 [2024-11-18 11:58:09.796112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.138 [2024-11-18 11:58:09.796138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.138 [2024-11-18 11:58:09.796172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.138 [2024-11-18 11:58:09.796197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.138 [2024-11-18 11:58:09.796234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.139 [2024-11-18 11:58:09.796260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.139 [2024-11-18 11:58:09.796294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 
nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.139 [2024-11-18 11:58:09.796319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.139 [2024-11-18 11:58:09.796352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.139 [2024-11-18 11:58:09.796377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.139 [2024-11-18 11:58:09.796409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.139 [2024-11-18 11:58:09.796435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.139 [2024-11-18 11:58:09.796469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.139 [2024-11-18 11:58:09.796501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.139 [2024-11-18 11:58:09.796536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.139 [2024-11-18 11:58:09.796561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.139 [2024-11-18 11:58:09.796595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.139 [2024-11-18 11:58:09.796620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:44.139 [2024-11-18 11:58:09.796653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.139 [2024-11-18 11:58:09.796684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.139 [2024-11-18 11:58:09.796718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.139 [2024-11-18 11:58:09.796743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.139 [2024-11-18 11:58:09.796776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.139 [2024-11-18 11:58:09.796802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.139 [2024-11-18 11:58:09.796836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.139 [2024-11-18 11:58:09.796863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.139 [2024-11-18 11:58:09.796895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.139 [2024-11-18 11:58:09.796920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.139 [2024-11-18 11:58:09.796954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.139 [2024-11-18 11:58:09.796980] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:44.139 [2024-11-18 11:58:09.797013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:44.139 [2024-11-18 11:58:09.797037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:44.139 [2024-11-18 11:58:09.797069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:44.139 [2024-11-18 11:58:09.797094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:44.139 [2024-11-18 11:58:09.797127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:44.139 [2024-11-18 11:58:09.797152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:44.139 [2024-11-18 11:58:09.798038] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001fb380 is same with the state(6) to be set
00:29:44.139 [2024-11-18 11:58:09.798346] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress.
00:29:44.139 [2024-11-18 11:58:09.798705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:44.139 [2024-11-18 11:58:09.798747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:29:44.139 [2024-11-18 11:58:09.798771] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2f00 is same with the state(6) to be set
00:29:44.139 [2024-11-18 11:58:09.798905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:44.139 [2024-11-18 11:58:09.798939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:29:44.139 [2024-11-18 11:58:09.798962] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:29:44.139 [2024-11-18 11:58:09.799098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:44.139 [2024-11-18 11:58:09.799136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f4300 with addr=10.0.0.2, port=4420
00:29:44.139 [2024-11-18 11:58:09.799159] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f4300 is same with the state(6) to be set
00:29:44.139 [2024-11-18 11:58:09.799295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:44.139 [2024-11-18 11:58:09.799328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f4d00 with addr=10.0.0.2, port=4420
00:29:44.139 [2024-11-18 11:58:09.799351] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f4d00 is same with the state(6) to be set
00:29:44.139 [2024-11-18 11:58:09.799454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:44.139 [2024-11-18 11:58:09.799487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f5700 with addr=10.0.0.2, port=4420
00:29:44.139 [2024-11-18 11:58:09.799518] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f5700 is same with the state(6) to be set
00:29:44.139 [2024-11-18 11:58:09.799541] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state
00:29:44.139 [2024-11-18 11:58:09.799562] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed
00:29:44.139 [2024-11-18 11:58:09.799583] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
00:29:44.139 [2024-11-18 11:58:09.799606] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed.
00:29:44.139 [2024-11-18 11:58:09.799628] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state
00:29:44.139 [2024-11-18 11:58:09.799646] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed
00:29:44.139 [2024-11-18 11:58:09.799664] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state.
00:29:44.139 [2024-11-18 11:58:09.799682] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed.
00:29:44.139 [2024-11-18 11:58:09.801297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.139 [2024-11-18 11:58:09.801345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.139 [2024-11-18 11:58:09.801380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.139 [2024-11-18 11:58:09.801403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.139 [2024-11-18 11:58:09.801429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.140 [2024-11-18 11:58:09.801450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.140 [2024-11-18 11:58:09.801475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.140 [2024-11-18 11:58:09.801506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.140 [2024-11-18 11:58:09.801533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.140 [2024-11-18 11:58:09.801555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.140 [2024-11-18 11:58:09.801580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.140 [2024-11-18 11:58:09.801606] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.140 [2024-11-18 11:58:09.801632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.140 [2024-11-18 11:58:09.801653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.140 [2024-11-18 11:58:09.801678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.140 [2024-11-18 11:58:09.801699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.140 [2024-11-18 11:58:09.801723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.140 [2024-11-18 11:58:09.801744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.140 [2024-11-18 11:58:09.801768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.140 [2024-11-18 11:58:09.801791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.140 [2024-11-18 11:58:09.801815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.140 [2024-11-18 11:58:09.801837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.140 [2024-11-18 11:58:09.801861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 
nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.140 [2024-11-18 11:58:09.801883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.140 [2024-11-18 11:58:09.801907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.140 [2024-11-18 11:58:09.801929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.140 [2024-11-18 11:58:09.801953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.140 [2024-11-18 11:58:09.801974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.140 [2024-11-18 11:58:09.801997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.140 [2024-11-18 11:58:09.802019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.140 [2024-11-18 11:58:09.802043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.140 [2024-11-18 11:58:09.802065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.140 [2024-11-18 11:58:09.802088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.140 [2024-11-18 11:58:09.802109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:44.140 [2024-11-18 11:58:09.802133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.140 [2024-11-18 11:58:09.802155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.140 [2024-11-18 11:58:09.802183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.140 [2024-11-18 11:58:09.802206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.140 [2024-11-18 11:58:09.802230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.140 [2024-11-18 11:58:09.802251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.140 [2024-11-18 11:58:09.802275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.140 [2024-11-18 11:58:09.802297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.140 [2024-11-18 11:58:09.802321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.140 [2024-11-18 11:58:09.802342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.140 [2024-11-18 11:58:09.802366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.140 [2024-11-18 11:58:09.802387] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.140 [2024-11-18 11:58:09.802411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.140 [2024-11-18 11:58:09.802432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.140 [2024-11-18 11:58:09.802456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.140 [2024-11-18 11:58:09.802477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.140 [2024-11-18 11:58:09.802508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.140 [2024-11-18 11:58:09.802533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.140 [2024-11-18 11:58:09.802557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.140 [2024-11-18 11:58:09.802579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.140 [2024-11-18 11:58:09.802602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.140 [2024-11-18 11:58:09.802623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.140 [2024-11-18 11:58:09.802647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.140 [2024-11-18 11:58:09.802669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.140 [2024-11-18 11:58:09.802693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.140 [2024-11-18 11:58:09.802714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.140 [2024-11-18 11:58:09.802738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.140 [2024-11-18 11:58:09.802763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.140 [2024-11-18 11:58:09.802789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.140 [2024-11-18 11:58:09.802811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.140 [2024-11-18 11:58:09.802834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.140 [2024-11-18 11:58:09.802855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.140 [2024-11-18 11:58:09.802879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.140 [2024-11-18 11:58:09.802901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:44.140 [2024-11-18 11:58:09.802925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.140 [2024-11-18 11:58:09.802946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.140 [2024-11-18 11:58:09.802969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.140 [2024-11-18 11:58:09.802991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.140 [2024-11-18 11:58:09.803015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.140 [2024-11-18 11:58:09.803037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.140 [2024-11-18 11:58:09.803061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.140 [2024-11-18 11:58:09.803082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.140 [2024-11-18 11:58:09.803107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.140 [2024-11-18 11:58:09.803128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.140 [2024-11-18 11:58:09.803153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.141 [2024-11-18 
11:58:09.803175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.141 [2024-11-18 11:58:09.803199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.141 [2024-11-18 11:58:09.803220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.141 [2024-11-18 11:58:09.803245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.141 [2024-11-18 11:58:09.803266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.141 [2024-11-18 11:58:09.803291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.141 [2024-11-18 11:58:09.803312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.141 [2024-11-18 11:58:09.803341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.141 [2024-11-18 11:58:09.803363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.141 [2024-11-18 11:58:09.803387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.141 [2024-11-18 11:58:09.803408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.141 [2024-11-18 11:58:09.803433] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.141 [2024-11-18 11:58:09.803453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.141 [2024-11-18 11:58:09.803478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.141 [2024-11-18 11:58:09.803507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.141 [2024-11-18 11:58:09.803532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.141 [2024-11-18 11:58:09.803554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.141 [2024-11-18 11:58:09.803578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.141 [2024-11-18 11:58:09.803599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.141 [2024-11-18 11:58:09.803623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.141 [2024-11-18 11:58:09.803644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.141 [2024-11-18 11:58:09.803667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.141 [2024-11-18 11:58:09.803689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.141 [2024-11-18 11:58:09.803712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.141 [2024-11-18 11:58:09.803747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.141 [2024-11-18 11:58:09.803774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.141 [2024-11-18 11:58:09.803795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.141 [2024-11-18 11:58:09.803819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.141 [2024-11-18 11:58:09.803840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.141 [2024-11-18 11:58:09.803864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.141 [2024-11-18 11:58:09.803885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.141 [2024-11-18 11:58:09.803909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.141 [2024-11-18 11:58:09.803934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.141 [2024-11-18 11:58:09.803960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.141 
[2024-11-18 11:58:09.803981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.141 [2024-11-18 11:58:09.804005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.141 [2024-11-18 11:58:09.804026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.141 [2024-11-18 11:58:09.804050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.141 [2024-11-18 11:58:09.804071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.141 [2024-11-18 11:58:09.804095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.141 [2024-11-18 11:58:09.804116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.141 [2024-11-18 11:58:09.804140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.141 [2024-11-18 11:58:09.804161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.141 [2024-11-18 11:58:09.804184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.141 [2024-11-18 11:58:09.804205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.141 [2024-11-18 11:58:09.804229] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:44.141 [2024-11-18 11:58:09.804250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:44.141 [2024-11-18 11:58:09.804273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:44.141 [2024-11-18 11:58:09.804294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:44.141 [2024-11-18 11:58:09.804315] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001fae80 is same with the state(6) to be set
00:29:44.141 [2024-11-18 11:58:09.805856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:44.141 [2024-11-18 11:58:09.805888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:44.141 [2024-11-18 11:58:09.805920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:44.141 [2024-11-18 11:58:09.805942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:44.141 [2024-11-18 11:58:09.805966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:44.141 [2024-11-18 11:58:09.805987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:44.141 [2024-11-18 11:58:09.806011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3
nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.141 [2024-11-18 11:58:09.806037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.141 [2024-11-18 11:58:09.806063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.141 [2024-11-18 11:58:09.806084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.141 [2024-11-18 11:58:09.806109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.141 [2024-11-18 11:58:09.806130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.141 [2024-11-18 11:58:09.806155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.141 [2024-11-18 11:58:09.806176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.141 [2024-11-18 11:58:09.806200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.141 [2024-11-18 11:58:09.806221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.141 [2024-11-18 11:58:09.806245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.141 [2024-11-18 11:58:09.806275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:44.141 [2024-11-18 11:58:09.806316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.141 [2024-11-18 11:58:09.806342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.141 [2024-11-18 11:58:09.806366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.141 [2024-11-18 11:58:09.806387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.141 [2024-11-18 11:58:09.806411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.141 [2024-11-18 11:58:09.806433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.142 [2024-11-18 11:58:09.806456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.142 [2024-11-18 11:58:09.806477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.142 [2024-11-18 11:58:09.806510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.142 [2024-11-18 11:58:09.806534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.142 [2024-11-18 11:58:09.806558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.142 [2024-11-18 11:58:09.806578] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.142 [2024-11-18 11:58:09.806602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.142 [2024-11-18 11:58:09.806623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.142 [2024-11-18 11:58:09.806652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.142 [2024-11-18 11:58:09.806673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.142 [2024-11-18 11:58:09.806698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.142 [2024-11-18 11:58:09.806718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.142 [2024-11-18 11:58:09.806742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.142 [2024-11-18 11:58:09.806763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.142 [2024-11-18 11:58:09.806787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.142 [2024-11-18 11:58:09.806808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.142 [2024-11-18 11:58:09.806832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.142 [2024-11-18 11:58:09.806853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.142 [2024-11-18 11:58:09.806877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.142 [2024-11-18 11:58:09.806899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.142 [2024-11-18 11:58:09.806923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.142 [2024-11-18 11:58:09.806944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.142 [2024-11-18 11:58:09.806968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.142 [2024-11-18 11:58:09.806988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.142 [2024-11-18 11:58:09.807012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.142 [2024-11-18 11:58:09.807033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.142 [2024-11-18 11:58:09.807058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.142 [2024-11-18 11:58:09.807078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:44.142 [2024-11-18 11:58:09.807103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.142 [2024-11-18 11:58:09.807124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.142 [2024-11-18 11:58:09.807148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.142 [2024-11-18 11:58:09.807169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.142 [2024-11-18 11:58:09.807193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.142 [2024-11-18 11:58:09.807218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.142 [2024-11-18 11:58:09.807243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.142 [2024-11-18 11:58:09.807265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.142 [2024-11-18 11:58:09.807289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.142 [2024-11-18 11:58:09.807310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.142 [2024-11-18 11:58:09.807334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.142 [2024-11-18 
11:58:09.807354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.142 [2024-11-18 11:58:09.807379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.142 [2024-11-18 11:58:09.807400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.142 [2024-11-18 11:58:09.807423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.142 [2024-11-18 11:58:09.807444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.142 [2024-11-18 11:58:09.807468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.142 [2024-11-18 11:58:09.807496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.142 [2024-11-18 11:58:09.807524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.142 [2024-11-18 11:58:09.807546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.142 [2024-11-18 11:58:09.807570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.142 [2024-11-18 11:58:09.807591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.142 [2024-11-18 11:58:09.807616] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.142 [2024-11-18 11:58:09.807637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.142 [2024-11-18 11:58:09.807662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.142 [2024-11-18 11:58:09.807683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.142 [2024-11-18 11:58:09.807707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.142 [2024-11-18 11:58:09.807728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.142 [2024-11-18 11:58:09.807752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.142 [2024-11-18 11:58:09.807773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.142 [2024-11-18 11:58:09.807802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.142 [2024-11-18 11:58:09.807824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.142 [2024-11-18 11:58:09.807848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.142 [2024-11-18 11:58:09.807869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.142 [2024-11-18 11:58:09.807894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.142 [2024-11-18 11:58:09.807916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.142 [2024-11-18 11:58:09.807940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.142 [2024-11-18 11:58:09.807961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.142 [2024-11-18 11:58:09.807985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.142 [2024-11-18 11:58:09.808006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.142 [2024-11-18 11:58:09.808031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.142 [2024-11-18 11:58:09.808053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.142 [2024-11-18 11:58:09.808077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.142 [2024-11-18 11:58:09.808099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.142 [2024-11-18 11:58:09.808123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.142 
[2024-11-18 11:58:09.808144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.143 [2024-11-18 11:58:09.808168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.143 [2024-11-18 11:58:09.808190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.143 [2024-11-18 11:58:09.808214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.143 [2024-11-18 11:58:09.808235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.143 [2024-11-18 11:58:09.808274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.143 [2024-11-18 11:58:09.808297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.143 [2024-11-18 11:58:09.808321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.143 [2024-11-18 11:58:09.808342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.143 [2024-11-18 11:58:09.808366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.143 [2024-11-18 11:58:09.808388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.143 [2024-11-18 11:58:09.808419] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.143 [2024-11-18 11:58:09.808442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.143 [2024-11-18 11:58:09.808465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.143 [2024-11-18 11:58:09.808487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.143 [2024-11-18 11:58:09.808529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.143 [2024-11-18 11:58:09.808551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.143 [2024-11-18 11:58:09.808576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.143 [2024-11-18 11:58:09.808597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.143 [2024-11-18 11:58:09.808622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.143 [2024-11-18 11:58:09.808643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.143 [2024-11-18 11:58:09.808666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.143 [2024-11-18 11:58:09.808687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.143 [2024-11-18 11:58:09.808712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.143 [2024-11-18 11:58:09.808733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.143 [2024-11-18 11:58:09.808757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.143 [2024-11-18 11:58:09.808778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.143 [2024-11-18 11:58:09.808803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.143 [2024-11-18 11:58:09.808825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.143 [2024-11-18 11:58:09.808850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.143 [2024-11-18 11:58:09.808870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.143 [2024-11-18 11:58:09.808892] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001fb100 is same with the state(6) to be set 00:29:44.143 [2024-11-18 11:58:09.813664] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:29:44.143 task offset: 24576 on job bdev=Nvme1n1 fails 00:29:44.143 00:29:44.143 Latency(us) 00:29:44.143 [2024-11-18T10:58:10.028Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min 
max 00:29:44.143 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:44.143 Job: Nvme1n1 ended in about 1.15 seconds with error 00:29:44.143 Verification LBA range: start 0x0 length 0x400 00:29:44.143 Nvme1n1 : 1.15 167.27 10.45 55.76 0.00 284170.90 6699.24 323116.75 00:29:44.143 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:44.143 Job: Nvme2n1 ended in about 1.16 seconds with error 00:29:44.143 Verification LBA range: start 0x0 length 0x400 00:29:44.143 Nvme2n1 : 1.16 165.80 10.36 55.27 0.00 281711.31 16699.54 313796.08 00:29:44.143 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:44.143 Job: Nvme3n1 ended in about 1.17 seconds with error 00:29:44.143 Verification LBA range: start 0x0 length 0x400 00:29:44.143 Nvme3n1 : 1.17 164.51 10.28 54.84 0.00 278982.16 21262.79 295154.73 00:29:44.143 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:44.143 Job: Nvme4n1 ended in about 1.18 seconds with error 00:29:44.143 Verification LBA range: start 0x0 length 0x400 00:29:44.143 Nvme4n1 : 1.18 162.94 10.18 54.31 0.00 276817.16 34369.99 282727.16 00:29:44.143 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:44.143 Job: Nvme5n1 ended in about 1.18 seconds with error 00:29:44.143 Verification LBA range: start 0x0 length 0x400 00:29:44.143 Nvme5n1 : 1.18 108.20 6.76 54.10 0.00 364068.66 25243.50 323116.75 00:29:44.143 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:44.143 Job: Nvme6n1 ended in about 1.19 seconds with error 00:29:44.143 Verification LBA range: start 0x0 length 0x400 00:29:44.143 Nvme6n1 : 1.19 107.78 6.74 53.89 0.00 358971.92 26796.94 357292.56 00:29:44.143 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:44.143 Job: Nvme7n1 ended in about 1.20 seconds with error 00:29:44.143 Verification LBA range: start 0x0 length 0x400 00:29:44.143 Nvme7n1 : 1.20 
159.77 9.99 53.26 0.00 267689.53 25243.50 326223.64 00:29:44.143 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:44.143 Job: Nvme8n1 ended in about 1.21 seconds with error 00:29:44.143 Verification LBA range: start 0x0 length 0x400 00:29:44.143 Nvme8n1 : 1.21 159.17 9.95 53.06 0.00 263943.02 21651.15 315349.52 00:29:44.143 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:44.143 Job: Nvme9n1 ended in about 1.19 seconds with error 00:29:44.143 Verification LBA range: start 0x0 length 0x400 00:29:44.143 Nvme9n1 : 1.19 107.18 6.70 53.59 0.00 341044.59 25243.50 361952.90 00:29:44.143 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:44.143 Job: Nvme10n1 ended in about 1.17 seconds with error 00:29:44.143 Verification LBA range: start 0x0 length 0x400 00:29:44.143 Nvme10n1 : 1.17 109.23 6.83 54.61 0.00 327539.93 25437.68 309135.74 00:29:44.143 [2024-11-18T10:58:10.028Z] =================================================================================================================== 00:29:44.143 [2024-11-18T10:58:10.028Z] Total : 1411.84 88.24 542.68 0.00 299670.32 6699.24 361952.90 00:29:44.143 [2024-11-18 11:58:09.903106] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:44.143 [2024-11-18 11:58:09.903248] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:29:44.143 [2024-11-18 11:58:09.903303] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:29:44.143 [2024-11-18 11:58:09.903428] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2f00 (9): Bad file descriptor 00:29:44.143 [2024-11-18 11:58:09.903471] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:29:44.143 [2024-11-18 11:58:09.903507] 
nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f4300 (9): Bad file descriptor 00:29:44.143 [2024-11-18 11:58:09.903538] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f4d00 (9): Bad file descriptor 00:29:44.143 [2024-11-18 11:58:09.903567] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f5700 (9): Bad file descriptor 00:29:44.143 [2024-11-18 11:58:09.903680] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 00:29:44.143 [2024-11-18 11:58:09.903718] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress. 00:29:44.143 [2024-11-18 11:58:09.903748] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 00:29:44.143 [2024-11-18 11:58:09.903777] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 00:29:44.143 [2024-11-18 11:58:09.903807] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress. 
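As a quick sanity check on the latency summary above (not part of the test run itself), the per-job IOPS and Fail/s columns can be summed and compared with the Total row; the small residual drift is rounding in the printed per-job values. The numbers below are copied from this log.

```python
# Cross-check the per-job rows of the bdevperf latency summary against its
# "Total" row. (iops, fails_per_s) values are transcribed from the log above.
jobs = {
    "Nvme1n1": (167.27, 55.76), "Nvme2n1": (165.80, 55.27),
    "Nvme3n1": (164.51, 54.84), "Nvme4n1": (162.94, 54.31),
    "Nvme5n1": (108.20, 54.10), "Nvme6n1": (107.78, 53.89),
    "Nvme7n1": (159.77, 53.26), "Nvme8n1": (159.17, 53.06),
    "Nvme9n1": (107.18, 53.59), "Nvme10n1": (109.23, 54.61),
}
total_iops = sum(iops for iops, _ in jobs.values())
total_fails = sum(fails for _, fails in jobs.values())
# Both sums agree with the Total row (1411.84 IOPS, 542.68 Fail/s) to within
# rounding of the individual rows.
print(round(total_iops, 2), round(total_fails, 2))
```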
00:29:44.144 [2024-11-18 11:58:09.905291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:44.144 [2024-11-18 11:58:09.905338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f6100 with addr=10.0.0.2, port=4420
00:29:44.144 [2024-11-18 11:58:09.905365] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f6100 is same with the state(6) to be set
00:29:44.144 (the same connect() failed / sock connection error / recv state sequence repeats for tqpair=0x6150001f6b00 and tqpair=0x6150001f7500)
00:29:44.144 [2024-11-18 11:58:09.905796] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state
00:29:44.144 [2024-11-18 11:58:09.905815] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed
00:29:44.144 [2024-11-18 11:58:09.905838] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:29:44.144 [2024-11-18 11:58:09.905861] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed.
00:29:44.144 (the same error state / reinitialization failed / in failed state / Resetting controller failed sequence repeats for cnode1, cnode4, cnode5 and cnode6)
00:29:44.144 [2024-11-18 11:58:09.906207] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress.
00:29:44.144 [2024-11-18 11:58:09.906237] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress.
00:29:44.144 [2024-11-18 11:58:09.907580] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:29:44.144 [2024-11-18 11:58:09.907619] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:29:44.144 [2024-11-18 11:58:09.907758] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f6100 (9): Bad file descriptor
00:29:44.144 [2024-11-18 11:58:09.907796] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f6b00 (9): Bad file descriptor
00:29:44.144 [2024-11-18 11:58:09.907826] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor
00:29:44.144 [2024-11-18 11:58:09.908129] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:29:44.144 [2024-11-18 11:58:09.908179] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:29:44.144 [2024-11-18 11:58:09.908205] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:29:44.144 [2024-11-18 11:58:09.908244] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:29:44.144 [2024-11-18 11:58:09.908277] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:29:44.144 [2024-11-18 11:58:09.908486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:44.144 [2024-11-18 11:58:09.908536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7f00 with addr=10.0.0.2, port=4420
00:29:44.144 [2024-11-18 11:58:09.908560] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7f00 is same with the state(6) to be set
00:29:44.144 (the same connect() failed / sock connection error / recv state sequence repeats for tqpair=0x6150001f3900)
00:29:44.144 [2024-11-18 11:58:09.908773] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state
00:29:44.144 [2024-11-18 11:58:09.908792] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed
00:29:44.144 [2024-11-18 11:58:09.908811] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state.
00:29:44.144 [2024-11-18 11:58:09.908831] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed.
00:29:44.144 (the same error state / reinitialization failed / in failed state / Resetting controller failed sequence repeats for cnode8 and cnode9)
00:29:44.144 [2024-11-18 11:58:09.909202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.144 [2024-11-18 11:58:09.909240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f5700 with addr=10.0.0.2, port=4420 00:29:44.144 [2024-11-18 11:58:09.909262] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f5700 is same with the state(6) to be set 00:29:44.144 [2024-11-18 11:58:09.909401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.144 [2024-11-18 11:58:09.909435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f4d00 with addr=10.0.0.2, port=4420 00:29:44.144 [2024-11-18 11:58:09.909458] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f4d00 is same with the state(6) to be set 00:29:44.144 [2024-11-18 11:58:09.909598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.144 [2024-11-18 11:58:09.909633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f4300 with addr=10.0.0.2, port=4420 00:29:44.144 [2024-11-18 11:58:09.909656] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f4300 is same with the state(6) to be set 00:29:44.145 [2024-11-18 11:58:09.909797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.145 [2024-11-18 11:58:09.909830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:29:44.145 [2024-11-18 11:58:09.909853] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:29:44.145 [2024-11-18 11:58:09.909985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.145 [2024-11-18 11:58:09.910018] 
nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:29:44.145 [2024-11-18 11:58:09.910040] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2f00 is same with the state(6) to be set 00:29:44.145 [2024-11-18 11:58:09.910067] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7f00 (9): Bad file descriptor 00:29:44.145 [2024-11-18 11:58:09.910097] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f3900 (9): Bad file descriptor 00:29:44.145 [2024-11-18 11:58:09.910185] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f5700 (9): Bad file descriptor 00:29:44.145 [2024-11-18 11:58:09.910220] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f4d00 (9): Bad file descriptor 00:29:44.145 [2024-11-18 11:58:09.910249] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f4300 (9): Bad file descriptor 00:29:44.145 [2024-11-18 11:58:09.910277] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:29:44.145 [2024-11-18 11:58:09.910304] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2f00 (9): Bad file descriptor 00:29:44.145 [2024-11-18 11:58:09.910333] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:29:44.145 [2024-11-18 11:58:09.910354] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:29:44.145 [2024-11-18 11:58:09.910373] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 
00:29:44.145 [2024-11-18 11:58:09.910392] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:29:44.145 [2024-11-18 11:58:09.910412] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:29:44.145 [2024-11-18 11:58:09.910429] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:29:44.145 [2024-11-18 11:58:09.910462] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:29:44.145 [2024-11-18 11:58:09.910483] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:29:44.145 [2024-11-18 11:58:09.910558] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:29:44.145 [2024-11-18 11:58:09.910583] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:29:44.145 [2024-11-18 11:58:09.910602] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:29:44.145 [2024-11-18 11:58:09.910620] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:29:44.145 [2024-11-18 11:58:09.910641] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:29:44.145 [2024-11-18 11:58:09.910659] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:29:44.145 [2024-11-18 11:58:09.910677] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 
00:29:44.145 [2024-11-18 11:58:09.910694] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:29:44.145 [2024-11-18 11:58:09.910713] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:29:44.145 [2024-11-18 11:58:09.910731] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:29:44.145 [2024-11-18 11:58:09.910748] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:29:44.145 [2024-11-18 11:58:09.910767] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:29:44.145 [2024-11-18 11:58:09.910786] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:29:44.145 [2024-11-18 11:58:09.910803] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:29:44.145 [2024-11-18 11:58:09.910821] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:29:44.145 [2024-11-18 11:58:09.910839] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:29:44.145 [2024-11-18 11:58:09.910858] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:29:44.145 [2024-11-18 11:58:09.910875] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:29:44.145 [2024-11-18 11:58:09.910894] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 
00:29:44.145 [2024-11-18 11:58:09.910911] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:29:46.770 11:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:29:47.709 11:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 3057308 00:29:47.709 11:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:29:47.709 11:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3057308 00:29:47.709 11:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:29:47.709 11:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:47.709 11:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:29:47.709 11:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:47.709 11:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 3057308 00:29:47.709 11:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:29:47.709 11:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:47.709 11:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:29:47.709 11:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:29:47.709 11:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@672 -- # es=1 00:29:47.709 11:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:47.709 11:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:29:47.709 11:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:29:47.709 11:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:47.709 11:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:47.709 11:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:29:47.709 11:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:47.709 11:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:29:47.709 11:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:47.709 11:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:29:47.709 11:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:47.709 11:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:47.709 rmmod nvme_tcp 00:29:47.709 rmmod nvme_fabrics 00:29:47.709 rmmod nvme_keyring 00:29:47.709 11:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:47.709 11:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 
-- nvmf/common.sh@128 -- # set -e 00:29:47.709 11:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:29:47.709 11:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 3056996 ']' 00:29:47.709 11:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 3056996 00:29:47.709 11:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 3056996 ']' 00:29:47.709 11:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 3056996 00:29:47.709 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3056996) - No such process 00:29:47.709 11:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 3056996 is not found' 00:29:47.709 Process with pid 3056996 is not found 00:29:47.709 11:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:47.709 11:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:47.709 11:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:47.709 11:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:29:47.709 11:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:29:47.709 11:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:47.709 11:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:29:47.709 11:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 
-- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:47.709 11:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:47.709 11:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:47.709 11:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:47.709 11:58:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:50.254 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:50.254 00:29:50.254 real 0m11.576s 00:29:50.254 user 0m34.015s 00:29:50.254 sys 0m2.037s 00:29:50.254 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:50.254 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:50.254 ************************************ 00:29:50.254 END TEST nvmf_shutdown_tc3 00:29:50.254 ************************************ 00:29:50.254 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:29:50.254 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:29:50.254 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:29:50.254 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:50.254 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:50.254 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:50.254 
************************************ 00:29:50.254 START TEST nvmf_shutdown_tc4 00:29:50.254 ************************************ 00:29:50.254 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:29:50.254 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:29:50.254 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:29:50.254 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:50.254 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:50.254 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:50.254 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:50.254 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:50.254 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:50.254 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:50.254 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:50.254 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:50.254 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:50.254 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:50.254 
11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:50.254 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:50.254 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:50.254 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:50.254 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:50.254 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:50.254 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:50.254 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:50.254 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:29:50.254 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:50.254 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:29:50.254 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:29:50.254 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:29:50.254 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:29:50.254 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:29:50.254 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:50.254 
11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:50.254 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:50.254 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:50.254 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:50.254 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:50.254 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:50.254 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:50.254 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:50.254 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:50.254 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:50.254 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:50.254 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:50.254 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:50.254 11:58:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:50.254 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:50.254 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:50.254 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:50.254 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:50.254 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:50.255 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:50.255 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:50.255 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:50.255 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:50.255 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:50.255 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:50.255 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:50.255 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:50.255 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:50.255 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:50.255 11:58:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:50.255 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:50.255 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:50.255 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:50.255 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:50.255 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:50.255 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:50.255 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:50.255 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:50.255 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:50.255 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:50.255 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:50.255 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:50.255 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:50.255 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:50.255 11:58:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:50.255 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:50.255 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:50.255 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:50.255 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:50.255 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:50.255 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:50.255 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:50.255 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:50.255 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:50.255 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:50.255 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:50.255 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:50.255 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:50.255 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:29:50.255 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- 
# [[ yes == yes ]] 00:29:50.255 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:50.255 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:50.255 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:50.255 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:50.255 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:50.255 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:50.255 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:50.255 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:50.255 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:50.255 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:50.255 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:50.255 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:50.255 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:50.255 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:50.255 11:58:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:50.255 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:50.255 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:50.255 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:50.255 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:50.255 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:50.255 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:50.255 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:50.255 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:50.255 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:50.255 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:50.255 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:50.255 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.346 ms 00:29:50.255 00:29:50.255 --- 10.0.0.2 ping statistics --- 00:29:50.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:50.255 rtt min/avg/max/mdev = 0.346/0.346/0.346/0.000 ms 00:29:50.255 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:50.255 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:50.255 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.159 ms 00:29:50.255 00:29:50.255 --- 10.0.0.1 ping statistics --- 00:29:50.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:50.255 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:29:50.255 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:50.255 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:29:50.255 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:50.256 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:50.256 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:50.256 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:50.256 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:50.256 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:50.256 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:50.256 11:58:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:50.256 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:50.256 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:50.256 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:50.256 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=3058487 00:29:50.256 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:50.256 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 3058487 00:29:50.256 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 3058487 ']' 00:29:50.256 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:50.256 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:50.256 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:50.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
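The nvmf_tgt launch line above carries four copies of `ip netns exec cvl_0_0_ns_spdk`. A likely explanation is `nvmf/common.sh@293` (visible earlier in this log), which prepends `NVMF_TARGET_NS_CMD` onto `NVMF_APP` each time the TCP init path runs; stacking the prefix is harmless because `ip netns exec` into the namespace you are already in changes nothing. A minimal bash sketch of that accumulation — the four-iteration count is taken from the command line above, not from reading common.sh:

```shell
#!/usr/bin/env bash
# Reproduces the array prepend from nvmf/common.sh@293:
#   NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
# One prepend per init pass; four passes give the quadruple prefix seen above.
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
NVMF_APP=(nvmf_tgt -i 0 -e 0xFFFF -m 0x1E)

for _ in 1 2 3 4; do
  NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
done

echo "${NVMF_APP[@]}"
```

Running this prints `ip netns exec cvl_0_0_ns_spdk` four times followed by the nvmf_tgt arguments, matching the process command line logged above.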
00:29:50.256 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:50.256 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:50.256 [2024-11-18 11:58:15.944127] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:29:50.256 [2024-11-18 11:58:15.944285] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:50.256 [2024-11-18 11:58:16.092156] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:50.516 [2024-11-18 11:58:16.223558] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:50.516 [2024-11-18 11:58:16.223641] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:50.516 [2024-11-18 11:58:16.223664] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:50.516 [2024-11-18 11:58:16.223685] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:50.516 [2024-11-18 11:58:16.223701] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
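The nvmf_tcp_init sequence earlier in this section (common.sh@267-291) flushes both cvl interfaces, moves the target-side port into a fresh network namespace, assigns 10.0.0.1/10.0.0.2, opens TCP port 4420 in the firewall, and ping-checks both directions. A dry-run sketch of those steps follows; the `$RUN` wrapper is an addition so the script prints the commands instead of requiring root and real cvl devices:

```shell
#!/usr/bin/env bash
# Dry-run of the nvmf_tcp_init steps from this log (common.sh@267-291).
# RUN=echo (the default) prints each command; RUN= executes them (needs root).
RUN=${RUN:-echo}

TGT_IF=cvl_0_0        # target-side interface (names taken from this log)
INI_IF=cvl_0_1        # initiator-side interface
NS=cvl_0_0_ns_spdk    # namespace the target runs in
INI_IP=10.0.0.1
TGT_IP=10.0.0.2

$RUN ip -4 addr flush "$TGT_IF"
$RUN ip -4 addr flush "$INI_IF"
$RUN ip netns add "$NS"
$RUN ip link set "$TGT_IF" netns "$NS"             # move target NIC into netns
$RUN ip addr add "$INI_IP/24" dev "$INI_IF"
$RUN ip netns exec "$NS" ip addr add "$TGT_IP/24" dev "$TGT_IF"
$RUN ip link set "$INI_IF" up
$RUN ip netns exec "$NS" ip link set "$TGT_IF" up
$RUN ip netns exec "$NS" ip link set lo up
$RUN iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
$RUN ping -c 1 "$TGT_IP"                           # host -> namespace check
$RUN ip netns exec "$NS" ping -c 1 "$INI_IP"       # namespace -> host check
```

The two ping checks correspond to the `64 bytes from 10.0.0.2` / `10.0.0.1` replies logged above; only after both succeed does common.sh@450 return 0.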
00:29:50.516 [2024-11-18 11:58:16.226345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:50.516 [2024-11-18 11:58:16.226409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:50.516 [2024-11-18 11:58:16.226454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:50.516 [2024-11-18 11:58:16.226475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:51.083 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:51.083 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:29:51.083 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:51.083 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:51.083 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:51.343 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:51.343 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:51.343 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.343 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:51.343 [2024-11-18 11:58:16.973569] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:51.343 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:51.343 11:58:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:51.343 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:51.343 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:51.343 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:51.343 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:51.343 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:51.343 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:51.343 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:51.343 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:51.343 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:51.343 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:51.343 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:51.343 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:51.343 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:51.343 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 
00:29:51.343 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:51.343 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:51.343 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:51.343 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:51.343 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:51.343 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:51.343 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:51.343 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:51.343 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:51.343 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:51.343 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:29:51.343 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.343 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:51.343 Malloc1 00:29:51.343 [2024-11-18 11:58:17.127436] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:51.343 Malloc2 00:29:51.602 Malloc3 00:29:51.602 Malloc4 00:29:51.861 Malloc5 00:29:51.861 Malloc6 00:29:51.861 Malloc7 00:29:52.121 Malloc8 00:29:52.121 Malloc9 
00:29:52.382 Malloc10 00:29:52.382 11:58:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.382 11:58:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:52.382 11:58:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:52.382 11:58:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:52.382 11:58:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=3058797 00:29:52.382 11:58:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:29:52.382 11:58:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:29:52.382 [2024-11-18 11:58:18.195284] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
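The Malloc1 through Malloc10 lines above come from the shutdown.sh@27-29 loop, which appends one batch of rpc.py commands per subsystem to rpcs.txt and then submits the whole file in a single `rpc_cmd` call at shutdown.sh@36. The sketch below reconstructs that pattern with standard rpc.py methods (`bdev_malloc_create`, `nvmf_create_subsystem`, `nvmf_subsystem_add_ns`, `nvmf_subsystem_add_listener`); the exact bdev sizes and serial numbers are assumptions, not copied from shutdown.sh:

```shell
#!/usr/bin/env bash
# Sketch of the shutdown.sh@27-29 batch-RPC pattern: accumulate commands
# in a file, then feed the file to rpc.py once. Sizes/NQNs are assumed.
num_subsystems=({1..10})
rpcs=$(mktemp)   # stand-in for .../test/nvmf/target/rpcs.txt

for i in "${num_subsystems[@]}"; do
  cat >>"$rpcs" <<EOF
bdev_malloc_create -b Malloc$i 64 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done

wc -l <"$rpcs"   # 10 subsystems x 4 commands = 40 batched RPC lines
# To apply for real: rpc.py < "$rpcs"
```

Batching avoids one rpc.py process launch per command; the spdk_nvme_perf run that follows (-q 128, -o 45056, randwrite, -t 20) then drives I/O at the listeners this batch created on 10.0.0.2:4420.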
00:29:57.663 11:58:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:57.663 11:58:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 3058487 00:29:57.663 11:58:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 3058487 ']' 00:29:57.663 11:58:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 3058487 00:29:57.663 11:58:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:29:57.663 11:58:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:57.663 11:58:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3058487 00:29:57.663 11:58:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:57.663 11:58:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:57.663 11:58:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3058487' 00:29:57.663 killing process with pid 3058487 00:29:57.663 11:58:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 3058487 00:29:57.663 11:58:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 3058487 00:29:57.663 Write completed with error (sct=0, sc=8) 00:29:57.663 Write completed with error (sct=0, sc=8) 00:29:57.663 Write completed with error (sct=0, sc=8) 00:29:57.663 Write completed with 
error (sct=0, sc=8) 00:29:57.663 starting I/O failed: -6 00:29:57.663 Write completed with error (sct=0, sc=8) 00:29:57.663 Write completed with error (sct=0, sc=8) 00:29:57.663 Write completed with error (sct=0, sc=8) 00:29:57.663 Write completed with error (sct=0, sc=8) 00:29:57.663 starting I/O failed: -6 00:29:57.663 Write completed with error (sct=0, sc=8) 00:29:57.663 Write completed with error (sct=0, sc=8) 00:29:57.663 Write completed with error (sct=0, sc=8) 00:29:57.663 Write completed with error (sct=0, sc=8) 00:29:57.663 starting I/O failed: -6 00:29:57.663 Write completed with error (sct=0, sc=8) 00:29:57.663 Write completed with error (sct=0, sc=8) 00:29:57.663 Write completed with error (sct=0, sc=8) 00:29:57.663 Write completed with error (sct=0, sc=8) 00:29:57.663 starting I/O failed: -6 00:29:57.663 Write completed with error (sct=0, sc=8) 00:29:57.663 Write completed with error (sct=0, sc=8) 00:29:57.663 Write completed with error (sct=0, sc=8) 00:29:57.663 Write completed with error (sct=0, sc=8) 00:29:57.663 starting I/O failed: -6 00:29:57.663 Write completed with error (sct=0, sc=8) 00:29:57.663 Write completed with error (sct=0, sc=8) 00:29:57.663 Write completed with error (sct=0, sc=8) 00:29:57.663 Write completed with error (sct=0, sc=8) 00:29:57.663 starting I/O failed: -6 00:29:57.663 Write completed with error (sct=0, sc=8) 00:29:57.663 Write completed with error (sct=0, sc=8) 00:29:57.663 Write completed with error (sct=0, sc=8) 00:29:57.663 Write completed with error (sct=0, sc=8) 00:29:57.663 starting I/O failed: -6 00:29:57.663 Write completed with error (sct=0, sc=8) 00:29:57.663 Write completed with error (sct=0, sc=8) 00:29:57.663 Write completed with error (sct=0, sc=8) 00:29:57.663 Write completed with error (sct=0, sc=8) 00:29:57.663 starting I/O failed: -6 00:29:57.663 Write completed with error (sct=0, sc=8) 00:29:57.663 Write completed with error (sct=0, sc=8) 00:29:57.663 Write completed with error (sct=0, sc=8) 
00:29:57.663 Write completed with error (sct=0, sc=8) 00:29:57.663 starting I/O failed: -6 00:29:57.663 Write completed with error (sct=0, sc=8) 00:29:57.663 Write completed with error (sct=0, sc=8) 00:29:57.663 Write completed with error (sct=0, sc=8) 00:29:57.663 Write completed with error (sct=0, sc=8) 00:29:57.663 starting I/O failed: -6 00:29:57.663 [2024-11-18 11:58:23.152723] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.663 starting I/O failed: -6 00:29:57.663 starting I/O failed: -6 00:29:57.663 starting I/O failed: -6 00:29:57.663 Write completed with error (sct=0, sc=8) 00:29:57.663 Write completed with error (sct=0, sc=8) 00:29:57.663 starting I/O failed: -6 00:29:57.663 Write completed with error (sct=0, sc=8) 00:29:57.663 Write completed with error (sct=0, sc=8) 00:29:57.663 starting I/O failed: -6 00:29:57.663 Write completed with error (sct=0, sc=8) 00:29:57.663 Write completed with error (sct=0, sc=8) 00:29:57.663 starting I/O failed: -6 00:29:57.663 Write completed with error (sct=0, sc=8) 00:29:57.663 Write completed with error (sct=0, sc=8) 00:29:57.663 starting I/O failed: -6 00:29:57.663 Write completed with error (sct=0, sc=8) 00:29:57.663 Write completed with error (sct=0, sc=8) 00:29:57.663 starting I/O failed: -6 00:29:57.663 Write completed with error (sct=0, sc=8) 00:29:57.663 Write completed with error (sct=0, sc=8) 00:29:57.663 starting I/O failed: -6 00:29:57.663 Write completed with error (sct=0, sc=8) 00:29:57.663 Write completed with error (sct=0, sc=8) 00:29:57.663 starting I/O failed: -6 00:29:57.663 Write completed with error (sct=0, sc=8) 00:29:57.663 Write completed with error (sct=0, sc=8) 00:29:57.663 starting I/O failed: -6 00:29:57.663 Write completed with error (sct=0, sc=8) 00:29:57.663 Write completed with error (sct=0, sc=8) 00:29:57.663 starting I/O failed: -6 00:29:57.663 Write completed with 
error (sct=0, sc=8) 00:29:57.663 Write completed with error (sct=0, sc=8) 00:29:57.663 starting I/O failed: -6 00:29:57.663 Write completed with error (sct=0, sc=8) 00:29:57.663 Write completed with error (sct=0, sc=8) 00:29:57.663 starting I/O failed: -6 00:29:57.663 Write completed with error (sct=0, sc=8) 00:29:57.663 Write completed with error (sct=0, sc=8) 00:29:57.663 starting I/O failed: -6 00:29:57.663 Write completed with error (sct=0, sc=8) 00:29:57.663 Write completed with error (sct=0, sc=8) 00:29:57.663 starting I/O failed: -6 00:29:57.663 Write completed with error (sct=0, sc=8) 00:29:57.663 Write completed with error (sct=0, sc=8) 00:29:57.663 starting I/O failed: -6 00:29:57.663 Write completed with error (sct=0, sc=8) 00:29:57.663 Write completed with error (sct=0, sc=8) 00:29:57.663 starting I/O failed: -6 00:29:57.663 Write completed with error (sct=0, sc=8) 00:29:57.663 Write completed with error (sct=0, sc=8) 00:29:57.663 starting I/O failed: -6 00:29:57.663 Write completed with error (sct=0, sc=8) 00:29:57.663 Write completed with error (sct=0, sc=8) 00:29:57.663 starting I/O failed: -6 00:29:57.663 Write completed with error (sct=0, sc=8) 00:29:57.663 [2024-11-18 11:58:23.155143] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:57.663 Write completed with error (sct=0, sc=8) 00:29:57.663 starting I/O failed: -6 00:29:57.663 Write completed with error (sct=0, sc=8) 00:29:57.663 starting I/O failed: -6 00:29:57.663 Write completed with error (sct=0, sc=8) 00:29:57.663 starting I/O failed: -6 00:29:57.663 Write completed with error (sct=0, sc=8) 00:29:57.663 Write completed with error (sct=0, sc=8) 00:29:57.663 starting I/O failed: -6 00:29:57.663 Write completed with error (sct=0, sc=8) 00:29:57.663 starting I/O failed: -6 00:29:57.663 Write completed with error (sct=0, sc=8) 00:29:57.663 starting I/O failed: -6 00:29:57.663 
Write completed with error (sct=0, sc=8) 00:29:57.663 Write completed with error (sct=0, sc=8) 00:29:57.663 starting I/O failed: -6 00:29:57.663 Write completed with error (sct=0, sc=8) 00:29:57.663 starting I/O failed: -6 00:29:57.663 Write completed with error (sct=0, sc=8) 00:29:57.663 starting I/O failed: -6 00:29:57.663 Write completed with error (sct=0, sc=8) 00:29:57.663 Write completed with error (sct=0, sc=8) 00:29:57.663 starting I/O failed: -6 00:29:57.663 Write completed with error (sct=0, sc=8) 00:29:57.663 starting I/O failed: -6 00:29:57.663 [2024-11-18 11:58:23.155986] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c880 is same with the state(6) to be set 00:29:57.663 [2024-11-18 11:58:23.156034] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c880 is same with the state(6) to be set 00:29:57.663 Write completed with error (sct=0, sc=8) 00:29:57.663 starting I/O failed: -6 00:29:57.663 [2024-11-18 11:58:23.156057] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c880 is same with the state(6) to be set 00:29:57.663 [2024-11-18 11:58:23.156075] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c880 is same with the state(6) to be set 00:29:57.663 [2024-11-18 11:58:23.156093] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c880 is same with the state(6) to be set 00:29:57.663 Write completed with error (sct=0, sc=8) 00:29:57.663 [2024-11-18 11:58:23.156110] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c880 is same with the state(6) to be set 00:29:57.663 Write completed with error (sct=0, sc=8) 00:29:57.663 [2024-11-18 11:58:23.156127] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c880 is same with the state(6) to be set 00:29:57.663 starting I/O failed: -6 00:29:57.663 [2024-11-18 
11:58:23.156145] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c880 is same with the state(6) to be set 00:29:57.663 [2024-11-18 11:58:23.156162] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c880 is same with the state(6) to be set 00:29:57.663 [2024-11-18 11:58:23.156179] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c880 is same with the state(6) to be set 00:29:57.663 Write completed with error (sct=0, sc=8) 00:29:57.664 starting I/O failed: -6 00:29:57.664 Write completed with error (sct=0, sc=8) 00:29:57.664 starting I/O failed: -6 00:29:57.664 Write completed with error (sct=0, sc=8) 00:29:57.664 Write completed with error (sct=0, sc=8) 00:29:57.664 starting I/O failed: -6 00:29:57.664 Write completed with error (sct=0, sc=8) 00:29:57.664 starting I/O failed: -6 00:29:57.664 Write completed with error (sct=0, sc=8) 00:29:57.664 starting I/O failed: -6 00:29:57.664 Write completed with error (sct=0, sc=8) 00:29:57.664 Write completed with error (sct=0, sc=8) 00:29:57.664 starting I/O failed: -6 00:29:57.664 Write completed with error (sct=0, sc=8) 00:29:57.664 starting I/O failed: -6 00:29:57.664 Write completed with error (sct=0, sc=8) 00:29:57.664 starting I/O failed: -6 00:29:57.664 Write completed with error (sct=0, sc=8) 00:29:57.664 Write completed with error (sct=0, sc=8) 00:29:57.664 starting I/O failed: -6 00:29:57.664 Write completed with error (sct=0, sc=8) 00:29:57.664 starting I/O failed: -6 00:29:57.664 Write completed with error (sct=0, sc=8) 00:29:57.664 starting I/O failed: -6 00:29:57.664 Write completed with error (sct=0, sc=8) 00:29:57.664 Write completed with error (sct=0, sc=8) 00:29:57.664 starting I/O failed: -6 00:29:57.664 Write completed with error (sct=0, sc=8) 00:29:57.664 starting I/O failed: -6 00:29:57.664 Write completed with error (sct=0, sc=8) 00:29:57.664 starting I/O failed: -6 00:29:57.664 Write 
completed with error (sct=0, sc=8) 00:29:57.664 Write completed with error (sct=0, sc=8) 00:29:57.664 starting I/O failed: -6 00:29:57.664 Write completed with error (sct=0, sc=8) 00:29:57.664 starting I/O failed: -6 00:29:57.664 Write completed with error (sct=0, sc=8) 00:29:57.664 starting I/O failed: -6 00:29:57.664 Write completed with error (sct=0, sc=8) 00:29:57.664 Write completed with error (sct=0, sc=8) 00:29:57.664 starting I/O failed: -6 00:29:57.664 Write completed with error (sct=0, sc=8) 00:29:57.664 starting I/O failed: -6 00:29:57.664 Write completed with error (sct=0, sc=8) 00:29:57.664 starting I/O failed: -6 00:29:57.664 Write completed with error (sct=0, sc=8) 00:29:57.664 Write completed with error (sct=0, sc=8) 00:29:57.664 starting I/O failed: -6 00:29:57.664 Write completed with error (sct=0, sc=8) 00:29:57.664 starting I/O failed: -6 00:29:57.664 Write completed with error (sct=0, sc=8) 00:29:57.664 starting I/O failed: -6 00:29:57.664 Write completed with error (sct=0, sc=8) 00:29:57.664 [2024-11-18 11:58:23.157756] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.664 starting I/O failed: -6 00:29:57.664 [2024-11-18 11:58:23.157882] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000cc80 is same with the state(6) to be set 00:29:57.664 [2024-11-18 11:58:23.157929] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000cc80 is same with the state(6) to be set 00:29:57.664 [2024-11-18 11:58:23.157953] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000cc80 is same with the state(6) to be set 00:29:57.664 [2024-11-18 11:58:23.157975] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000cc80 is same with the state(6) to be set 00:29:57.664 Write completed with error (sct=0, sc=8) 00:29:57.664 
00:29:57.664 [2024-11-18 11:58:23.157995] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000cc80 is same with the state(6) to be set
00:29:57.664 starting I/O failed: -6
00:29:57.664 [2024-11-18 11:58:23.158015] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000cc80 is same with the state(6) to be set
00:29:57.664 [2024-11-18 11:58:23.158034] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000cc80 is same with the state(6) to be set
00:29:57.664 Write completed with error (sct=0, sc=8)
00:29:57.664 [2024-11-18 11:58:23.158052] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000cc80 is same with the state(6) to be set
00:29:57.664 starting I/O failed: -6
00:29:57.664 Write completed with error (sct=0, sc=8)
00:29:57.664 starting I/O failed: -6
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines elided ...]
00:29:57.664 [2024-11-18 11:58:23.170917] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:57.664 NVMe io qpair process completion error
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines elided ...]
00:29:57.665 [2024-11-18 11:58:23.173040] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines elided ...]
00:29:57.665 [2024-11-18 11:58:23.175269] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines elided ...]
00:29:57.665 [2024-11-18 11:58:23.177916] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines elided ...]
00:29:57.666 [2024-11-18 11:58:23.187695] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:57.666 NVMe io qpair process completion error
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines elided ...]
00:29:57.666 [2024-11-18 11:58:23.189836] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines elided ...]
00:29:57.667 [2024-11-18 11:58:23.192086] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines elided ...]
00:29:57.667 [2024-11-18 11:58:23.194787] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines elided ...]
00:29:57.668 [2024-11-18 11:58:23.204842] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:57.668 NVMe io qpair process completion error
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines elided ...]
00:29:57.668 [2024-11-18 11:58:23.206993] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:57.668 Write completed with error (sct=0, sc=8)
00:29:57.668 starting I/O failed: -6
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines elided ...]
Write completed with error (sct=0, sc=8) 00:29:57.668 Write completed with error (sct=0, sc=8) 00:29:57.668 starting I/O failed: -6 00:29:57.668 Write completed with error (sct=0, sc=8) 00:29:57.668 Write completed with error (sct=0, sc=8) 00:29:57.668 starting I/O failed: -6 00:29:57.668 Write completed with error (sct=0, sc=8) 00:29:57.668 Write completed with error (sct=0, sc=8) 00:29:57.668 starting I/O failed: -6 00:29:57.668 Write completed with error (sct=0, sc=8) 00:29:57.668 Write completed with error (sct=0, sc=8) 00:29:57.668 starting I/O failed: -6 00:29:57.668 Write completed with error (sct=0, sc=8) 00:29:57.668 Write completed with error (sct=0, sc=8) 00:29:57.668 starting I/O failed: -6 00:29:57.668 Write completed with error (sct=0, sc=8) 00:29:57.668 Write completed with error (sct=0, sc=8) 00:29:57.668 starting I/O failed: -6 00:29:57.668 Write completed with error (sct=0, sc=8) 00:29:57.668 Write completed with error (sct=0, sc=8) 00:29:57.668 starting I/O failed: -6 00:29:57.668 Write completed with error (sct=0, sc=8) 00:29:57.668 Write completed with error (sct=0, sc=8) 00:29:57.668 starting I/O failed: -6 00:29:57.668 Write completed with error (sct=0, sc=8) 00:29:57.668 Write completed with error (sct=0, sc=8) 00:29:57.668 starting I/O failed: -6 00:29:57.668 Write completed with error (sct=0, sc=8) 00:29:57.668 Write completed with error (sct=0, sc=8) 00:29:57.668 starting I/O failed: -6 00:29:57.668 Write completed with error (sct=0, sc=8) 00:29:57.668 Write completed with error (sct=0, sc=8) 00:29:57.668 starting I/O failed: -6 00:29:57.668 Write completed with error (sct=0, sc=8) 00:29:57.668 Write completed with error (sct=0, sc=8) 00:29:57.668 starting I/O failed: -6 00:29:57.668 Write completed with error (sct=0, sc=8) 00:29:57.668 Write completed with error (sct=0, sc=8) 00:29:57.668 starting I/O failed: -6 00:29:57.668 Write completed with error (sct=0, sc=8) 00:29:57.668 Write completed with error (sct=0, sc=8) 00:29:57.668 
starting I/O failed: -6 00:29:57.668 Write completed with error (sct=0, sc=8) 00:29:57.668 Write completed with error (sct=0, sc=8) 00:29:57.668 starting I/O failed: -6 00:29:57.668 Write completed with error (sct=0, sc=8) 00:29:57.668 [2024-11-18 11:58:23.209216] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:57.668 Write completed with error (sct=0, sc=8) 00:29:57.668 starting I/O failed: -6 00:29:57.668 Write completed with error (sct=0, sc=8) 00:29:57.668 starting I/O failed: -6 00:29:57.668 Write completed with error (sct=0, sc=8) 00:29:57.668 starting I/O failed: -6 00:29:57.668 Write completed with error (sct=0, sc=8) 00:29:57.668 Write completed with error (sct=0, sc=8) 00:29:57.668 starting I/O failed: -6 00:29:57.668 Write completed with error (sct=0, sc=8) 00:29:57.668 starting I/O failed: -6 00:29:57.668 Write completed with error (sct=0, sc=8) 00:29:57.668 starting I/O failed: -6 00:29:57.668 Write completed with error (sct=0, sc=8) 00:29:57.668 Write completed with error (sct=0, sc=8) 00:29:57.668 starting I/O failed: -6 00:29:57.668 Write completed with error (sct=0, sc=8) 00:29:57.668 starting I/O failed: -6 00:29:57.668 Write completed with error (sct=0, sc=8) 00:29:57.668 starting I/O failed: -6 00:29:57.668 Write completed with error (sct=0, sc=8) 00:29:57.668 Write completed with error (sct=0, sc=8) 00:29:57.668 starting I/O failed: -6 00:29:57.668 Write completed with error (sct=0, sc=8) 00:29:57.668 starting I/O failed: -6 00:29:57.668 Write completed with error (sct=0, sc=8) 00:29:57.668 starting I/O failed: -6 00:29:57.668 Write completed with error (sct=0, sc=8) 00:29:57.668 Write completed with error (sct=0, sc=8) 00:29:57.668 starting I/O failed: -6 00:29:57.668 Write completed with error (sct=0, sc=8) 00:29:57.668 starting I/O failed: -6 00:29:57.668 Write completed with error (sct=0, sc=8) 00:29:57.668 starting I/O 
failed: -6 00:29:57.668 Write completed with error (sct=0, sc=8) 00:29:57.668 Write completed with error (sct=0, sc=8) 00:29:57.668 starting I/O failed: -6 00:29:57.668 Write completed with error (sct=0, sc=8) 00:29:57.668 starting I/O failed: -6 00:29:57.668 Write completed with error (sct=0, sc=8) 00:29:57.668 starting I/O failed: -6 00:29:57.668 Write completed with error (sct=0, sc=8) 00:29:57.668 Write completed with error (sct=0, sc=8) 00:29:57.668 starting I/O failed: -6 00:29:57.668 Write completed with error (sct=0, sc=8) 00:29:57.668 starting I/O failed: -6 00:29:57.668 Write completed with error (sct=0, sc=8) 00:29:57.668 starting I/O failed: -6 00:29:57.668 Write completed with error (sct=0, sc=8) 00:29:57.668 Write completed with error (sct=0, sc=8) 00:29:57.668 starting I/O failed: -6 00:29:57.668 Write completed with error (sct=0, sc=8) 00:29:57.668 starting I/O failed: -6 00:29:57.668 Write completed with error (sct=0, sc=8) 00:29:57.668 starting I/O failed: -6 00:29:57.668 Write completed with error (sct=0, sc=8) 00:29:57.668 Write completed with error (sct=0, sc=8) 00:29:57.668 starting I/O failed: -6 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 starting I/O failed: -6 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 starting I/O failed: -6 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 starting I/O failed: -6 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 starting I/O failed: -6 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 starting I/O failed: -6 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 starting I/O failed: -6 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 starting I/O failed: -6 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 starting I/O failed: -6 00:29:57.669 Write 
completed with error (sct=0, sc=8) 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 starting I/O failed: -6 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 starting I/O failed: -6 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 starting I/O failed: -6 00:29:57.669 [2024-11-18 11:58:23.211862] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 starting I/O failed: -6 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 starting I/O failed: -6 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 starting I/O failed: -6 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 starting I/O failed: -6 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 starting I/O failed: -6 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 starting I/O failed: -6 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 starting I/O failed: -6 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 starting I/O failed: -6 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 starting I/O failed: -6 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 starting I/O failed: -6 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 starting I/O failed: -6 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 starting I/O failed: -6 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 starting I/O failed: -6 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 starting I/O failed: -6 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 starting I/O failed: -6 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 starting I/O failed: -6 00:29:57.669 Write completed with 
error (sct=0, sc=8) 00:29:57.669 starting I/O failed: -6 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 starting I/O failed: -6 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 starting I/O failed: -6 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 starting I/O failed: -6 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 starting I/O failed: -6 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 starting I/O failed: -6 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 starting I/O failed: -6 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 starting I/O failed: -6 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 starting I/O failed: -6 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 starting I/O failed: -6 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 starting I/O failed: -6 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 starting I/O failed: -6 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 starting I/O failed: -6 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 starting I/O failed: -6 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 starting I/O failed: -6 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 starting I/O failed: -6 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 starting I/O failed: -6 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 starting I/O failed: -6 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 starting I/O failed: -6 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 starting I/O failed: -6 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 starting I/O failed: -6 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 starting I/O failed: -6 00:29:57.669 Write completed 
with error (sct=0, sc=8) 00:29:57.669 starting I/O failed: -6 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 starting I/O failed: -6 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 starting I/O failed: -6 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 starting I/O failed: -6 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 starting I/O failed: -6 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 starting I/O failed: -6 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 starting I/O failed: -6 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 starting I/O failed: -6 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 starting I/O failed: -6 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 starting I/O failed: -6 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 starting I/O failed: -6 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 starting I/O failed: -6 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 starting I/O failed: -6 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 starting I/O failed: -6 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 starting I/O failed: -6 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 starting I/O failed: -6 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 starting I/O failed: -6 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 starting I/O failed: -6 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 starting I/O failed: -6 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 starting I/O failed: -6 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 starting I/O failed: -6 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 starting I/O failed: -6 00:29:57.669 [2024-11-18 
11:58:23.224322] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.669 NVMe io qpair process completion error 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 starting I/O failed: -6 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 starting I/O failed: -6 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 starting I/O failed: -6 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 starting I/O failed: -6 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 starting I/O failed: -6 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 starting I/O failed: -6 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 starting I/O failed: -6 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 Write completed with error (sct=0, sc=8) 
00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 starting I/O failed: -6 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 starting I/O failed: -6 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 starting I/O failed: -6 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 [2024-11-18 11:58:23.226357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.669 starting I/O failed: -6 00:29:57.669 starting I/O failed: -6 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 starting I/O failed: -6 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 starting I/O failed: -6 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 starting I/O failed: -6 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 starting I/O failed: -6 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.669 starting I/O failed: -6 00:29:57.669 Write completed with error (sct=0, sc=8) 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 starting I/O failed: -6 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 starting I/O failed: -6 00:29:57.670 Write completed with error (sct=0, sc=8) 
00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 starting I/O failed: -6 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 starting I/O failed: -6 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 starting I/O failed: -6 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 starting I/O failed: -6 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 starting I/O failed: -6 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 starting I/O failed: -6 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 starting I/O failed: -6 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 starting I/O failed: -6 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 starting I/O failed: -6 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 starting I/O failed: -6 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 starting I/O failed: -6 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 [2024-11-18 11:58:23.228452] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 starting I/O failed: -6 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 starting I/O failed: -6 00:29:57.670 Write 
completed with error (sct=0, sc=8) 00:29:57.670 starting I/O failed: -6 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 starting I/O failed: -6 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 starting I/O failed: -6 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 starting I/O failed: -6 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 starting I/O failed: -6 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 starting I/O failed: -6 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 starting I/O failed: -6 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 starting I/O failed: -6 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 starting I/O failed: -6 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 starting I/O failed: -6 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 starting I/O failed: -6 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 starting I/O failed: -6 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 starting I/O failed: -6 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 starting I/O failed: -6 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 starting I/O failed: -6 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 starting I/O failed: -6 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 starting I/O failed: -6 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 starting I/O failed: -6 00:29:57.670 Write completed with error (sct=0, sc=8) 
00:29:57.670 starting I/O failed: -6 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 starting I/O failed: -6 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 starting I/O failed: -6 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 starting I/O failed: -6 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 starting I/O failed: -6 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 starting I/O failed: -6 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 starting I/O failed: -6 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 starting I/O failed: -6 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 starting I/O failed: -6 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 starting I/O failed: -6 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 starting I/O failed: -6 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 starting I/O failed: -6 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 starting I/O failed: -6 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 starting I/O failed: -6 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 starting I/O failed: -6 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 starting I/O failed: -6 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 starting I/O failed: -6 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 starting I/O failed: -6 00:29:57.670 [2024-11-18 11:58:23.231178] nvme_qpair.c: 
812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 starting I/O failed: -6 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 starting I/O failed: -6 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 starting I/O failed: -6 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 starting I/O failed: -6 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 starting I/O failed: -6 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 starting I/O failed: -6 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 starting I/O failed: -6 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 starting I/O failed: -6 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 starting I/O failed: -6 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 starting I/O failed: -6 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 starting I/O failed: -6 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 starting I/O failed: -6 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 starting I/O failed: -6 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 starting I/O failed: -6 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 starting I/O failed: -6 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 starting I/O failed: -6 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 starting I/O failed: -6 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 starting I/O failed: -6 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 starting I/O failed: -6 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 starting I/O failed: -6 00:29:57.670 Write completed with 
error (sct=0, sc=8) 00:29:57.670 starting I/O failed: -6 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 starting I/O failed: -6 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 starting I/O failed: -6 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 starting I/O failed: -6 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 starting I/O failed: -6 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 starting I/O failed: -6 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 starting I/O failed: -6 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 starting I/O failed: -6 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 starting I/O failed: -6 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 starting I/O failed: -6 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 starting I/O failed: -6 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 starting I/O failed: -6 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 starting I/O failed: -6 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 starting I/O failed: -6 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 starting I/O failed: -6 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 starting I/O failed: -6 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 starting I/O failed: -6 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 starting I/O failed: -6 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 starting I/O failed: -6 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 starting I/O failed: -6 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 starting I/O failed: -6 00:29:57.670 Write completed with error (sct=0, sc=8) 00:29:57.670 starting I/O failed: -6 00:29:57.670 Write completed 
with error (sct=0, sc=8) 00:29:57.670 starting I/O failed: -6
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines between 00:29:57.670 and 00:29:57.674 elided; the distinct events from this interval follow ...]
00:29:57.671 [2024-11-18 11:58:23.243569] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:57.671 NVMe io qpair process completion error
00:29:57.671 [2024-11-18 11:58:23.245789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:57.671 [2024-11-18 11:58:23.247889] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:57.672 [2024-11-18 11:58:23.250573] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:57.672 [2024-11-18 11:58:23.264552] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:57.672 NVMe io qpair process completion error
00:29:57.672 [2024-11-18 11:58:23.266891] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:57.673 [2024-11-18 11:58:23.269102] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:57.673 [2024-11-18 11:58:23.271773] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:57.674 [2024-11-18 11:58:23.284361] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:57.674 NVMe io qpair process completion error
00:29:57.674 [2024-11-18 11:58:23.286246] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:57.674 [2024-11-18 11:58:23.288395] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:57.674 Write completed with error
(sct=0, sc=8) 00:29:57.674 Write completed with error (sct=0, sc=8) 00:29:57.674 starting I/O failed: -6 00:29:57.674 Write completed with error (sct=0, sc=8) 00:29:57.674 starting I/O failed: -6 00:29:57.674 Write completed with error (sct=0, sc=8) 00:29:57.674 starting I/O failed: -6 00:29:57.674 Write completed with error (sct=0, sc=8) 00:29:57.674 Write completed with error (sct=0, sc=8) 00:29:57.674 starting I/O failed: -6 00:29:57.674 Write completed with error (sct=0, sc=8) 00:29:57.674 starting I/O failed: -6 00:29:57.674 Write completed with error (sct=0, sc=8) 00:29:57.674 starting I/O failed: -6 00:29:57.674 Write completed with error (sct=0, sc=8) 00:29:57.674 Write completed with error (sct=0, sc=8) 00:29:57.674 starting I/O failed: -6 00:29:57.674 Write completed with error (sct=0, sc=8) 00:29:57.674 starting I/O failed: -6 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 starting I/O failed: -6 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 starting I/O failed: -6 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 starting I/O failed: -6 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 starting I/O failed: -6 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 starting I/O failed: -6 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 starting I/O failed: -6 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 starting I/O failed: -6 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 starting I/O failed: -6 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 starting I/O failed: -6 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 starting I/O failed: -6 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 Write 
completed with error (sct=0, sc=8) 00:29:57.675 starting I/O failed: -6 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 starting I/O failed: -6 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 starting I/O failed: -6 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 starting I/O failed: -6 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 starting I/O failed: -6 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 starting I/O failed: -6 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 starting I/O failed: -6 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 starting I/O failed: -6 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 starting I/O failed: -6 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 starting I/O failed: -6 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 starting I/O failed: -6 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 starting I/O failed: -6 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 starting I/O failed: -6 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 starting I/O failed: -6 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 starting I/O failed: -6 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 starting I/O failed: -6 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 starting I/O failed: -6 00:29:57.675 [2024-11-18 11:58:23.291205] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4 
00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 starting I/O failed: -6 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 starting I/O failed: -6 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 starting I/O failed: -6 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 starting I/O failed: -6 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 starting I/O failed: -6 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 starting I/O failed: -6 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 starting I/O failed: -6 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 starting I/O failed: -6 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 starting I/O failed: -6 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 starting I/O failed: -6 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 starting I/O failed: -6 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 starting I/O failed: -6 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 starting I/O failed: -6 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 starting I/O failed: -6 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 starting I/O failed: -6 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 starting I/O failed: -6 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 starting I/O failed: -6 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 starting I/O failed: -6 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 starting I/O failed: -6 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 starting I/O failed: -6 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 starting I/O failed: -6 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 starting I/O failed: 
-6 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 starting I/O failed: -6 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 starting I/O failed: -6 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 starting I/O failed: -6 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 starting I/O failed: -6 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 starting I/O failed: -6 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 starting I/O failed: -6 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 starting I/O failed: -6 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 starting I/O failed: -6 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 starting I/O failed: -6 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 starting I/O failed: -6 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 starting I/O failed: -6 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 starting I/O failed: -6 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 starting I/O failed: -6 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 starting I/O failed: -6 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 starting I/O failed: -6 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 starting I/O failed: -6 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 starting I/O failed: -6 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 starting I/O failed: -6 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 starting I/O failed: -6 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 starting I/O failed: -6 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 starting I/O failed: -6 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 starting I/O 
failed: -6 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 starting I/O failed: -6 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 starting I/O failed: -6 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 starting I/O failed: -6 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 starting I/O failed: -6 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 starting I/O failed: -6 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 starting I/O failed: -6 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 starting I/O failed: -6 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 starting I/O failed: -6 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 starting I/O failed: -6 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 starting I/O failed: -6 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 starting I/O failed: -6 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 starting I/O failed: -6 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 starting I/O failed: -6 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 starting I/O failed: -6 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 starting I/O failed: -6 00:29:57.675 [2024-11-18 11:58:23.300603] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.675 NVMe io qpair process completion error 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 starting I/O failed: -6 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 Write completed with error (sct=0, sc=8) 
00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 starting I/O failed: -6 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 starting I/O failed: -6 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 starting I/O failed: -6 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 starting I/O failed: -6 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 starting I/O failed: -6 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.675 Write completed with error (sct=0, sc=8) 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 starting I/O failed: -6 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 starting I/O failed: -6 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 starting I/O failed: -6 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 Write completed 
with error (sct=0, sc=8) 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 starting I/O failed: -6 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 [2024-11-18 11:58:23.302440] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 starting I/O failed: -6 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 starting I/O failed: -6 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 starting I/O failed: -6 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 starting I/O failed: -6 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 starting I/O failed: -6 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 starting I/O failed: -6 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 starting I/O failed: -6 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 starting I/O failed: -6 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 starting I/O failed: -6 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 starting I/O failed: -6 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 starting I/O failed: -6 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 Write completed 
with error (sct=0, sc=8) 00:29:57.676 starting I/O failed: -6 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 starting I/O failed: -6 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 starting I/O failed: -6 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 starting I/O failed: -6 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 starting I/O failed: -6 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 starting I/O failed: -6 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 starting I/O failed: -6 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 starting I/O failed: -6 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 [2024-11-18 11:58:23.304367] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 starting I/O failed: -6 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 starting I/O failed: -6 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 starting I/O failed: -6 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 starting I/O failed: -6 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 starting I/O failed: -6 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 starting I/O failed: -6 00:29:57.676 
Write completed with error (sct=0, sc=8) 00:29:57.676 starting I/O failed: -6 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 starting I/O failed: -6 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 starting I/O failed: -6 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 starting I/O failed: -6 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 starting I/O failed: -6 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 starting I/O failed: -6 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 starting I/O failed: -6 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 starting I/O failed: -6 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 starting I/O failed: -6 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 starting I/O failed: -6 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 starting I/O failed: -6 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 starting I/O failed: -6 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 starting I/O failed: -6 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 starting I/O failed: -6 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 starting I/O failed: -6 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 starting I/O failed: -6 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 starting I/O failed: -6 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 starting I/O failed: -6 00:29:57.676 Write completed with error (sct=0, 
sc=8) 00:29:57.676 starting I/O failed: -6 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 starting I/O failed: -6 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 starting I/O failed: -6 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 starting I/O failed: -6 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 starting I/O failed: -6 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 starting I/O failed: -6 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 starting I/O failed: -6 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 starting I/O failed: -6 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 starting I/O failed: -6 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 starting I/O failed: -6 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 starting I/O failed: -6 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 starting I/O failed: -6 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 starting I/O failed: -6 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 [2024-11-18 11:58:23.307122] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 starting I/O failed: -6 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 starting I/O failed: -6 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 starting I/O failed: -6 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 starting I/O failed: -6 00:29:57.676 Write completed 
with error (sct=0, sc=8) 00:29:57.676 starting I/O failed: -6 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 starting I/O failed: -6 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 starting I/O failed: -6 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 starting I/O failed: -6 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 starting I/O failed: -6 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 starting I/O failed: -6 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 starting I/O failed: -6 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 starting I/O failed: -6 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 starting I/O failed: -6 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 starting I/O failed: -6 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 starting I/O failed: -6 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 starting I/O failed: -6 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 starting I/O failed: -6 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 starting I/O failed: -6 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 starting I/O failed: -6 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 starting I/O failed: -6 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 starting I/O failed: -6 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 starting I/O failed: -6 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 starting I/O failed: -6 00:29:57.676 Write completed with error (sct=0, sc=8) 00:29:57.676 starting I/O failed: -6 00:29:57.677 Write completed with error (sct=0, sc=8) 00:29:57.677 starting I/O failed: -6 00:29:57.677 Write completed with error (sct=0, sc=8) 00:29:57.677 starting I/O failed: -6 00:29:57.677 Write 
completed with error (sct=0, sc=8) 00:29:57.677 starting I/O failed: -6 00:29:57.677 Write completed with error (sct=0, sc=8) 00:29:57.677 starting I/O failed: -6 00:29:57.677 Write completed with error (sct=0, sc=8) 00:29:57.677 starting I/O failed: -6 00:29:57.677 Write completed with error (sct=0, sc=8) 00:29:57.677 starting I/O failed: -6 00:29:57.677 Write completed with error (sct=0, sc=8) 00:29:57.677 starting I/O failed: -6 00:29:57.677 Write completed with error (sct=0, sc=8) 00:29:57.677 starting I/O failed: -6 00:29:57.677 Write completed with error (sct=0, sc=8) 00:29:57.677 starting I/O failed: -6 00:29:57.677 Write completed with error (sct=0, sc=8) 00:29:57.677 starting I/O failed: -6 00:29:57.677 Write completed with error (sct=0, sc=8) 00:29:57.677 starting I/O failed: -6 00:29:57.677 Write completed with error (sct=0, sc=8) 00:29:57.677 starting I/O failed: -6 00:29:57.677 Write completed with error (sct=0, sc=8) 00:29:57.677 starting I/O failed: -6 00:29:57.677 Write completed with error (sct=0, sc=8) 00:29:57.677 starting I/O failed: -6 00:29:57.677 Write completed with error (sct=0, sc=8) 00:29:57.677 starting I/O failed: -6 00:29:57.677 Write completed with error (sct=0, sc=8) 00:29:57.677 starting I/O failed: -6 00:29:57.677 Write completed with error (sct=0, sc=8) 00:29:57.677 starting I/O failed: -6 00:29:57.677 Write completed with error (sct=0, sc=8) 00:29:57.677 starting I/O failed: -6 00:29:57.677 Write completed with error (sct=0, sc=8) 00:29:57.677 starting I/O failed: -6 00:29:57.677 Write completed with error (sct=0, sc=8) 00:29:57.677 starting I/O failed: -6 00:29:57.677 Write completed with error (sct=0, sc=8) 00:29:57.677 starting I/O failed: -6 00:29:57.677 Write completed with error (sct=0, sc=8) 00:29:57.677 starting I/O failed: -6 00:29:57.677 Write completed with error (sct=0, sc=8) 00:29:57.677 starting I/O failed: -6 00:29:57.677 Write completed with error (sct=0, sc=8) 00:29:57.677 starting I/O failed: -6 00:29:57.677 
Write completed with error (sct=0, sc=8) 00:29:57.677 starting I/O failed: -6 00:29:57.677 Write completed with error (sct=0, sc=8) 00:29:57.677 starting I/O failed: -6 00:29:57.677 Write completed with error (sct=0, sc=8) 00:29:57.677 starting I/O failed: -6 00:29:57.677 Write completed with error (sct=0, sc=8) 00:29:57.677 starting I/O failed: -6 00:29:57.677 Write completed with error (sct=0, sc=8) 00:29:57.677 starting I/O failed: -6 00:29:57.677 Write completed with error (sct=0, sc=8) 00:29:57.677 starting I/O failed: -6 00:29:57.677 Write completed with error (sct=0, sc=8) 00:29:57.677 starting I/O failed: -6 00:29:57.677 Write completed with error (sct=0, sc=8) 00:29:57.677 starting I/O failed: -6 00:29:57.677 Write completed with error (sct=0, sc=8) 00:29:57.677 starting I/O failed: -6 00:29:57.677 Write completed with error (sct=0, sc=8) 00:29:57.677 starting I/O failed: -6 00:29:57.677 Write completed with error (sct=0, sc=8) 00:29:57.677 starting I/O failed: -6 00:29:57.677 Write completed with error (sct=0, sc=8) 00:29:57.677 starting I/O failed: -6 00:29:57.677 Write completed with error (sct=0, sc=8) 00:29:57.677 starting I/O failed: -6 00:29:57.677 Write completed with error (sct=0, sc=8) 00:29:57.677 starting I/O failed: -6 00:29:57.677 [2024-11-18 11:58:23.319649] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.677 NVMe io qpair process completion error 00:29:57.677 Write completed with error (sct=0, sc=8) 00:29:57.677 Write completed with error (sct=0, sc=8) 00:29:57.677 Write completed with error (sct=0, sc=8) 00:29:57.677 starting I/O failed: -6 00:29:57.677 Write completed with error (sct=0, sc=8) 00:29:57.677 Write completed with error (sct=0, sc=8) 00:29:57.677 Write completed with error (sct=0, sc=8) 00:29:57.677 Write completed with error (sct=0, sc=8) 00:29:57.677 starting I/O failed: -6 00:29:57.677 Write completed 
with error (sct=0, sc=8) 00:29:57.677 Write completed with error (sct=0, sc=8) 00:29:57.677 Write completed with error (sct=0, sc=8) 00:29:57.677 Write completed with error (sct=0, sc=8) 00:29:57.677 starting I/O failed: -6 00:29:57.677 Write completed with error (sct=0, sc=8) 00:29:57.677 Write completed with error (sct=0, sc=8) 00:29:57.677 Write completed with error (sct=0, sc=8) 00:29:57.677 Write completed with error (sct=0, sc=8) 00:29:57.677 starting I/O failed: -6 00:29:57.677 Write completed with error (sct=0, sc=8) 00:29:57.677 Write completed with error (sct=0, sc=8) 00:29:57.677 Write completed with error (sct=0, sc=8) 00:29:57.677 Write completed with error (sct=0, sc=8) 00:29:57.677 starting I/O failed: -6 00:29:57.677 Write completed with error (sct=0, sc=8) 00:29:57.677 Write completed with error (sct=0, sc=8) 00:29:57.677 Write completed with error (sct=0, sc=8) 00:29:57.677 Write completed with error (sct=0, sc=8) 00:29:57.677 starting I/O failed: -6 00:29:57.677 Write completed with error (sct=0, sc=8) 00:29:57.677 Write completed with error (sct=0, sc=8) 00:29:57.677 Write completed with error (sct=0, sc=8) 00:29:57.677 Write completed with error (sct=0, sc=8) 00:29:57.677 starting I/O failed: -6 00:29:57.677 Write completed with error (sct=0, sc=8) 00:29:57.677 Write completed with error (sct=0, sc=8) 00:29:57.677 Write completed with error (sct=0, sc=8) 00:29:57.677 Write completed with error (sct=0, sc=8) 00:29:57.677 starting I/O failed: -6 00:29:57.677 Write completed with error (sct=0, sc=8) 00:29:57.677 Write completed with error (sct=0, sc=8) 00:29:57.677 Write completed with error (sct=0, sc=8) 00:29:57.677 Write completed with error (sct=0, sc=8) 00:29:57.677 starting I/O failed: -6 00:29:57.677 Write completed with error (sct=0, sc=8) 00:29:57.677 Write completed with error (sct=0, sc=8) 00:29:57.677 Write completed with error (sct=0, sc=8) 00:29:57.677 Write completed with error (sct=0, sc=8) 00:29:57.677 starting I/O failed: -6 
00:29:57.677 Write completed with error (sct=0, sc=8) 00:29:57.677 starting I/O failed: -6 00:29:57.677 [2024-11-18 11:58:23.321742] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.677 [2024-11-18 11:58:23.323968] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:57.678 [2024-11-18 11:58:23.326785] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.678 [2024-11-18 11:58:23.342276] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.678 NVMe io qpair process completion error 00:29:57.678 Initializing NVMe Controllers 00:29:57.678 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:57.678 Controller IO queue size 128, less than required. 00:29:57.679 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:57.679 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10 00:29:57.679 Controller IO queue size 128, less than required. 
00:29:57.679 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:57.679 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6 00:29:57.679 Controller IO queue size 128, less than required. 00:29:57.679 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:57.679 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3 00:29:57.679 Controller IO queue size 128, less than required. 00:29:57.679 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:57.679 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7 00:29:57.679 Controller IO queue size 128, less than required. 00:29:57.679 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:57.679 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9 00:29:57.679 Controller IO queue size 128, less than required. 00:29:57.679 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:57.679 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5 00:29:57.679 Controller IO queue size 128, less than required. 00:29:57.679 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:57.679 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4 00:29:57.679 Controller IO queue size 128, less than required. 00:29:57.679 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:57.679 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8 00:29:57.679 Controller IO queue size 128, less than required. 
00:29:57.679 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:57.679 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2 00:29:57.679 Controller IO queue size 128, less than required. 00:29:57.679 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:57.679 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:57.679 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0 00:29:57.679 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0 00:29:57.679 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0 00:29:57.679 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0 00:29:57.679 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0 00:29:57.679 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0 00:29:57.679 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0 00:29:57.679 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0 00:29:57.679 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0 00:29:57.679 Initialization complete. Launching workers. 
00:29:57.679 ======================================================== 00:29:57.679 Latency(us) 00:29:57.679 Device Information : IOPS MiB/s Average min max 00:29:57.679 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1397.40 60.04 91631.51 2344.48 174491.39 00:29:57.679 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1374.48 59.06 93313.10 2223.90 232735.91 00:29:57.679 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 1360.08 58.44 94468.35 2217.18 194918.78 00:29:57.679 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1375.58 59.11 93594.71 1635.75 210125.03 00:29:57.679 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 1384.96 59.51 93160.66 1635.87 225760.42 00:29:57.679 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1354.18 58.19 95520.48 2206.15 243131.97 00:29:57.679 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 1325.81 56.97 97777.20 2227.90 224728.12 00:29:57.679 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 1348.51 57.94 96275.95 1728.54 235279.53 00:29:57.679 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1350.69 58.04 96333.03 1720.72 286763.98 00:29:57.679 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1394.78 59.93 89726.91 1769.84 158588.31 00:29:57.679 ======================================================== 00:29:57.679 Total : 13666.48 587.23 94145.63 1635.75 286763.98 00:29:57.679 00:29:57.679 [2024-11-18 11:58:23.371633] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000015980 is same with the state(6) to be set 00:29:57.679 [2024-11-18 11:58:23.371773] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000015e80 is same with the state(6) to be set 00:29:57.679 [2024-11-18 11:58:23.371867] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000017780 is same with the state(6) to be set 00:29:57.679 [2024-11-18 11:58:23.371950] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016880 is same with the state(6) to be set 00:29:57.679 [2024-11-18 11:58:23.372034] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000017c80 is same with the state(6) to be set 00:29:57.679 [2024-11-18 11:58:23.372118] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000018680 is same with the state(6) to be set 00:29:57.679 [2024-11-18 11:58:23.372201] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000017280 is same with the state(6) to be set 00:29:57.679 [2024-11-18 11:58:23.372283] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016d80 is same with the state(6) to be set 00:29:57.679 [2024-11-18 11:58:23.372372] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000018180 is same with the state(6) to be set 00:29:57.679 [2024-11-18 11:58:23.372471] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016380 is same with the state(6) to be set 00:29:57.679 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:30:00.216 11:58:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:30:01.150 11:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 3058797 00:30:01.150 11:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0 00:30:01.150 11:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3058797 00:30:01.150 11:58:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait 00:30:01.150 11:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:01.150 11:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:30:01.151 11:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:01.151 11:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 3058797 00:30:01.151 11:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:30:01.151 11:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:01.151 11:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:01.151 11:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:01.151 11:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:30:01.151 11:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:30:01.151 11:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:30:01.151 11:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:01.151 11:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:30:01.151 11:58:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:01.151 11:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:30:01.151 11:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:01.151 11:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:30:01.151 11:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:01.151 11:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:01.151 rmmod nvme_tcp 00:30:01.151 rmmod nvme_fabrics 00:30:01.151 rmmod nvme_keyring 00:30:01.151 11:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:01.151 11:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:30:01.151 11:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:30:01.151 11:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 3058487 ']' 00:30:01.151 11:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 3058487 00:30:01.151 11:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 3058487 ']' 00:30:01.151 11:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 3058487 00:30:01.151 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3058487) - No such process 00:30:01.151 11:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 3058487 is not 
found' 00:30:01.151 Process with pid 3058487 is not found 00:30:01.151 11:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:01.151 11:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:01.151 11:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:01.151 11:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:30:01.151 11:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:30:01.151 11:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:01.151 11:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:30:01.151 11:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:01.151 11:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:01.151 11:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:01.151 11:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:01.151 11:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:03.689 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:03.689 00:30:03.689 real 0m13.353s 00:30:03.689 user 0m36.910s 00:30:03.689 sys 0m5.501s 00:30:03.689 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:30:03.689 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:30:03.689 ************************************ 00:30:03.689 END TEST nvmf_shutdown_tc4 00:30:03.689 ************************************ 00:30:03.689 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:30:03.689 00:30:03.689 real 0m55.480s 00:30:03.689 user 2m50.566s 00:30:03.689 sys 0m13.745s 00:30:03.689 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:03.689 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:03.689 ************************************ 00:30:03.689 END TEST nvmf_shutdown 00:30:03.689 ************************************ 00:30:03.689 11:58:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:30:03.689 11:58:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:03.689 11:58:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:03.689 11:58:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:30:03.689 ************************************ 00:30:03.689 START TEST nvmf_nsid 00:30:03.689 ************************************ 00:30:03.689 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:30:03.689 * Looking for test storage... 
00:30:03.689 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:03.689 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:03.689 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:30:03.689 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:03.689 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:03.689 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:03.689 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:03.689 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:03.689 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:30:03.689 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:30:03.689 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:30:03.689 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:30:03.689 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:30:03.689 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:30:03.689 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:30:03.689 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:03.689 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:30:03.689 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:30:03.689 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:03.689 
11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:03.689 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:30:03.689 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:30:03.689 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:03.689 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:30:03.689 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:30:03.689 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:30:03.689 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:30:03.689 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:03.689 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:30:03.689 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:30:03.689 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:03.689 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:03.689 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:30:03.689 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:03.689 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:03.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:03.689 --rc genhtml_branch_coverage=1 00:30:03.689 --rc genhtml_function_coverage=1 00:30:03.689 --rc genhtml_legend=1 00:30:03.689 --rc geninfo_all_blocks=1 00:30:03.689 --rc 
geninfo_unexecuted_blocks=1 00:30:03.689 00:30:03.689 ' 00:30:03.689 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:03.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:03.689 --rc genhtml_branch_coverage=1 00:30:03.689 --rc genhtml_function_coverage=1 00:30:03.689 --rc genhtml_legend=1 00:30:03.689 --rc geninfo_all_blocks=1 00:30:03.689 --rc geninfo_unexecuted_blocks=1 00:30:03.689 00:30:03.689 ' 00:30:03.689 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:03.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:03.689 --rc genhtml_branch_coverage=1 00:30:03.689 --rc genhtml_function_coverage=1 00:30:03.689 --rc genhtml_legend=1 00:30:03.690 --rc geninfo_all_blocks=1 00:30:03.690 --rc geninfo_unexecuted_blocks=1 00:30:03.690 00:30:03.690 ' 00:30:03.690 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:03.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:03.690 --rc genhtml_branch_coverage=1 00:30:03.690 --rc genhtml_function_coverage=1 00:30:03.690 --rc genhtml_legend=1 00:30:03.690 --rc geninfo_all_blocks=1 00:30:03.690 --rc geninfo_unexecuted_blocks=1 00:30:03.690 00:30:03.690 ' 00:30:03.690 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:03.690 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:30:03.690 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:03.690 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:03.690 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:03.690 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:30:03.690 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:03.690 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:03.690 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:03.690 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:03.690 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:03.690 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:03.690 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:03.690 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:03.690 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:03.690 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:03.690 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:03.690 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:03.690 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:03.690 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:30:03.690 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:03.690 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:03.690 11:58:29 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:03.690 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:03.690 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:03.690 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:03.690 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:30:03.690 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:03.690 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:30:03.690 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:03.690 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:03.690 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:03.690 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:03.690 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:03.690 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:03.690 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:03.690 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:03.690 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:03.690 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:03.690 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:30:03.690 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:30:03.690 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:30:03.690 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:30:03.690 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:30:03.690 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:30:03.690 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:03.690 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:03.690 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:03.690 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:03.690 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:03.690 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:03.690 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 15> /dev/null' 00:30:03.690 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:03.690 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:03.690 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:03.690 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:30:03.690 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:05.594 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:05.594 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:30:05.594 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:05.595 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:05.595 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:05.595 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:05.595 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:05.595 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:30:05.595 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:05.595 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:30:05.595 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:30:05.595 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:30:05.595 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:30:05.595 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@322 -- # mlx=() 00:30:05.595 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:30:05.595 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:05.595 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:05.595 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:05.595 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:05.595 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:05.595 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:05.595 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:05.595 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:05.595 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:05.595 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:05.595 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:05.595 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:05.595 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:05.595 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:05.595 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == 
mlx5 ]] 00:30:05.595 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:05.595 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:05.595 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:05.595 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:05.595 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:05.595 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:05.595 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:05.595 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:05.595 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:05.595 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:05.595 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:05.595 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:05.595 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:05.595 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:05.595 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:05.595 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:05.595 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:05.595 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:05.595 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:30:05.595 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:05.595 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:05.595 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:05.595 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:05.595 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:05.595 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:05.595 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:05.595 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:05.595 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:05.595 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:05.595 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:05.595 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:05.595 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:05.595 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:05.595 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:05.595 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:05.595 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:05.595 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:30:05.595 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:05.595 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:05.595 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:05.595 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:05.595 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:05.595 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:05.595 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:30:05.595 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:05.595 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:05.595 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:05.595 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:05.595 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:05.595 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:05.595 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:05.595 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:05.595 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:05.595 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:05.595 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:05.595 11:58:31 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:05.595 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:05.595 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:05.595 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:05.595 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:05.595 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:05.595 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:05.595 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:05.595 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:05.595 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:05.595 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:05.595 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:05.595 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:05.595 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:05.595 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:05.595 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:30:05.595 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.332 ms 00:30:05.595 00:30:05.595 --- 10.0.0.2 ping statistics --- 00:30:05.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:05.595 rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms 00:30:05.595 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:05.595 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:05.595 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms 00:30:05.595 00:30:05.595 --- 10.0.0.1 ping statistics --- 00:30:05.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:05.595 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:30:05.595 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:05.595 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:30:05.595 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:05.595 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:05.595 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:05.595 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:05.596 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:05.596 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:05.596 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:05.596 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:30:05.596 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:05.596 11:58:31 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:05.596 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:05.596 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=3061794 00:30:05.596 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:30:05.596 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 3061794 00:30:05.596 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 3061794 ']' 00:30:05.596 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:05.596 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:05.596 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:05.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:05.596 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:05.596 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:05.596 [2024-11-18 11:58:31.446988] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:30:05.596 [2024-11-18 11:58:31.447137] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:05.856 [2024-11-18 11:58:31.592120] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:05.856 [2024-11-18 11:58:31.721843] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:05.856 [2024-11-18 11:58:31.721942] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:05.856 [2024-11-18 11:58:31.721969] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:05.856 [2024-11-18 11:58:31.721994] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:05.856 [2024-11-18 11:58:31.722012] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:05.856 [2024-11-18 11:58:31.723658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:06.793 11:58:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:06.793 11:58:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:30:06.793 11:58:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:06.793 11:58:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:06.793 11:58:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:06.793 11:58:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:06.793 11:58:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:30:06.793 11:58:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=3061942 00:30:06.793 11:58:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:30:06.793 11:58:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:30:06.794 11:58:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:30:06.794 11:58:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:30:06.794 11:58:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:06.794 11:58:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:06.794 11:58:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:06.794 11:58:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:06.794 
11:58:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:06.794 11:58:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:06.794 11:58:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:06.794 11:58:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:06.794 11:58:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:06.794 11:58:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:30:06.794 11:58:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:30:06.794 11:58:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=dc285880-7c85-4ed9-b40d-da5dc608f814 00:30:06.794 11:58:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:30:06.794 11:58:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=51fd3c2a-c2a6-4786-b9c4-2cbc5901dc06 00:30:06.794 11:58:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:30:06.794 11:58:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=b0b1b5ee-d775-4fb4-9abc-a983a06b4931 00:30:06.794 11:58:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:30:06.794 11:58:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.794 11:58:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:06.794 null0 00:30:06.794 null1 00:30:06.794 null2 00:30:06.794 [2024-11-18 11:58:32.481743] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:06.794 [2024-11-18 11:58:32.506051] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:06.794 11:58:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:30:06.794 11:58:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 3061942 /var/tmp/tgt2.sock 00:30:06.794 11:58:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 3061942 ']' 00:30:06.794 11:58:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:30:06.794 11:58:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:06.794 11:58:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:30:06.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 00:30:06.794 11:58:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:06.794 11:58:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:06.794 [2024-11-18 11:58:32.548350] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:30:06.794 [2024-11-18 11:58:32.548508] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3061942 ] 00:30:07.054 [2024-11-18 11:58:32.703044] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:07.054 [2024-11-18 11:58:32.835029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:07.992 11:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:07.992 11:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:30:07.992 11:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:30:08.562 [2024-11-18 11:58:34.155690] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:08.562 [2024-11-18 11:58:34.172024] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:30:08.562 nvme0n1 nvme0n2 00:30:08.562 nvme1n1 00:30:08.562 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:30:08.562 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:30:08.562 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:09.131 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:30:09.131 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:30:09.131 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ 
nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:30:09.131 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:30:09.131 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:30:09.131 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:30:09.131 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:30:09.131 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:30:09.131 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:30:09.131 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:30:09.131 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:30:09.131 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:30:09.131 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:30:10.067 11:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:30:10.067 11:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:30:10.067 11:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:30:10.067 11:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:30:10.067 11:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:30:10.067 11:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid dc285880-7c85-4ed9-b40d-da5dc608f814 00:30:10.067 11:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:30:10.067 11:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:30:10.067 11:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:30:10.067 11:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:30:10.067 11:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:30:10.067 11:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=dc2858807c854ed9b40dda5dc608f814 00:30:10.067 11:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo DC2858807C854ED9B40DDA5DC608F814 00:30:10.067 11:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ DC2858807C854ED9B40DDA5DC608F814 == \D\C\2\8\5\8\8\0\7\C\8\5\4\E\D\9\B\4\0\D\D\A\5\D\C\6\0\8\F\8\1\4 ]] 00:30:10.067 11:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:30:10.067 11:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:30:10.067 11:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:30:10.067 11:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:30:10.067 11:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:30:10.067 11:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:30:10.067 11:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:30:10.067 11:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 51fd3c2a-c2a6-4786-b9c4-2cbc5901dc06 00:30:10.067 11:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:30:10.067 11:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:30:10.067 11:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:30:10.067 11:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:30:10.067 11:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:30:10.067 11:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=51fd3c2ac2a64786b9c42cbc5901dc06 00:30:10.067 11:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 51FD3C2AC2A64786B9C42CBC5901DC06 00:30:10.067 11:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 51FD3C2AC2A64786B9C42CBC5901DC06 == \5\1\F\D\3\C\2\A\C\2\A\6\4\7\8\6\B\9\C\4\2\C\B\C\5\9\0\1\D\C\0\6 ]] 00:30:10.067 11:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:30:10.067 11:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:30:10.067 11:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:30:10.067 11:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:30:10.067 11:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:30:10.067 11:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:30:10.067 11:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:30:10.067 11:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid b0b1b5ee-d775-4fb4-9abc-a983a06b4931 00:30:10.067 11:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:30:10.067 11:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:30:10.067 11:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:30:10.067 11:58:35 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:30:10.067 11:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:30:10.327 11:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=b0b1b5eed7754fb49abca983a06b4931 00:30:10.327 11:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo B0B1B5EED7754FB49ABCA983A06B4931 00:30:10.327 11:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ B0B1B5EED7754FB49ABCA983A06B4931 == \B\0\B\1\B\5\E\E\D\7\7\5\4\F\B\4\9\A\B\C\A\9\8\3\A\0\6\B\4\9\3\1 ]] 00:30:10.327 11:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:30:10.585 11:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:30:10.585 11:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:30:10.585 11:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 3061942 00:30:10.585 11:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 3061942 ']' 00:30:10.585 11:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 3061942 00:30:10.585 11:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:30:10.585 11:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:10.585 11:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3061942 00:30:10.585 11:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:10.585 11:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:10.585 11:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 3061942' 00:30:10.585 killing process with pid 3061942 00:30:10.585 11:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 3061942 00:30:10.585 11:58:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 3061942 00:30:13.120 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:30:13.120 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:13.121 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:30:13.121 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:13.121 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:30:13.121 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:13.121 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:13.121 rmmod nvme_tcp 00:30:13.121 rmmod nvme_fabrics 00:30:13.121 rmmod nvme_keyring 00:30:13.121 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:13.121 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:30:13.121 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:30:13.121 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 3061794 ']' 00:30:13.121 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 3061794 00:30:13.121 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 3061794 ']' 00:30:13.121 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 3061794 00:30:13.121 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:30:13.121 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:13.121 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3061794 00:30:13.121 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:13.121 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:13.121 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3061794' 00:30:13.121 killing process with pid 3061794 00:30:13.121 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 3061794 00:30:13.121 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 3061794 00:30:14.059 11:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:14.059 11:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:14.059 11:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:14.059 11:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:30:14.059 11:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:30:14.059 11:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:14.059 11:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:30:14.059 11:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:14.059 11:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:14.059 11:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:14.059 11:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:14.059 11:58:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:15.967 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:15.967 00:30:15.967 real 0m12.676s 00:30:15.967 user 0m15.627s 00:30:15.967 sys 0m2.983s 00:30:15.967 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:15.967 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:15.967 ************************************ 00:30:15.967 END TEST nvmf_nsid 00:30:15.967 ************************************ 00:30:15.967 11:58:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:30:15.967 00:30:15.967 real 18m40.183s 00:30:15.967 user 51m22.056s 00:30:15.967 sys 3m34.580s 00:30:15.967 11:58:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:15.967 11:58:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:30:15.967 ************************************ 00:30:15.967 END TEST nvmf_target_extra 00:30:15.967 ************************************ 00:30:15.967 11:58:41 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:30:15.967 11:58:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:15.967 11:58:41 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:15.967 11:58:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:15.967 ************************************ 00:30:15.967 START TEST nvmf_host 00:30:15.967 ************************************ 00:30:15.967 11:58:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 
00:30:16.226 * Looking for test storage... 00:30:16.226 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:30:16.226 11:58:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:16.226 11:58:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:30:16.226 11:58:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:16.226 11:58:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:16.226 11:58:41 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:16.226 11:58:41 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:16.226 11:58:41 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:16.226 11:58:41 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:30:16.226 11:58:41 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:30:16.226 11:58:41 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:30:16.226 11:58:41 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:30:16.226 11:58:41 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:30:16.226 11:58:41 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:30:16.226 11:58:41 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:30:16.226 11:58:41 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:16.226 11:58:41 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:30:16.226 11:58:41 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:30:16.226 11:58:41 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:16.226 11:58:41 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:16.226 11:58:41 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:30:16.226 11:58:41 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:30:16.226 11:58:41 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:16.226 11:58:41 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:30:16.226 11:58:41 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:30:16.226 11:58:41 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:30:16.226 11:58:41 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:30:16.226 11:58:41 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:16.226 11:58:41 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:30:16.226 11:58:41 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:30:16.226 11:58:41 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:16.226 11:58:41 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:16.226 11:58:41 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:30:16.226 11:58:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:16.226 11:58:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:16.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:16.226 --rc genhtml_branch_coverage=1 00:30:16.226 --rc genhtml_function_coverage=1 00:30:16.226 --rc genhtml_legend=1 00:30:16.226 --rc geninfo_all_blocks=1 00:30:16.226 --rc geninfo_unexecuted_blocks=1 00:30:16.226 00:30:16.226 ' 00:30:16.226 11:58:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:16.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:16.226 --rc genhtml_branch_coverage=1 00:30:16.226 --rc genhtml_function_coverage=1 00:30:16.226 --rc genhtml_legend=1 00:30:16.226 --rc 
geninfo_all_blocks=1 00:30:16.226 --rc geninfo_unexecuted_blocks=1 00:30:16.226 00:30:16.226 ' 00:30:16.226 11:58:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:16.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:16.226 --rc genhtml_branch_coverage=1 00:30:16.226 --rc genhtml_function_coverage=1 00:30:16.226 --rc genhtml_legend=1 00:30:16.226 --rc geninfo_all_blocks=1 00:30:16.226 --rc geninfo_unexecuted_blocks=1 00:30:16.226 00:30:16.226 ' 00:30:16.226 11:58:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:16.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:16.226 --rc genhtml_branch_coverage=1 00:30:16.226 --rc genhtml_function_coverage=1 00:30:16.226 --rc genhtml_legend=1 00:30:16.226 --rc geninfo_all_blocks=1 00:30:16.226 --rc geninfo_unexecuted_blocks=1 00:30:16.226 00:30:16.226 ' 00:30:16.226 11:58:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:16.226 11:58:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:30:16.226 11:58:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:16.226 11:58:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:16.226 11:58:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:16.226 11:58:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:16.226 11:58:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:16.226 11:58:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:16.226 11:58:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:16.226 11:58:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:16.226 11:58:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:16.226 11:58:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 
-- # nvme gen-hostnqn 00:30:16.226 11:58:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:16.226 11:58:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:16.226 11:58:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:16.226 11:58:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:16.226 11:58:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:16.226 11:58:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:16.226 11:58:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:16.226 11:58:41 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:30:16.226 11:58:41 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:16.226 11:58:41 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:16.226 11:58:41 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:16.227 11:58:41 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:16.227 11:58:41 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:16.227 11:58:41 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:16.227 11:58:41 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:30:16.227 11:58:41 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:16.227 11:58:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:30:16.227 11:58:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:16.227 11:58:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:16.227 11:58:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:16.227 11:58:41 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:16.227 11:58:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:16.227 11:58:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:16.227 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:16.227 11:58:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:16.227 11:58:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:16.227 11:58:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:16.227 11:58:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:30:16.227 11:58:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:30:16.227 11:58:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:30:16.227 11:58:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:30:16.227 11:58:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:16.227 11:58:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:16.227 11:58:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:16.227 ************************************ 00:30:16.227 START TEST nvmf_multicontroller 00:30:16.227 ************************************ 00:30:16.227 11:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:30:16.227 * Looking for test storage... 
00:30:16.227 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:16.227 11:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:16.227 11:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lcov --version 00:30:16.227 11:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:16.486 11:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:16.486 11:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:16.486 11:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:16.486 11:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:16.486 11:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:30:16.486 11:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:30:16.486 11:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:30:16.486 11:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:30:16.486 11:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:30:16.486 11:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:30:16.487 11:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:30:16.487 11:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:16.487 11:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:30:16.487 11:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:30:16.487 11:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:30:16.487 11:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:16.487 11:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:30:16.487 11:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:30:16.487 11:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:16.487 11:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:30:16.487 11:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:30:16.487 11:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:30:16.487 11:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:30:16.487 11:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:16.487 11:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:30:16.487 11:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:30:16.487 11:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:16.487 11:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:16.487 11:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:30:16.487 11:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:16.487 11:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:16.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:16.487 --rc genhtml_branch_coverage=1 00:30:16.487 --rc genhtml_function_coverage=1 
00:30:16.487 --rc genhtml_legend=1 00:30:16.487 --rc geninfo_all_blocks=1 00:30:16.487 --rc geninfo_unexecuted_blocks=1 00:30:16.487 00:30:16.487 ' 00:30:16.487 11:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:16.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:16.487 --rc genhtml_branch_coverage=1 00:30:16.487 --rc genhtml_function_coverage=1 00:30:16.487 --rc genhtml_legend=1 00:30:16.487 --rc geninfo_all_blocks=1 00:30:16.487 --rc geninfo_unexecuted_blocks=1 00:30:16.487 00:30:16.487 ' 00:30:16.487 11:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:16.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:16.487 --rc genhtml_branch_coverage=1 00:30:16.487 --rc genhtml_function_coverage=1 00:30:16.487 --rc genhtml_legend=1 00:30:16.487 --rc geninfo_all_blocks=1 00:30:16.487 --rc geninfo_unexecuted_blocks=1 00:30:16.487 00:30:16.487 ' 00:30:16.487 11:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:16.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:16.487 --rc genhtml_branch_coverage=1 00:30:16.487 --rc genhtml_function_coverage=1 00:30:16.487 --rc genhtml_legend=1 00:30:16.487 --rc geninfo_all_blocks=1 00:30:16.487 --rc geninfo_unexecuted_blocks=1 00:30:16.487 00:30:16.487 ' 00:30:16.487 11:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:16.487 11:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:30:16.487 11:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:16.487 11:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:16.487 11:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:30:16.487 11:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:16.487 11:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:16.487 11:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:16.487 11:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:16.487 11:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:16.487 11:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:16.487 11:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:16.487 11:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:16.487 11:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:16.487 11:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:16.487 11:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:16.487 11:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:16.487 11:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:16.487 11:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:16.487 11:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:30:16.487 11:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:30:16.487 11:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:16.487 11:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:16.487 11:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:16.487 11:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:16.487 11:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:16.487 11:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:30:16.487 11:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:16.487 11:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:30:16.487 11:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:16.487 11:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:16.487 11:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:16.487 11:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:16.487 11:58:42 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:16.487 11:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:16.487 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:16.487 11:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:16.487 11:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:16.487 11:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:16.487 11:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:16.487 11:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:16.487 11:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:30:16.487 11:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:30:16.487 11:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:16.487 11:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:30:16.487 11:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:30:16.487 11:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:16.487 11:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:16.487 11:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:16.487 11:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:16.487 11:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@440 -- # remove_spdk_ns 00:30:16.487 11:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:16.487 11:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:16.487 11:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:16.487 11:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:16.488 11:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:16.488 11:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:30:16.488 11:58:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:19.052 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:19.052 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:30:19.052 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:19.052 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:19.052 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:19.052 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:19.052 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:19.052 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:30:19.052 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:19.052 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:30:19.052 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@320 -- # local -ga e810 00:30:19.052 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:30:19.052 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:30:19.052 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:30:19.052 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:30:19.052 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:19.052 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:19.052 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:19.052 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:19.052 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:19.052 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:19.052 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:19.052 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:19.052 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:19.052 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:19.052 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:19.052 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:19.052 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:19.052 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:19.052 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:19.052 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:19.052 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:19.052 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:19.052 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:19.052 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:19.052 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:19.052 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:19.052 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:19.052 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:19.052 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:19.052 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:19.052 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:19.052 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:19.052 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:19.052 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:19.052 11:58:44 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:19.052 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:19.052 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:19.052 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:19.052 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:19.052 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:19.052 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:19.052 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:19.052 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:19.052 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:19.052 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:19.052 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:19.052 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:19.052 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:19.052 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:19.052 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:19.052 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:19.052 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:30:19.052 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:19.052 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:19.052 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:19.052 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:19.052 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:19.052 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:19.052 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:19.052 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:19.052 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:19.052 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:19.052 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:30:19.052 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:19.052 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:19.052 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:19.052 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:19.052 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:19.052 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:19.052 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:19.052 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:19.052 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:19.052 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:19.052 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:19.052 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:19.052 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:19.052 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:19.052 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:19.052 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:19.052 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:19.052 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:19.052 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:19.052 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:19.052 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:19.052 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:19.052 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:19.052 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:19.052 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:19.052 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:19.052 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:19.052 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.232 ms 00:30:19.052 00:30:19.052 --- 10.0.0.2 ping statistics --- 00:30:19.052 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:19.052 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:30:19.053 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:19.053 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:19.053 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:30:19.053 00:30:19.053 --- 10.0.0.1 ping statistics --- 00:30:19.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:19.053 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:30:19.053 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:19.053 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:30:19.053 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:19.053 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:19.053 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:19.053 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:19.053 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:19.053 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:19.053 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:19.053 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:30:19.053 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:19.053 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:19.053 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:19.053 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=3064781 00:30:19.053 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:19.053 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 3064781 00:30:19.053 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 3064781 ']' 00:30:19.053 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:19.053 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:19.053 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:19.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:19.053 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:19.053 11:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:19.053 [2024-11-18 11:58:44.560923] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:30:19.053 [2024-11-18 11:58:44.561083] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:19.053 [2024-11-18 11:58:44.709969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:19.053 [2024-11-18 11:58:44.850972] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:19.053 [2024-11-18 11:58:44.851042] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:30:19.053 [2024-11-18 11:58:44.851067] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:19.053 [2024-11-18 11:58:44.851092] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:19.053 [2024-11-18 11:58:44.851111] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:19.053 [2024-11-18 11:58:44.853891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:19.053 [2024-11-18 11:58:44.853997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:19.053 [2024-11-18 11:58:44.853998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:19.643 11:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:19.643 11:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:30:19.643 11:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:19.643 11:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:19.643 11:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:19.901 11:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:19.901 11:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:19.901 11:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:19.901 11:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:19.901 [2024-11-18 11:58:45.538924] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:19.901 11:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:19.901 11:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:19.901 11:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:19.901 11:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:19.901 Malloc0 00:30:19.901 11:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:19.901 11:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:19.901 11:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:19.901 11:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:19.901 11:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:19.901 11:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:19.902 11:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:19.902 11:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:19.902 11:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:19.902 11:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:19.902 11:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:19.902 11:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:19.902 [2024-11-18 
11:58:45.650341] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:19.902 11:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:19.902 11:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:19.902 11:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:19.902 11:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:19.902 [2024-11-18 11:58:45.658232] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:19.902 11:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:19.902 11:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:30:19.902 11:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:19.902 11:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:19.902 Malloc1 00:30:19.902 11:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:19.902 11:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:30:19.902 11:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:19.902 11:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:19.902 11:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:19.902 11:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:30:19.902 11:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:19.902 11:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:19.902 11:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:19.902 11:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:19.902 11:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:19.902 11:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:19.902 11:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:19.902 11:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:30:19.902 11:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:19.902 11:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:19.902 11:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:19.902 11:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=3064950 00:30:19.902 11:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:30:19.902 11:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' 
SIGINT SIGTERM EXIT 00:30:19.902 11:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 3064950 /var/tmp/bdevperf.sock 00:30:19.902 11:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 3064950 ']' 00:30:19.902 11:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:19.902 11:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:19.902 11:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:19.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:19.902 11:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:19.902 11:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:21.280 11:58:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:21.280 11:58:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:30:21.280 11:58:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:30:21.280 11:58:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:21.280 11:58:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:21.280 NVMe0n1 00:30:21.280 11:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:21.280 11:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:21.280 11:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:30:21.280 11:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:21.280 11:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:21.280 11:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:21.280 1 00:30:21.280 11:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:30:21.281 11:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:30:21.281 11:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:30:21.281 11:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:30:21.281 11:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:21.281 11:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:30:21.281 11:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:21.281 11:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:30:21.281 11:58:47 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:21.281 11:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:21.281 request: 00:30:21.281 { 00:30:21.281 "name": "NVMe0", 00:30:21.281 "trtype": "tcp", 00:30:21.281 "traddr": "10.0.0.2", 00:30:21.281 "adrfam": "ipv4", 00:30:21.281 "trsvcid": "4420", 00:30:21.281 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:21.281 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:30:21.281 "hostaddr": "10.0.0.1", 00:30:21.281 "prchk_reftag": false, 00:30:21.281 "prchk_guard": false, 00:30:21.281 "hdgst": false, 00:30:21.281 "ddgst": false, 00:30:21.281 "allow_unrecognized_csi": false, 00:30:21.281 "method": "bdev_nvme_attach_controller", 00:30:21.281 "req_id": 1 00:30:21.281 } 00:30:21.281 Got JSON-RPC error response 00:30:21.281 response: 00:30:21.281 { 00:30:21.281 "code": -114, 00:30:21.281 "message": "A controller named NVMe0 already exists with the specified network path" 00:30:21.281 } 00:30:21.281 11:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:21.281 11:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:30:21.281 11:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:21.281 11:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:21.281 11:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:21.281 11:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:30:21.281 11:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:30:21.281 11:58:47 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:30:21.281 11:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:30:21.281 11:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:21.281 11:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:30:21.281 11:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:21.281 11:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:30:21.281 11:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:21.281 11:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:21.281 request: 00:30:21.281 { 00:30:21.281 "name": "NVMe0", 00:30:21.281 "trtype": "tcp", 00:30:21.281 "traddr": "10.0.0.2", 00:30:21.281 "adrfam": "ipv4", 00:30:21.281 "trsvcid": "4420", 00:30:21.281 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:21.281 "hostaddr": "10.0.0.1", 00:30:21.281 "prchk_reftag": false, 00:30:21.281 "prchk_guard": false, 00:30:21.281 "hdgst": false, 00:30:21.281 "ddgst": false, 00:30:21.281 "allow_unrecognized_csi": false, 00:30:21.281 "method": "bdev_nvme_attach_controller", 00:30:21.281 "req_id": 1 00:30:21.281 } 00:30:21.281 Got JSON-RPC error response 00:30:21.281 response: 00:30:21.281 { 00:30:21.281 "code": -114, 00:30:21.281 "message": "A controller named NVMe0 already exists with the specified network path" 00:30:21.281 } 00:30:21.281 11:58:47 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:21.281 11:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:30:21.281 11:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:21.281 11:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:21.281 11:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:21.281 11:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:30:21.281 11:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:30:21.281 11:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:30:21.281 11:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:30:21.281 11:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:21.281 11:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:30:21.281 11:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:21.281 11:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:30:21.281 11:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:30:21.281 11:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:21.281 request: 00:30:21.281 { 00:30:21.281 "name": "NVMe0", 00:30:21.281 "trtype": "tcp", 00:30:21.281 "traddr": "10.0.0.2", 00:30:21.281 "adrfam": "ipv4", 00:30:21.281 "trsvcid": "4420", 00:30:21.281 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:21.281 "hostaddr": "10.0.0.1", 00:30:21.281 "prchk_reftag": false, 00:30:21.281 "prchk_guard": false, 00:30:21.281 "hdgst": false, 00:30:21.281 "ddgst": false, 00:30:21.281 "multipath": "disable", 00:30:21.281 "allow_unrecognized_csi": false, 00:30:21.281 "method": "bdev_nvme_attach_controller", 00:30:21.281 "req_id": 1 00:30:21.281 } 00:30:21.281 Got JSON-RPC error response 00:30:21.281 response: 00:30:21.281 { 00:30:21.281 "code": -114, 00:30:21.281 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:30:21.281 } 00:30:21.281 11:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:21.281 11:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:30:21.281 11:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:21.281 11:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:21.281 11:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:21.281 11:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:30:21.281 11:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:30:21.281 11:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:30:21.281 11:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:30:21.281 11:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:21.281 11:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:30:21.281 11:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:21.281 11:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:30:21.281 11:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:21.281 11:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:21.281 request: 00:30:21.281 { 00:30:21.281 "name": "NVMe0", 00:30:21.281 "trtype": "tcp", 00:30:21.281 "traddr": "10.0.0.2", 00:30:21.281 "adrfam": "ipv4", 00:30:21.281 "trsvcid": "4420", 00:30:21.281 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:21.281 "hostaddr": "10.0.0.1", 00:30:21.281 "prchk_reftag": false, 00:30:21.281 "prchk_guard": false, 00:30:21.281 "hdgst": false, 00:30:21.281 "ddgst": false, 00:30:21.281 "multipath": "failover", 00:30:21.281 "allow_unrecognized_csi": false, 00:30:21.281 "method": "bdev_nvme_attach_controller", 00:30:21.281 "req_id": 1 00:30:21.281 } 00:30:21.281 Got JSON-RPC error response 00:30:21.281 response: 00:30:21.281 { 00:30:21.281 "code": -114, 00:30:21.281 "message": "A controller named NVMe0 already exists with the specified network path" 00:30:21.281 } 00:30:21.281 11:58:47 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:21.281 11:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:30:21.281 11:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:21.282 11:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:21.282 11:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:21.282 11:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:21.282 11:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:21.282 11:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:21.541 NVMe0n1 00:30:21.541 11:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:21.541 11:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:21.541 11:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:21.541 11:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:21.541 11:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:21.541 11:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:30:21.541 11:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:30:21.541 11:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:21.541 00:30:21.541 11:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:21.800 11:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:21.800 11:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:30:21.800 11:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:21.800 11:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:21.800 11:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:21.800 11:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:30:21.800 11:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:22.734 { 00:30:22.734 "results": [ 00:30:22.734 { 00:30:22.734 "job": "NVMe0n1", 00:30:22.734 "core_mask": "0x1", 00:30:22.734 "workload": "write", 00:30:22.734 "status": "finished", 00:30:22.735 "queue_depth": 128, 00:30:22.735 "io_size": 4096, 00:30:22.735 "runtime": 1.00735, 00:30:22.735 "iops": 13113.61493026257, 00:30:22.735 "mibps": 51.22505832133817, 00:30:22.735 "io_failed": 0, 00:30:22.735 "io_timeout": 0, 00:30:22.735 "avg_latency_us": 9743.416931953907, 00:30:22.735 "min_latency_us": 2609.303703703704, 00:30:22.735 "max_latency_us": 18835.53185185185 00:30:22.735 } 00:30:22.735 ], 00:30:22.735 "core_count": 1 00:30:22.735 } 00:30:22.735 11:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_nvme_detach_controller NVMe1 00:30:22.735 11:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:22.735 11:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:22.735 11:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:22.735 11:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:30:22.735 11:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 3064950 00:30:22.735 11:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 3064950 ']' 00:30:22.735 11:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 3064950 00:30:22.735 11:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:30:22.735 11:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:22.735 11:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3064950 00:30:22.994 11:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:22.994 11:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:22.994 11:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3064950' 00:30:22.994 killing process with pid 3064950 00:30:22.994 11:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 3064950 00:30:22.995 11:58:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 3064950 00:30:23.930 11:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:30:23.930 11:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:23.930 11:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:23.930 11:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:23.930 11:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:30:23.930 11:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:23.930 11:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:23.930 11:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:23.930 11:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:30:23.930 11:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:23.930 11:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:30:23.930 11:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:30:23.930 11:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:30:23.930 11:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:30:23.930 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:30:23.930 [2024-11-18 11:58:45.850187] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:30:23.930 [2024-11-18 11:58:45.850334] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3064950 ] 00:30:23.930 [2024-11-18 11:58:45.988382] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:23.930 [2024-11-18 11:58:46.116327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:23.930 [2024-11-18 11:58:47.423308] bdev.c:4691:bdev_name_add: *ERROR*: Bdev name 09b8e82f-0879-4770-8767-75d26b94db40 already exists 00:30:23.930 [2024-11-18 11:58:47.423375] bdev.c:7842:bdev_register: *ERROR*: Unable to add uuid:09b8e82f-0879-4770-8767-75d26b94db40 alias for bdev NVMe1n1 00:30:23.930 [2024-11-18 11:58:47.423409] bdev_nvme.c:4658:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:30:23.930 Running I/O for 1 seconds... 00:30:23.930 13082.00 IOPS, 51.10 MiB/s 00:30:23.930 Latency(us) 00:30:23.930 [2024-11-18T10:58:49.815Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:23.930 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:30:23.930 NVMe0n1 : 1.01 13113.61 51.23 0.00 0.00 9743.42 2609.30 18835.53 00:30:23.930 [2024-11-18T10:58:49.815Z] =================================================================================================================== 00:30:23.930 [2024-11-18T10:58:49.815Z] Total : 13113.61 51.23 0.00 0.00 9743.42 2609.30 18835.53 00:30:23.930 Received shutdown signal, test time was about 1.000000 seconds 00:30:23.930 00:30:23.930 Latency(us) 00:30:23.930 [2024-11-18T10:58:49.815Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:23.930 [2024-11-18T10:58:49.815Z] =================================================================================================================== 00:30:23.930 [2024-11-18T10:58:49.815Z] Total : 0.00 0.00 0.00 
0.00 0.00 0.00 0.00 00:30:23.930 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:30:23.930 11:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:23.930 11:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:30:23.930 11:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:30:23.930 11:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:23.930 11:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:30:23.930 11:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:23.930 11:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:30:23.930 11:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:23.930 11:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:23.930 rmmod nvme_tcp 00:30:23.930 rmmod nvme_fabrics 00:30:23.930 rmmod nvme_keyring 00:30:23.930 11:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:23.930 11:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:30:23.930 11:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:30:23.930 11:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 3064781 ']' 00:30:23.930 11:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 3064781 00:30:23.930 11:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 3064781 ']' 00:30:23.930 11:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 3064781 
00:30:23.930 11:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:30:23.930 11:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:23.930 11:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3064781 00:30:23.930 11:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:23.930 11:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:23.930 11:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3064781' 00:30:23.930 killing process with pid 3064781 00:30:23.930 11:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 3064781 00:30:23.930 11:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 3064781 00:30:25.308 11:58:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:25.308 11:58:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:25.308 11:58:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:25.309 11:58:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:30:25.309 11:58:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:30:25.309 11:58:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:30:25.309 11:58:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:25.309 11:58:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:25.309 11:58:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:30:25.309 11:58:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:25.309 11:58:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:25.309 11:58:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:27.216 11:58:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:27.216 00:30:27.216 real 0m10.997s 00:30:27.216 user 0m22.527s 00:30:27.216 sys 0m2.799s 00:30:27.216 11:58:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:27.216 11:58:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:27.216 ************************************ 00:30:27.216 END TEST nvmf_multicontroller 00:30:27.216 ************************************ 00:30:27.216 11:58:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:30:27.216 11:58:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:27.216 11:58:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:27.216 11:58:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:27.216 ************************************ 00:30:27.216 START TEST nvmf_aer 00:30:27.216 ************************************ 00:30:27.216 11:58:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:30:27.475 * Looking for test storage... 
00:30:27.475 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:27.475 11:58:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:27.475 11:58:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lcov --version 00:30:27.475 11:58:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:27.475 11:58:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:27.475 11:58:53 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:27.475 11:58:53 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:27.475 11:58:53 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:27.475 11:58:53 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:30:27.475 11:58:53 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:30:27.475 11:58:53 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:30:27.475 11:58:53 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:30:27.475 11:58:53 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:30:27.475 11:58:53 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:30:27.475 11:58:53 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:30:27.475 11:58:53 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:27.475 11:58:53 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:30:27.475 11:58:53 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:30:27.475 11:58:53 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:27.475 11:58:53 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:27.475 11:58:53 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:30:27.475 11:58:53 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:30:27.475 11:58:53 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:27.475 11:58:53 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:30:27.475 11:58:53 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:30:27.475 11:58:53 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:30:27.475 11:58:53 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:30:27.475 11:58:53 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:27.475 11:58:53 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:30:27.476 11:58:53 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:30:27.476 11:58:53 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:27.476 11:58:53 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:27.476 11:58:53 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:30:27.476 11:58:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:27.476 11:58:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:27.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:27.476 --rc genhtml_branch_coverage=1 00:30:27.476 --rc genhtml_function_coverage=1 00:30:27.476 --rc genhtml_legend=1 00:30:27.476 --rc geninfo_all_blocks=1 00:30:27.476 --rc geninfo_unexecuted_blocks=1 00:30:27.476 00:30:27.476 ' 00:30:27.476 11:58:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:27.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:27.476 --rc 
genhtml_branch_coverage=1 00:30:27.476 --rc genhtml_function_coverage=1 00:30:27.476 --rc genhtml_legend=1 00:30:27.476 --rc geninfo_all_blocks=1 00:30:27.476 --rc geninfo_unexecuted_blocks=1 00:30:27.476 00:30:27.476 ' 00:30:27.476 11:58:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:27.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:27.476 --rc genhtml_branch_coverage=1 00:30:27.476 --rc genhtml_function_coverage=1 00:30:27.476 --rc genhtml_legend=1 00:30:27.476 --rc geninfo_all_blocks=1 00:30:27.476 --rc geninfo_unexecuted_blocks=1 00:30:27.476 00:30:27.476 ' 00:30:27.476 11:58:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:27.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:27.476 --rc genhtml_branch_coverage=1 00:30:27.476 --rc genhtml_function_coverage=1 00:30:27.476 --rc genhtml_legend=1 00:30:27.476 --rc geninfo_all_blocks=1 00:30:27.476 --rc geninfo_unexecuted_blocks=1 00:30:27.476 00:30:27.476 ' 00:30:27.476 11:58:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:27.476 11:58:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:30:27.476 11:58:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:27.476 11:58:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:27.476 11:58:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:27.476 11:58:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:27.476 11:58:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:27.476 11:58:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:27.476 11:58:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:27.476 11:58:53 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:27.476 11:58:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:27.476 11:58:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:27.476 11:58:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:27.476 11:58:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:27.476 11:58:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:27.476 11:58:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:27.476 11:58:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:27.476 11:58:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:27.476 11:58:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:27.476 11:58:53 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:30:27.476 11:58:53 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:27.476 11:58:53 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:27.476 11:58:53 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:27.476 11:58:53 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.476 11:58:53 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.476 11:58:53 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.476 11:58:53 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:30:27.476 11:58:53 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.476 11:58:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:30:27.476 11:58:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:27.476 11:58:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:27.476 11:58:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:27.476 11:58:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:27.476 11:58:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:27.476 11:58:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:27.476 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:27.476 11:58:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:27.476 11:58:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:27.476 11:58:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:27.476 11:58:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:30:27.476 11:58:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:27.476 11:58:53 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:27.476 11:58:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:27.476 11:58:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:27.476 11:58:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:27.476 11:58:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:27.476 11:58:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:27.476 11:58:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:27.476 11:58:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:27.476 11:58:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:27.476 11:58:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:30:27.476 11:58:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:29.388 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:29.388 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:30:29.388 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:29.388 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:29.388 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:29.388 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:29.388 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:29.388 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:30:29.388 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:29.388 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@320 -- # e810=() 00:30:29.388 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:30:29.388 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:30:29.388 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:30:29.388 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:30:29.388 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:30:29.388 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:29.388 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:29.388 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:29.388 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:29.388 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:29.388 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:29.388 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:29.388 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:29.388 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:29.388 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:29.388 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:29.388 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:29.388 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- 
# pci_devs+=("${e810[@]}") 00:30:29.388 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:29.388 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:29.388 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:29.388 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:29.388 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:29.388 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:29.388 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:29.388 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:29.388 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:29.388 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:29.388 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:29.388 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:29.388 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:29.388 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:29.388 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:29.388 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:29.388 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:29.388 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:29.388 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:29.388 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:29.388 11:58:55 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:29.388 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:29.388 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:29.388 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:29.388 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:29.388 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:29.388 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:29.388 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:29.389 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:29.389 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:29.389 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:29.389 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:29.389 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:29.389 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:29.389 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:29.389 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:29.389 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:29.389 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:29.389 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:29.389 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # 
(( 1 == 0 )) 00:30:29.389 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:29.389 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:29.389 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:29.389 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:29.389 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:29.389 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:30:29.389 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:29.389 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:29.389 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:29.389 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:29.389 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:29.389 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:29.389 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:29.389 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:29.389 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:29.389 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:29.389 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:29.389 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:29.389 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:29.389 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:29.389 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:29.389 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:29.389 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:29.389 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:29.389 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:29.389 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:29.647 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:29.647 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:29.647 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:29.647 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:29.647 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:29.647 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:29.647 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
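Earlier in this trace, common.sh line 33 logged `[: : integer expression expected` because it ran `'[' '' -eq 1 ']'` with an empty operand. A hedged sketch of the usual guard, with a hypothetical `flag` variable standing in for the unset value:

```shell
#!/usr/bin/env bash
# `[ "$var" -eq 1 ]` is not a well-formed integer test when var is
# empty, which is what produced the common.sh@33 warning in the trace.
# Defaulting the expansion keeps the comparison valid either way.
flag=""                                 # empty, as in the failing trace
if [ "${flag:-0}" -eq 1 ]; then
    msg="flag set"
else
    msg="flag unset"
fi
echo "$msg"                             # prints "flag unset"
```

The `${flag:-0}` expansion substitutes 0 when `flag` is unset or empty, so `[` always sees two integers.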
00:30:29.647 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:30:29.647 00:30:29.647 --- 10.0.0.2 ping statistics --- 00:30:29.647 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:29.648 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:30:29.648 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:29.648 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:29.648 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:30:29.648 00:30:29.648 --- 10.0.0.1 ping statistics --- 00:30:29.648 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:29.648 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:30:29.648 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:29.648 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:30:29.648 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:29.648 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:29.648 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:29.648 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:29.648 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:29.648 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:29.648 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:29.648 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:30:29.648 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:29.648 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:29.648 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@10 -- # set +x 00:30:29.648 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=3067549 00:30:29.648 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:29.648 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 3067549 00:30:29.648 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 3067549 ']' 00:30:29.648 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:29.648 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:29.648 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:29.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:29.648 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:29.648 11:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:29.648 [2024-11-18 11:58:55.441875] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:30:29.648 [2024-11-18 11:58:55.442014] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:29.907 [2024-11-18 11:58:55.582740] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:29.907 [2024-11-18 11:58:55.708117] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
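nvmfappstart above launches nvmf_tgt in the background, records `nvmfpid`, and then waitforlisten polls until the RPC socket at /var/tmp/spdk.sock comes up. The same start-capture-poll shape can be sketched without SPDK, using a stand-in "daemon" that touches a ready file instead of opening a socket (all names here are hypothetical):

```shell
#!/usr/bin/env bash
# Start a background job, capture its pid, then poll with a bounded
# retry loop until its readiness marker appears -- mirroring the
# nvmfappstart/waitforlisten sequence in the trace above.
ready=$(mktemp -u)                      # path only; file not created yet
( sleep 0.2; touch "$ready" ) &         # stand-in for nvmf_tgt startup
apppid=$!

i=0
while [ ! -e "$ready" ] && [ "$i" -lt 100 ]; do
    sleep 0.1                           # same 0.1s cadence as waitforfile
    i=$((i + 1))
done
[ -e "$ready" ] && echo "app $apppid is ready"
rm -f "$ready"
```

Bounding the loop (here 100 iterations, roughly ten seconds) keeps a daemon that never comes up from hanging the whole test run.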
00:30:29.907 [2024-11-18 11:58:55.708208] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:29.907 [2024-11-18 11:58:55.708230] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:29.907 [2024-11-18 11:58:55.708251] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:29.907 [2024-11-18 11:58:55.708268] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:29.907 [2024-11-18 11:58:55.710753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:29.907 [2024-11-18 11:58:55.710815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:29.907 [2024-11-18 11:58:55.710860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:29.907 [2024-11-18 11:58:55.710881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:30.844 11:58:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:30.844 11:58:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:30:30.844 11:58:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:30.844 11:58:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:30.844 11:58:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:30.844 11:58:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:30.844 11:58:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:30.844 11:58:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:30.844 11:58:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:30.844 [2024-11-18 11:58:56.472913] 
tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:30.844 11:58:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:30.844 11:58:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:30:30.844 11:58:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:30.844 11:58:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:30.844 Malloc0 00:30:30.844 11:58:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:30.844 11:58:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:30:30.844 11:58:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:30.844 11:58:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:30.844 11:58:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:30.844 11:58:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:30.844 11:58:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:30.844 11:58:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:30.844 11:58:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:30.844 11:58:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:30.845 11:58:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:30.845 11:58:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:30.845 [2024-11-18 11:58:56.592435] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:30:30.845 11:58:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:30.845 11:58:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:30:30.845 11:58:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:30.845 11:58:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:30.845 [ 00:30:30.845 { 00:30:30.845 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:30.845 "subtype": "Discovery", 00:30:30.845 "listen_addresses": [], 00:30:30.845 "allow_any_host": true, 00:30:30.845 "hosts": [] 00:30:30.845 }, 00:30:30.845 { 00:30:30.845 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:30.845 "subtype": "NVMe", 00:30:30.845 "listen_addresses": [ 00:30:30.845 { 00:30:30.845 "trtype": "TCP", 00:30:30.845 "adrfam": "IPv4", 00:30:30.845 "traddr": "10.0.0.2", 00:30:30.845 "trsvcid": "4420" 00:30:30.845 } 00:30:30.845 ], 00:30:30.845 "allow_any_host": true, 00:30:30.845 "hosts": [], 00:30:30.845 "serial_number": "SPDK00000000000001", 00:30:30.845 "model_number": "SPDK bdev Controller", 00:30:30.845 "max_namespaces": 2, 00:30:30.845 "min_cntlid": 1, 00:30:30.845 "max_cntlid": 65519, 00:30:30.845 "namespaces": [ 00:30:30.845 { 00:30:30.845 "nsid": 1, 00:30:30.845 "bdev_name": "Malloc0", 00:30:30.845 "name": "Malloc0", 00:30:30.845 "nguid": "5AECFA9264F5441AB3F566782CD1698B", 00:30:30.845 "uuid": "5aecfa92-64f5-441a-b3f5-66782cd1698b" 00:30:30.845 } 00:30:30.845 ] 00:30:30.845 } 00:30:30.845 ] 00:30:30.845 11:58:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:30.845 11:58:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:30:30.845 11:58:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:30:30.845 11:58:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=3067702 00:30:30.845 11:58:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # 
waitforfile /tmp/aer_touch_file 00:30:30.845 11:58:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:30:30.845 11:58:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:30:30.845 11:58:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:30.845 11:58:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:30:30.845 11:58:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:30:30.845 11:58:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:30:30.845 11:58:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:30.845 11:58:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:30:30.845 11:58:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:30:30.845 11:58:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:30:31.104 11:58:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:31.104 11:58:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 2 -lt 200 ']' 00:30:31.104 11:58:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=3 00:30:31.104 11:58:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:30:31.104 11:58:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:30:31.104 11:58:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 3 -lt 200 ']' 00:30:31.104 11:58:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=4 00:30:31.104 11:58:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:30:31.365 11:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:31.365 11:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:31.365 11:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:30:31.365 11:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:30:31.365 11:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:31.365 11:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:31.365 Malloc1 00:30:31.365 11:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:31.365 11:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:30:31.365 11:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:31.365 11:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:31.365 11:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:31.365 11:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:30:31.365 11:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:31.365 11:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:31.365 [ 00:30:31.365 { 00:30:31.365 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:31.365 "subtype": "Discovery", 00:30:31.365 
"listen_addresses": [], 00:30:31.365 "allow_any_host": true, 00:30:31.365 "hosts": [] 00:30:31.365 }, 00:30:31.365 { 00:30:31.365 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:31.365 "subtype": "NVMe", 00:30:31.365 "listen_addresses": [ 00:30:31.365 { 00:30:31.365 "trtype": "TCP", 00:30:31.365 "adrfam": "IPv4", 00:30:31.365 "traddr": "10.0.0.2", 00:30:31.365 "trsvcid": "4420" 00:30:31.365 } 00:30:31.365 ], 00:30:31.365 "allow_any_host": true, 00:30:31.365 "hosts": [], 00:30:31.365 "serial_number": "SPDK00000000000001", 00:30:31.365 "model_number": "SPDK bdev Controller", 00:30:31.365 "max_namespaces": 2, 00:30:31.365 "min_cntlid": 1, 00:30:31.365 "max_cntlid": 65519, 00:30:31.365 "namespaces": [ 00:30:31.365 { 00:30:31.365 "nsid": 1, 00:30:31.365 "bdev_name": "Malloc0", 00:30:31.365 "name": "Malloc0", 00:30:31.365 "nguid": "5AECFA9264F5441AB3F566782CD1698B", 00:30:31.365 "uuid": "5aecfa92-64f5-441a-b3f5-66782cd1698b" 00:30:31.365 }, 00:30:31.365 { 00:30:31.365 "nsid": 2, 00:30:31.365 "bdev_name": "Malloc1", 00:30:31.365 "name": "Malloc1", 00:30:31.365 "nguid": "9359693BB215467CA1D79A4A9BEC310F", 00:30:31.365 "uuid": "9359693b-b215-467c-a1d7-9a4a9bec310f" 00:30:31.365 } 00:30:31.365 ] 00:30:31.365 } 00:30:31.365 ] 00:30:31.365 11:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:31.365 11:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 3067702 00:30:31.365 Asynchronous Event Request test 00:30:31.365 Attaching to 10.0.0.2 00:30:31.365 Attached to 10.0.0.2 00:30:31.365 Registering asynchronous event callbacks... 00:30:31.365 Starting namespace attribute notice tests for all controllers... 00:30:31.365 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:30:31.365 aer_cb - Changed Namespace 00:30:31.365 Cleaning up... 
00:30:31.624 11:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:30:31.624 11:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:31.624 11:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:31.624 11:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:31.624 11:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:30:31.624 11:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:31.624 11:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:31.883 11:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:31.883 11:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:31.883 11:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:31.883 11:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:31.883 11:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:31.883 11:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:30:31.883 11:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:30:31.883 11:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:31.883 11:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:30:31.883 11:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:31.883 11:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:30:31.883 11:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:31.883 11:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:31.883 rmmod nvme_tcp 
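The teardown continues below with killprocess, which probes the target pid with `kill -0` before signalling it. A simplified standalone sketch of that probe-then-kill shape (reaping with `wait`, which only works for the shell's own children):

```shell
#!/usr/bin/env bash
# Probe a pid with `kill -0` (signal 0 delivers nothing, it only
# checks existence), then terminate and reap it -- the same shape as
# the killprocess helper traced below, minus the SPDK-specific checks.
killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 1   # no such process
    kill "$pid"
    wait "$pid" 2>/dev/null                  # reap; ignore exit status
    return 0
}

sleep 30 &
pid=$!
killprocess "$pid" && echo "killed $pid"
killprocess "$pid" || echo "already gone"
```

The second call returns nonzero because the child was already reaped, which is why the helper can be used both to stop a process and to confirm it is gone.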
00:30:31.883 rmmod nvme_fabrics 00:30:31.883 rmmod nvme_keyring 00:30:31.883 11:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:31.883 11:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:30:31.883 11:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:30:31.883 11:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 3067549 ']' 00:30:31.883 11:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 3067549 00:30:31.883 11:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 3067549 ']' 00:30:31.883 11:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 3067549 00:30:31.883 11:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:30:31.883 11:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:31.883 11:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3067549 00:30:31.883 11:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:31.883 11:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:31.883 11:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3067549' 00:30:31.883 killing process with pid 3067549 00:30:31.883 11:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 3067549 00:30:31.883 11:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 3067549 00:30:33.260 11:58:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:33.260 11:58:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:33.260 11:58:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:33.260 11:58:58 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:30:33.260 11:58:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:30:33.260 11:58:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:33.260 11:58:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:30:33.260 11:58:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:33.260 11:58:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:33.260 11:58:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:33.260 11:58:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:33.260 11:58:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:35.164 11:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:35.164 00:30:35.164 real 0m7.826s 00:30:35.164 user 0m12.226s 00:30:35.164 sys 0m2.233s 00:30:35.164 11:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:35.164 11:59:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:35.164 ************************************ 00:30:35.164 END TEST nvmf_aer 00:30:35.164 ************************************ 00:30:35.164 11:59:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:30:35.164 11:59:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:35.164 11:59:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:35.164 11:59:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:35.164 ************************************ 00:30:35.164 START TEST nvmf_async_init 
00:30:35.164 ************************************ 00:30:35.164 11:59:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:30:35.164 * Looking for test storage... 00:30:35.164 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:35.164 11:59:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:35.164 11:59:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lcov --version 00:30:35.164 11:59:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:35.424 11:59:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:35.424 11:59:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:35.424 11:59:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:35.424 11:59:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:35.424 11:59:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:30:35.424 11:59:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:30:35.424 11:59:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:30:35.424 11:59:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:30:35.424 11:59:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:30:35.424 11:59:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:30:35.424 11:59:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:30:35.424 11:59:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:35.424 11:59:01 nvmf_tcp.nvmf_host.nvmf_async_init 
-- scripts/common.sh@344 -- # case "$op" in 00:30:35.424 11:59:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:30:35.424 11:59:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:35.424 11:59:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:35.424 11:59:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:30:35.424 11:59:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:30:35.424 11:59:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:35.424 11:59:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:30:35.424 11:59:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:30:35.424 11:59:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:30:35.424 11:59:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:30:35.424 11:59:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:35.424 11:59:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:30:35.424 11:59:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:30:35.424 11:59:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:35.424 11:59:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:35.424 11:59:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:30:35.424 11:59:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:35.424 11:59:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:35.424 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:30:35.424 --rc genhtml_branch_coverage=1 00:30:35.424 --rc genhtml_function_coverage=1 00:30:35.424 --rc genhtml_legend=1 00:30:35.424 --rc geninfo_all_blocks=1 00:30:35.424 --rc geninfo_unexecuted_blocks=1 00:30:35.424 00:30:35.424 ' 00:30:35.424 11:59:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:35.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:35.424 --rc genhtml_branch_coverage=1 00:30:35.424 --rc genhtml_function_coverage=1 00:30:35.424 --rc genhtml_legend=1 00:30:35.424 --rc geninfo_all_blocks=1 00:30:35.424 --rc geninfo_unexecuted_blocks=1 00:30:35.424 00:30:35.424 ' 00:30:35.424 11:59:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:35.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:35.424 --rc genhtml_branch_coverage=1 00:30:35.424 --rc genhtml_function_coverage=1 00:30:35.424 --rc genhtml_legend=1 00:30:35.424 --rc geninfo_all_blocks=1 00:30:35.424 --rc geninfo_unexecuted_blocks=1 00:30:35.424 00:30:35.424 ' 00:30:35.424 11:59:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:35.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:35.424 --rc genhtml_branch_coverage=1 00:30:35.424 --rc genhtml_function_coverage=1 00:30:35.424 --rc genhtml_legend=1 00:30:35.424 --rc geninfo_all_blocks=1 00:30:35.424 --rc geninfo_unexecuted_blocks=1 00:30:35.424 00:30:35.424 ' 00:30:35.424 11:59:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:35.424 11:59:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:30:35.424 11:59:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:35.424 11:59:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:35.424 11:59:01 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:35.424 11:59:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:35.424 11:59:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:35.424 11:59:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:35.424 11:59:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:35.424 11:59:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:35.424 11:59:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:35.424 11:59:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:35.424 11:59:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:35.424 11:59:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:35.424 11:59:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:35.424 11:59:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:35.424 11:59:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:35.424 11:59:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:35.424 11:59:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:35.424 11:59:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:30:35.424 11:59:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:35.424 
11:59:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:35.424 11:59:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:35.424 11:59:01 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:35.424 11:59:01 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:35.424 11:59:01 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:35.424 11:59:01 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:30:35.424 11:59:01 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:35.424 11:59:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:30:35.424 11:59:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:35.424 11:59:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:35.424 11:59:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:35.424 11:59:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:35.424 11:59:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:30:35.424 11:59:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:35.424 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:35.424 11:59:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:35.424 11:59:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:35.424 11:59:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:35.424 11:59:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:30:35.424 11:59:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:30:35.424 11:59:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:30:35.424 11:59:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:30:35.424 11:59:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:30:35.425 11:59:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:30:35.425 11:59:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=cf48611544c54e2b9df81287cbf7b7b6 00:30:35.425 11:59:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:30:35.425 11:59:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:35.425 11:59:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:35.425 11:59:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:35.425 11:59:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:35.425 11:59:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:35.425 11:59:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:30:35.425 11:59:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:35.425 11:59:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:35.425 11:59:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:35.425 11:59:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:35.425 11:59:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:30:35.425 11:59:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:37.328 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:37.328 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:30:37.328 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:37.328 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:37.328 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:37.328 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:37.328 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:37.328 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:30:37.328 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:37.328 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:30:37.328 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:30:37.328 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:30:37.328 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- 
# local -ga x722 00:30:37.328 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:30:37.328 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:30:37.328 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:37.328 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:37.328 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:37.328 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:37.328 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:37.328 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:37.328 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:37.328 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:37.328 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:37.328 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:37.328 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:37.328 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:37.328 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:37.328 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:37.328 11:59:03 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:37.328 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:37.328 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:37.328 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:37.328 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:37.328 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:37.328 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:37.328 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:37.328 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:37.328 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:37.328 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:37.328 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:37.328 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:37.328 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:37.328 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:37.328 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:37.328 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:37.328 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:37.328 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:37.328 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:37.328 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:37.328 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:37.328 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:37.328 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:37.328 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:37.328 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:37.328 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:37.328 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:37.328 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:37.328 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:37.328 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:37.328 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:37.328 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:37.328 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:37.328 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:37.328 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:37.328 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:37.328 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ 
up == up ]] 00:30:37.328 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:37.328 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:37.328 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:37.328 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:37.328 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:37.328 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:37.328 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:30:37.328 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:37.328 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:37.328 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:37.328 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:37.328 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:37.328 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:37.328 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:37.328 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:37.328 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:37.328 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:37.328 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:37.328 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:37.328 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:37.328 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:37.328 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:37.328 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:37.328 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:37.328 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:37.587 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:37.587 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:37.587 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:37.587 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:37.587 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:37.587 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:37.587 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:37.587 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:37.587 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:37.587 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.229 ms 00:30:37.587 00:30:37.587 --- 10.0.0.2 ping statistics --- 00:30:37.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:37.587 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:30:37.587 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:37.587 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:37.587 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:30:37.587 00:30:37.587 --- 10.0.0.1 ping statistics --- 00:30:37.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:37.587 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:30:37.587 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:37.587 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:30:37.587 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:37.587 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:37.587 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:37.587 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:37.587 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:37.587 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:37.587 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:37.587 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:30:37.587 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:37.587 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:30:37.587 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:37.587 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=3069905 00:30:37.587 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:30:37.587 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 3069905 00:30:37.587 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 3069905 ']' 00:30:37.587 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:37.587 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:37.587 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:37.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:37.587 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:37.587 11:59:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:37.587 [2024-11-18 11:59:03.424690] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:30:37.587 [2024-11-18 11:59:03.424822] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:37.846 [2024-11-18 11:59:03.577419] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:37.846 [2024-11-18 11:59:03.712906] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:37.846 [2024-11-18 11:59:03.712997] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:37.846 [2024-11-18 11:59:03.713023] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:37.846 [2024-11-18 11:59:03.713048] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:37.846 [2024-11-18 11:59:03.713069] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:37.846 [2024-11-18 11:59:03.714700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:38.781 11:59:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:38.781 11:59:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:30:38.781 11:59:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:38.781 11:59:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:38.781 11:59:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:38.781 11:59:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:38.781 11:59:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:30:38.781 11:59:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:38.781 11:59:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:38.781 [2024-11-18 11:59:04.398116] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:38.781 11:59:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:38.781 11:59:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:30:38.781 11:59:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:38.781 11:59:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:38.781 null0 00:30:38.781 11:59:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:38.781 11:59:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:30:38.781 11:59:04 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:30:38.781 11:59:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:38.781 11:59:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:38.781 11:59:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:30:38.781 11:59:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:38.781 11:59:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:38.781 11:59:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:38.781 11:59:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g cf48611544c54e2b9df81287cbf7b7b6 00:30:38.781 11:59:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:38.781 11:59:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:38.781 11:59:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:38.781 11:59:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:38.781 11:59:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:38.781 11:59:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:38.781 [2024-11-18 11:59:04.438433] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:38.781 11:59:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:38.781 11:59:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:30:38.781 11:59:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:38.781 11:59:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:39.042 nvme0n1 00:30:39.042 11:59:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:39.042 11:59:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:30:39.042 11:59:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:39.042 11:59:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:39.042 [ 00:30:39.042 { 00:30:39.042 "name": "nvme0n1", 00:30:39.042 "aliases": [ 00:30:39.042 "cf486115-44c5-4e2b-9df8-1287cbf7b7b6" 00:30:39.042 ], 00:30:39.042 "product_name": "NVMe disk", 00:30:39.042 "block_size": 512, 00:30:39.042 "num_blocks": 2097152, 00:30:39.042 "uuid": "cf486115-44c5-4e2b-9df8-1287cbf7b7b6", 00:30:39.042 "numa_id": 0, 00:30:39.042 "assigned_rate_limits": { 00:30:39.042 "rw_ios_per_sec": 0, 00:30:39.042 "rw_mbytes_per_sec": 0, 00:30:39.042 "r_mbytes_per_sec": 0, 00:30:39.042 "w_mbytes_per_sec": 0 00:30:39.042 }, 00:30:39.042 "claimed": false, 00:30:39.042 "zoned": false, 00:30:39.042 "supported_io_types": { 00:30:39.042 "read": true, 00:30:39.042 "write": true, 00:30:39.042 "unmap": false, 00:30:39.042 "flush": true, 00:30:39.042 "reset": true, 00:30:39.042 "nvme_admin": true, 00:30:39.042 "nvme_io": true, 00:30:39.042 "nvme_io_md": false, 00:30:39.042 "write_zeroes": true, 00:30:39.042 "zcopy": false, 00:30:39.042 "get_zone_info": false, 00:30:39.042 "zone_management": false, 00:30:39.042 "zone_append": false, 00:30:39.042 "compare": true, 00:30:39.042 "compare_and_write": true, 00:30:39.042 "abort": true, 00:30:39.042 "seek_hole": false, 00:30:39.042 "seek_data": false, 00:30:39.042 "copy": true, 00:30:39.042 
"nvme_iov_md": false 00:30:39.042 }, 00:30:39.042 "memory_domains": [ 00:30:39.042 { 00:30:39.042 "dma_device_id": "system", 00:30:39.042 "dma_device_type": 1 00:30:39.042 } 00:30:39.042 ], 00:30:39.042 "driver_specific": { 00:30:39.042 "nvme": [ 00:30:39.042 { 00:30:39.042 "trid": { 00:30:39.042 "trtype": "TCP", 00:30:39.042 "adrfam": "IPv4", 00:30:39.042 "traddr": "10.0.0.2", 00:30:39.042 "trsvcid": "4420", 00:30:39.042 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:39.042 }, 00:30:39.042 "ctrlr_data": { 00:30:39.042 "cntlid": 1, 00:30:39.042 "vendor_id": "0x8086", 00:30:39.042 "model_number": "SPDK bdev Controller", 00:30:39.042 "serial_number": "00000000000000000000", 00:30:39.042 "firmware_revision": "25.01", 00:30:39.042 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:39.042 "oacs": { 00:30:39.042 "security": 0, 00:30:39.042 "format": 0, 00:30:39.042 "firmware": 0, 00:30:39.042 "ns_manage": 0 00:30:39.042 }, 00:30:39.042 "multi_ctrlr": true, 00:30:39.042 "ana_reporting": false 00:30:39.042 }, 00:30:39.042 "vs": { 00:30:39.042 "nvme_version": "1.3" 00:30:39.042 }, 00:30:39.042 "ns_data": { 00:30:39.042 "id": 1, 00:30:39.042 "can_share": true 00:30:39.042 } 00:30:39.042 } 00:30:39.042 ], 00:30:39.042 "mp_policy": "active_passive" 00:30:39.042 } 00:30:39.042 } 00:30:39.042 ] 00:30:39.042 11:59:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:39.042 11:59:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:30:39.042 11:59:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:39.042 11:59:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:39.042 [2024-11-18 11:59:04.695265] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:30:39.042 [2024-11-18 11:59:04.695388] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:30:39.042 [2024-11-18 11:59:04.827746] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:30:39.042 11:59:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:39.042 11:59:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:30:39.042 11:59:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:39.042 11:59:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:39.042 [ 00:30:39.042 { 00:30:39.042 "name": "nvme0n1", 00:30:39.042 "aliases": [ 00:30:39.042 "cf486115-44c5-4e2b-9df8-1287cbf7b7b6" 00:30:39.042 ], 00:30:39.042 "product_name": "NVMe disk", 00:30:39.042 "block_size": 512, 00:30:39.042 "num_blocks": 2097152, 00:30:39.042 "uuid": "cf486115-44c5-4e2b-9df8-1287cbf7b7b6", 00:30:39.042 "numa_id": 0, 00:30:39.042 "assigned_rate_limits": { 00:30:39.042 "rw_ios_per_sec": 0, 00:30:39.042 "rw_mbytes_per_sec": 0, 00:30:39.042 "r_mbytes_per_sec": 0, 00:30:39.042 "w_mbytes_per_sec": 0 00:30:39.042 }, 00:30:39.042 "claimed": false, 00:30:39.042 "zoned": false, 00:30:39.042 "supported_io_types": { 00:30:39.042 "read": true, 00:30:39.042 "write": true, 00:30:39.042 "unmap": false, 00:30:39.042 "flush": true, 00:30:39.042 "reset": true, 00:30:39.043 "nvme_admin": true, 00:30:39.043 "nvme_io": true, 00:30:39.043 "nvme_io_md": false, 00:30:39.043 "write_zeroes": true, 00:30:39.043 "zcopy": false, 00:30:39.043 "get_zone_info": false, 00:30:39.043 "zone_management": false, 00:30:39.043 "zone_append": false, 00:30:39.043 "compare": true, 00:30:39.043 "compare_and_write": true, 00:30:39.043 "abort": true, 00:30:39.043 "seek_hole": false, 00:30:39.043 "seek_data": false, 00:30:39.043 "copy": true, 00:30:39.043 "nvme_iov_md": false 00:30:39.043 }, 00:30:39.043 "memory_domains": [ 
00:30:39.043 { 00:30:39.043 "dma_device_id": "system", 00:30:39.043 "dma_device_type": 1 00:30:39.043 } 00:30:39.043 ], 00:30:39.043 "driver_specific": { 00:30:39.043 "nvme": [ 00:30:39.043 { 00:30:39.043 "trid": { 00:30:39.043 "trtype": "TCP", 00:30:39.043 "adrfam": "IPv4", 00:30:39.043 "traddr": "10.0.0.2", 00:30:39.043 "trsvcid": "4420", 00:30:39.043 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:39.043 }, 00:30:39.043 "ctrlr_data": { 00:30:39.043 "cntlid": 2, 00:30:39.043 "vendor_id": "0x8086", 00:30:39.043 "model_number": "SPDK bdev Controller", 00:30:39.043 "serial_number": "00000000000000000000", 00:30:39.043 "firmware_revision": "25.01", 00:30:39.043 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:39.043 "oacs": { 00:30:39.043 "security": 0, 00:30:39.043 "format": 0, 00:30:39.043 "firmware": 0, 00:30:39.043 "ns_manage": 0 00:30:39.043 }, 00:30:39.043 "multi_ctrlr": true, 00:30:39.043 "ana_reporting": false 00:30:39.043 }, 00:30:39.043 "vs": { 00:30:39.043 "nvme_version": "1.3" 00:30:39.043 }, 00:30:39.043 "ns_data": { 00:30:39.043 "id": 1, 00:30:39.043 "can_share": true 00:30:39.043 } 00:30:39.043 } 00:30:39.043 ], 00:30:39.043 "mp_policy": "active_passive" 00:30:39.043 } 00:30:39.043 } 00:30:39.043 ] 00:30:39.043 11:59:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:39.043 11:59:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:39.043 11:59:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:39.043 11:59:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:39.043 11:59:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:39.043 11:59:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:30:39.043 11:59:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.6u3SSrMFn3 
00:30:39.043 11:59:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:30:39.043 11:59:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.6u3SSrMFn3 00:30:39.043 11:59:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.6u3SSrMFn3 00:30:39.043 11:59:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:39.043 11:59:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:39.043 11:59:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:39.043 11:59:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:30:39.043 11:59:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:39.043 11:59:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:39.043 11:59:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:39.043 11:59:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:30:39.043 11:59:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:39.043 11:59:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:39.043 [2024-11-18 11:59:04.888031] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:30:39.043 [2024-11-18 11:59:04.888244] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:39.043 11:59:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:30:39.043 11:59:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:30:39.043 11:59:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:39.043 11:59:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:39.043 11:59:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:39.043 11:59:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:30:39.043 11:59:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:39.043 11:59:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:39.043 [2024-11-18 11:59:04.904084] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:39.303 nvme0n1 00:30:39.303 11:59:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:39.303 11:59:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:30:39.303 11:59:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:39.303 11:59:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:39.303 [ 00:30:39.303 { 00:30:39.303 "name": "nvme0n1", 00:30:39.303 "aliases": [ 00:30:39.303 "cf486115-44c5-4e2b-9df8-1287cbf7b7b6" 00:30:39.303 ], 00:30:39.303 "product_name": "NVMe disk", 00:30:39.303 "block_size": 512, 00:30:39.303 "num_blocks": 2097152, 00:30:39.303 "uuid": "cf486115-44c5-4e2b-9df8-1287cbf7b7b6", 00:30:39.303 "numa_id": 0, 00:30:39.303 "assigned_rate_limits": { 00:30:39.303 "rw_ios_per_sec": 0, 00:30:39.303 
"rw_mbytes_per_sec": 0, 00:30:39.304 "r_mbytes_per_sec": 0, 00:30:39.304 "w_mbytes_per_sec": 0 00:30:39.304 }, 00:30:39.304 "claimed": false, 00:30:39.304 "zoned": false, 00:30:39.304 "supported_io_types": { 00:30:39.304 "read": true, 00:30:39.304 "write": true, 00:30:39.304 "unmap": false, 00:30:39.304 "flush": true, 00:30:39.304 "reset": true, 00:30:39.304 "nvme_admin": true, 00:30:39.304 "nvme_io": true, 00:30:39.304 "nvme_io_md": false, 00:30:39.304 "write_zeroes": true, 00:30:39.304 "zcopy": false, 00:30:39.304 "get_zone_info": false, 00:30:39.304 "zone_management": false, 00:30:39.304 "zone_append": false, 00:30:39.304 "compare": true, 00:30:39.304 "compare_and_write": true, 00:30:39.304 "abort": true, 00:30:39.304 "seek_hole": false, 00:30:39.304 "seek_data": false, 00:30:39.304 "copy": true, 00:30:39.304 "nvme_iov_md": false 00:30:39.304 }, 00:30:39.304 "memory_domains": [ 00:30:39.304 { 00:30:39.304 "dma_device_id": "system", 00:30:39.304 "dma_device_type": 1 00:30:39.304 } 00:30:39.304 ], 00:30:39.304 "driver_specific": { 00:30:39.304 "nvme": [ 00:30:39.304 { 00:30:39.304 "trid": { 00:30:39.304 "trtype": "TCP", 00:30:39.304 "adrfam": "IPv4", 00:30:39.304 "traddr": "10.0.0.2", 00:30:39.304 "trsvcid": "4421", 00:30:39.304 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:39.304 }, 00:30:39.304 "ctrlr_data": { 00:30:39.304 "cntlid": 3, 00:30:39.304 "vendor_id": "0x8086", 00:30:39.304 "model_number": "SPDK bdev Controller", 00:30:39.304 "serial_number": "00000000000000000000", 00:30:39.304 "firmware_revision": "25.01", 00:30:39.304 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:39.304 "oacs": { 00:30:39.304 "security": 0, 00:30:39.304 "format": 0, 00:30:39.304 "firmware": 0, 00:30:39.304 "ns_manage": 0 00:30:39.304 }, 00:30:39.304 "multi_ctrlr": true, 00:30:39.304 "ana_reporting": false 00:30:39.304 }, 00:30:39.304 "vs": { 00:30:39.304 "nvme_version": "1.3" 00:30:39.304 }, 00:30:39.304 "ns_data": { 00:30:39.304 "id": 1, 00:30:39.304 "can_share": true 00:30:39.304 } 
00:30:39.304 } 00:30:39.304 ], 00:30:39.304 "mp_policy": "active_passive" 00:30:39.304 } 00:30:39.304 } 00:30:39.304 ] 00:30:39.304 11:59:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:39.304 11:59:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:39.304 11:59:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:39.304 11:59:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:39.304 11:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:39.304 11:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.6u3SSrMFn3 00:30:39.304 11:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:30:39.304 11:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:30:39.304 11:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:39.304 11:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:30:39.304 11:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:39.304 11:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:30:39.304 11:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:39.304 11:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:39.304 rmmod nvme_tcp 00:30:39.304 rmmod nvme_fabrics 00:30:39.304 rmmod nvme_keyring 00:30:39.304 11:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:39.304 11:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:30:39.304 11:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:30:39.304 11:59:05 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 3069905 ']' 00:30:39.304 11:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 3069905 00:30:39.304 11:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 3069905 ']' 00:30:39.304 11:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 3069905 00:30:39.304 11:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:30:39.304 11:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:39.304 11:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3069905 00:30:39.304 11:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:39.304 11:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:39.304 11:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3069905' 00:30:39.304 killing process with pid 3069905 00:30:39.304 11:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 3069905 00:30:39.304 11:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 3069905 00:30:40.685 11:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:40.686 11:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:40.686 11:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:40.686 11:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:30:40.686 11:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:30:40.686 11:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:40.686 
11:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:30:40.686 11:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:40.686 11:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:40.686 11:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:40.686 11:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:40.686 11:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:42.590 11:59:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:42.590 00:30:42.590 real 0m7.312s 00:30:42.590 user 0m3.944s 00:30:42.590 sys 0m2.046s 00:30:42.590 11:59:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:42.590 11:59:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:42.590 ************************************ 00:30:42.590 END TEST nvmf_async_init 00:30:42.590 ************************************ 00:30:42.590 11:59:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:30:42.590 11:59:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:42.590 11:59:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:42.590 11:59:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:42.590 ************************************ 00:30:42.590 START TEST dma 00:30:42.590 ************************************ 00:30:42.590 11:59:08 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 
00:30:42.590 * Looking for test storage... 00:30:42.590 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:42.590 11:59:08 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:42.590 11:59:08 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lcov --version 00:30:42.590 11:59:08 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:42.591 11:59:08 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:42.591 11:59:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:42.591 11:59:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:42.591 11:59:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:42.591 11:59:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:30:42.591 11:59:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:30:42.591 11:59:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:30:42.591 11:59:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:30:42.591 11:59:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:30:42.591 11:59:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:30:42.591 11:59:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:30:42.591 11:59:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:42.591 11:59:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:30:42.591 11:59:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:30:42.591 11:59:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:42.591 11:59:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:42.591 11:59:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:30:42.591 11:59:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:30:42.591 11:59:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:42.591 11:59:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:30:42.591 11:59:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:30:42.591 11:59:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:30:42.591 11:59:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:30:42.591 11:59:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:42.591 11:59:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:30:42.591 11:59:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:30:42.591 11:59:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:42.591 11:59:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:42.591 11:59:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:30:42.591 11:59:08 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:42.591 11:59:08 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:42.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:42.591 --rc genhtml_branch_coverage=1 00:30:42.591 --rc genhtml_function_coverage=1 00:30:42.591 --rc genhtml_legend=1 00:30:42.591 --rc geninfo_all_blocks=1 00:30:42.591 --rc geninfo_unexecuted_blocks=1 00:30:42.591 00:30:42.591 ' 00:30:42.591 11:59:08 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:42.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:42.591 --rc genhtml_branch_coverage=1 00:30:42.591 --rc genhtml_function_coverage=1 
00:30:42.591 --rc genhtml_legend=1 00:30:42.591 --rc geninfo_all_blocks=1 00:30:42.591 --rc geninfo_unexecuted_blocks=1 00:30:42.591 00:30:42.591 ' 00:30:42.591 11:59:08 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:42.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:42.591 --rc genhtml_branch_coverage=1 00:30:42.591 --rc genhtml_function_coverage=1 00:30:42.591 --rc genhtml_legend=1 00:30:42.591 --rc geninfo_all_blocks=1 00:30:42.591 --rc geninfo_unexecuted_blocks=1 00:30:42.591 00:30:42.591 ' 00:30:42.591 11:59:08 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:42.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:42.591 --rc genhtml_branch_coverage=1 00:30:42.591 --rc genhtml_function_coverage=1 00:30:42.591 --rc genhtml_legend=1 00:30:42.591 --rc geninfo_all_blocks=1 00:30:42.591 --rc geninfo_unexecuted_blocks=1 00:30:42.591 00:30:42.591 ' 00:30:42.591 11:59:08 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:42.591 11:59:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:30:42.591 11:59:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:42.591 11:59:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:42.591 11:59:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:42.591 11:59:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:42.591 11:59:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:42.591 11:59:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:42.591 11:59:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:42.591 11:59:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:42.591 11:59:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:42.591 11:59:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:42.591 11:59:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:42.591 11:59:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:42.591 11:59:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:42.591 11:59:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:42.591 11:59:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:42.591 11:59:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:42.591 11:59:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:42.591 11:59:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:30:42.591 11:59:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:42.591 11:59:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:42.591 11:59:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:42.591 11:59:08 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:42.591 11:59:08 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:42.591 11:59:08 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:42.591 11:59:08 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:30:42.591 
11:59:08 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:42.591 11:59:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:30:42.591 11:59:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:42.591 11:59:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:42.591 11:59:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:42.591 11:59:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:42.591 11:59:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:42.591 11:59:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:42.591 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:42.591 11:59:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:42.591 11:59:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:42.591 11:59:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:42.591 11:59:08 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:30:42.591 11:59:08 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:30:42.591 00:30:42.591 real 0m0.141s 00:30:42.591 user 0m0.089s 00:30:42.591 sys 0m0.060s 00:30:42.591 11:59:08 
nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:42.591 11:59:08 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:30:42.591 ************************************ 00:30:42.591 END TEST dma 00:30:42.591 ************************************ 00:30:42.591 11:59:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:30:42.591 11:59:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:42.591 11:59:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:42.591 11:59:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:42.850 ************************************ 00:30:42.850 START TEST nvmf_identify 00:30:42.850 ************************************ 00:30:42.850 11:59:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:30:42.850 * Looking for test storage... 
00:30:42.850 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:42.850 11:59:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:42.850 11:59:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:30:42.850 11:59:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:42.850 11:59:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:42.850 11:59:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:42.850 11:59:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:42.850 11:59:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:42.850 11:59:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:30:42.850 11:59:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:30:42.850 11:59:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:30:42.850 11:59:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:30:42.850 11:59:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:30:42.850 11:59:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:30:42.850 11:59:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:30:42.850 11:59:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:42.850 11:59:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:30:42.850 11:59:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:30:42.850 11:59:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:42.850 11:59:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:42.850 11:59:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:30:42.850 11:59:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:30:42.850 11:59:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:42.850 11:59:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:30:42.850 11:59:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:30:42.851 11:59:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:30:42.851 11:59:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:30:42.851 11:59:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:42.851 11:59:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:30:42.851 11:59:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:30:42.851 11:59:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:42.851 11:59:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:42.851 11:59:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:30:42.851 11:59:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:42.851 11:59:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:42.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:42.851 --rc genhtml_branch_coverage=1 00:30:42.851 --rc genhtml_function_coverage=1 00:30:42.851 --rc genhtml_legend=1 00:30:42.851 --rc geninfo_all_blocks=1 00:30:42.851 --rc geninfo_unexecuted_blocks=1 00:30:42.851 00:30:42.851 ' 00:30:42.851 11:59:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- 
# LCOV_OPTS=' 00:30:42.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:42.851 --rc genhtml_branch_coverage=1 00:30:42.851 --rc genhtml_function_coverage=1 00:30:42.851 --rc genhtml_legend=1 00:30:42.851 --rc geninfo_all_blocks=1 00:30:42.851 --rc geninfo_unexecuted_blocks=1 00:30:42.851 00:30:42.851 ' 00:30:42.851 11:59:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:42.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:42.851 --rc genhtml_branch_coverage=1 00:30:42.851 --rc genhtml_function_coverage=1 00:30:42.851 --rc genhtml_legend=1 00:30:42.851 --rc geninfo_all_blocks=1 00:30:42.851 --rc geninfo_unexecuted_blocks=1 00:30:42.851 00:30:42.851 ' 00:30:42.851 11:59:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:42.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:42.851 --rc genhtml_branch_coverage=1 00:30:42.851 --rc genhtml_function_coverage=1 00:30:42.851 --rc genhtml_legend=1 00:30:42.851 --rc geninfo_all_blocks=1 00:30:42.851 --rc geninfo_unexecuted_blocks=1 00:30:42.851 00:30:42.851 ' 00:30:42.851 11:59:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:42.851 11:59:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:30:42.851 11:59:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:42.851 11:59:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:42.851 11:59:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:42.851 11:59:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:42.851 11:59:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:42.851 11:59:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:30:42.851 11:59:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:42.851 11:59:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:42.851 11:59:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:42.851 11:59:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:42.851 11:59:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:42.851 11:59:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:42.851 11:59:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:42.851 11:59:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:42.851 11:59:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:42.851 11:59:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:42.851 11:59:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:42.851 11:59:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:30:42.851 11:59:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:42.851 11:59:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:42.851 11:59:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:42.851 11:59:08 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:42.851 11:59:08 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:42.851 11:59:08 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:42.851 11:59:08 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH 00:30:42.851 11:59:08 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:42.851 11:59:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:30:42.851 11:59:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:42.851 11:59:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:42.851 11:59:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:42.851 11:59:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:42.851 11:59:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:42.851 11:59:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:42.851 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:42.851 11:59:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:42.851 11:59:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:42.851 11:59:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:42.851 11:59:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:42.851 11:59:08 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:42.851 11:59:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:30:42.851 11:59:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:42.851 11:59:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:42.851 11:59:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:42.851 11:59:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:42.851 11:59:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:42.851 11:59:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:42.851 11:59:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:42.851 11:59:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:42.851 11:59:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:42.851 11:59:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:42.851 11:59:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:30:42.851 11:59:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:45.382 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:45.382 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:30:45.382 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:45.382 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:45.382 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:45.382 11:59:10 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:45.382 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:45.382 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:30:45.382 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:45.382 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:30:45.382 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:30:45.382 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:30:45.382 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:30:45.382 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:30:45.382 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:30:45.382 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:45.382 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:45.382 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:45.382 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:45.382 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:45.382 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:45.382 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:45.382 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:45.382 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:45.382 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:45.382 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:45.382 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:45.382 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:45.382 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:45.382 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:45.382 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:45.382 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:45.383 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:45.383 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:45.383 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:45.383 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:45.383 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:45.383 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:45.383 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:45.383 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:45.383 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:45.383 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:45.383 
11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:45.383 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:45.383 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:45.383 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:45.383 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:45.383 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:45.383 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:45.383 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:45.383 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:45.383 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:45.383 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:45.383 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:45.383 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:45.383 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:45.383 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:45.383 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:45.383 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:45.383 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:45.383 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:45.383 11:59:10 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:45.383 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:45.383 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:45.383 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:45.383 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:45.383 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:45.383 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:45.383 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:45.383 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:45.383 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:45.383 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:45.383 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:45.383 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:30:45.383 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:45.383 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:45.383 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:45.383 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:45.383 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:45.383 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:30:45.383 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:45.383 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:45.383 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:45.383 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:45.383 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:45.383 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:45.383 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:45.383 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:45.383 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:45.383 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:45.383 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:45.383 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:45.383 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:45.383 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:45.383 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:45.383 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:45.383 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:30:45.383 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:45.383 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:45.383 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:45.383 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:45.383 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.317 ms 00:30:45.383 00:30:45.383 --- 10.0.0.2 ping statistics --- 00:30:45.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:45.383 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:30:45.383 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:45.383 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:45.383 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.053 ms 00:30:45.383 00:30:45.383 --- 10.0.0.1 ping statistics --- 00:30:45.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:45.383 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:30:45.383 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:45.383 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:30:45.383 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:45.383 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:45.383 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:45.383 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:45.383 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:45.383 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:45.383 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:45.383 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:30:45.383 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:45.383 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:45.383 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3072183 00:30:45.383 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:45.383 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:45.383 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 3072183 00:30:45.383 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 3072183 ']' 00:30:45.383 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:45.383 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:45.383 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:45.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:45.383 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:45.383 11:59:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:45.383 [2024-11-18 11:59:10.954503] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:30:45.383 [2024-11-18 11:59:10.954635] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:45.383 [2024-11-18 11:59:11.099426] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:45.383 [2024-11-18 11:59:11.240102] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:45.383 [2024-11-18 11:59:11.240182] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:45.383 [2024-11-18 11:59:11.240207] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:45.383 [2024-11-18 11:59:11.240231] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:45.383 [2024-11-18 11:59:11.240251] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:45.383 [2024-11-18 11:59:11.243153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:45.383 [2024-11-18 11:59:11.243221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:45.383 [2024-11-18 11:59:11.243309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:45.383 [2024-11-18 11:59:11.243315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:46.320 11:59:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:46.320 11:59:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:30:46.320 11:59:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:46.320 11:59:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:46.320 11:59:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:46.320 [2024-11-18 11:59:11.968517] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:46.320 11:59:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:46.320 11:59:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:30:46.320 11:59:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:46.320 11:59:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:46.320 11:59:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:46.320 11:59:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:46.320 11:59:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:46.320 Malloc0 00:30:46.320 11:59:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:46.320 11:59:12 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:46.320 11:59:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:46.320 11:59:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:46.320 11:59:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:46.320 11:59:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:30:46.320 11:59:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:46.320 11:59:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:46.320 11:59:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:46.320 11:59:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:46.320 11:59:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:46.320 11:59:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:46.320 [2024-11-18 11:59:12.103612] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:46.320 11:59:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:46.320 11:59:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:46.320 11:59:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:46.320 11:59:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:46.320 11:59:12 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:46.320 11:59:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:30:46.320 11:59:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:46.320 11:59:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:46.320 [ 00:30:46.320 { 00:30:46.320 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:46.320 "subtype": "Discovery", 00:30:46.320 "listen_addresses": [ 00:30:46.320 { 00:30:46.320 "trtype": "TCP", 00:30:46.320 "adrfam": "IPv4", 00:30:46.320 "traddr": "10.0.0.2", 00:30:46.320 "trsvcid": "4420" 00:30:46.320 } 00:30:46.320 ], 00:30:46.320 "allow_any_host": true, 00:30:46.320 "hosts": [] 00:30:46.320 }, 00:30:46.320 { 00:30:46.320 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:46.320 "subtype": "NVMe", 00:30:46.320 "listen_addresses": [ 00:30:46.320 { 00:30:46.320 "trtype": "TCP", 00:30:46.320 "adrfam": "IPv4", 00:30:46.320 "traddr": "10.0.0.2", 00:30:46.320 "trsvcid": "4420" 00:30:46.320 } 00:30:46.320 ], 00:30:46.320 "allow_any_host": true, 00:30:46.320 "hosts": [], 00:30:46.320 "serial_number": "SPDK00000000000001", 00:30:46.320 "model_number": "SPDK bdev Controller", 00:30:46.320 "max_namespaces": 32, 00:30:46.320 "min_cntlid": 1, 00:30:46.320 "max_cntlid": 65519, 00:30:46.320 "namespaces": [ 00:30:46.320 { 00:30:46.320 "nsid": 1, 00:30:46.320 "bdev_name": "Malloc0", 00:30:46.320 "name": "Malloc0", 00:30:46.320 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:30:46.320 "eui64": "ABCDEF0123456789", 00:30:46.320 "uuid": "86a1330c-6886-410c-b222-17baad355ffc" 00:30:46.320 } 00:30:46.320 ] 00:30:46.320 } 00:30:46.320 ] 00:30:46.320 11:59:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:46.320 11:59:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:30:46.320 [2024-11-18 11:59:12.167132] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:30:46.320 [2024-11-18 11:59:12.167225] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3072351 ] 00:30:46.582 [2024-11-18 11:59:12.248368] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:30:46.582 [2024-11-18 11:59:12.248509] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:30:46.582 [2024-11-18 11:59:12.248533] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:30:46.582 [2024-11-18 11:59:12.248583] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:30:46.582 [2024-11-18 11:59:12.248608] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:30:46.582 [2024-11-18 11:59:12.249354] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:30:46.582 [2024-11-18 11:59:12.249437] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x615000015700 0 00:30:46.582 [2024-11-18 11:59:12.263523] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:30:46.582 [2024-11-18 11:59:12.263559] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:30:46.582 [2024-11-18 11:59:12.263576] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:30:46.582 [2024-11-18 11:59:12.263588] 
nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:30:46.582 [2024-11-18 11:59:12.263657] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:46.582 [2024-11-18 11:59:12.263677] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:46.582 [2024-11-18 11:59:12.263690] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:46.582 [2024-11-18 11:59:12.263726] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:30:46.582 [2024-11-18 11:59:12.263766] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:46.582 [2024-11-18 11:59:12.271528] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:46.582 [2024-11-18 11:59:12.271572] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:46.582 [2024-11-18 11:59:12.271587] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:46.582 [2024-11-18 11:59:12.271600] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:46.582 [2024-11-18 11:59:12.271627] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:30:46.582 [2024-11-18 11:59:12.271649] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:30:46.582 [2024-11-18 11:59:12.271672] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:30:46.582 [2024-11-18 11:59:12.271698] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:46.582 [2024-11-18 11:59:12.271719] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:46.582 [2024-11-18 11:59:12.271733] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on 
tqpair(0x615000015700) 00:30:46.582 [2024-11-18 11:59:12.271754] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.582 [2024-11-18 11:59:12.271789] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:46.582 [2024-11-18 11:59:12.271987] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:46.582 [2024-11-18 11:59:12.272010] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:46.582 [2024-11-18 11:59:12.272023] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:46.582 [2024-11-18 11:59:12.272043] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:46.582 [2024-11-18 11:59:12.272071] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:30:46.582 [2024-11-18 11:59:12.272096] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:30:46.582 [2024-11-18 11:59:12.272132] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:46.582 [2024-11-18 11:59:12.272151] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:46.582 [2024-11-18 11:59:12.272165] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:46.582 [2024-11-18 11:59:12.272188] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.582 [2024-11-18 11:59:12.272222] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:46.582 [2024-11-18 11:59:12.272375] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:46.582 [2024-11-18 11:59:12.272396] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:46.582 [2024-11-18 11:59:12.272409] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:46.582 [2024-11-18 11:59:12.272420] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:46.582 [2024-11-18 11:59:12.272436] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:30:46.582 [2024-11-18 11:59:12.272460] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:30:46.582 [2024-11-18 11:59:12.272486] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:46.582 [2024-11-18 11:59:12.272510] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:46.582 [2024-11-18 11:59:12.272522] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:46.582 [2024-11-18 11:59:12.272542] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.582 [2024-11-18 11:59:12.272580] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:46.582 [2024-11-18 11:59:12.272712] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:46.582 [2024-11-18 11:59:12.272734] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:46.582 [2024-11-18 11:59:12.272746] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:46.582 [2024-11-18 11:59:12.272762] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:46.582 [2024-11-18 11:59:12.272780] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for 
CSTS.RDY = 0 (timeout 15000 ms) 00:30:46.582 [2024-11-18 11:59:12.272808] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:46.582 [2024-11-18 11:59:12.272824] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:46.582 [2024-11-18 11:59:12.272836] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:46.582 [2024-11-18 11:59:12.272856] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.582 [2024-11-18 11:59:12.272909] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:46.582 [2024-11-18 11:59:12.273096] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:46.582 [2024-11-18 11:59:12.273117] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:46.582 [2024-11-18 11:59:12.273129] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:46.582 [2024-11-18 11:59:12.273140] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:46.582 [2024-11-18 11:59:12.273160] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:30:46.582 [2024-11-18 11:59:12.273183] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:30:46.582 [2024-11-18 11:59:12.273206] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:30:46.582 [2024-11-18 11:59:12.273324] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:30:46.582 [2024-11-18 11:59:12.273339] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:30:46.582 [2024-11-18 11:59:12.273361] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:46.582 [2024-11-18 11:59:12.273390] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:46.582 [2024-11-18 11:59:12.273401] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:46.582 [2024-11-18 11:59:12.273420] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.582 [2024-11-18 11:59:12.273460] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:46.582 [2024-11-18 11:59:12.273614] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:46.582 [2024-11-18 11:59:12.273637] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:46.583 [2024-11-18 11:59:12.273649] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:46.583 [2024-11-18 11:59:12.273660] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:46.583 [2024-11-18 11:59:12.273676] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:30:46.583 [2024-11-18 11:59:12.273703] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:46.583 [2024-11-18 11:59:12.273730] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:46.583 [2024-11-18 11:59:12.273744] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:46.583 [2024-11-18 11:59:12.273765] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.583 [2024-11-18 
11:59:12.273811] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:46.583 [2024-11-18 11:59:12.274014] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:46.583 [2024-11-18 11:59:12.274036] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:46.583 [2024-11-18 11:59:12.274048] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:46.583 [2024-11-18 11:59:12.274059] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:46.583 [2024-11-18 11:59:12.274074] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:30:46.583 [2024-11-18 11:59:12.274089] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:30:46.583 [2024-11-18 11:59:12.274110] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:30:46.583 [2024-11-18 11:59:12.274142] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:30:46.583 [2024-11-18 11:59:12.274190] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:46.583 [2024-11-18 11:59:12.274207] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:46.583 [2024-11-18 11:59:12.274227] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.583 [2024-11-18 11:59:12.274263] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:46.583 [2024-11-18 11:59:12.274476] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:46.583 [2024-11-18 11:59:12.274508] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:46.583 [2024-11-18 11:59:12.274522] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:46.583 [2024-11-18 11:59:12.274534] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=0 00:30:46.583 [2024-11-18 11:59:12.274548] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:30:46.583 [2024-11-18 11:59:12.274566] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:46.583 [2024-11-18 11:59:12.274597] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:46.583 [2024-11-18 11:59:12.274614] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:46.583 [2024-11-18 11:59:12.314648] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:46.583 [2024-11-18 11:59:12.314678] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:46.583 [2024-11-18 11:59:12.314691] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:46.583 [2024-11-18 11:59:12.314704] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:46.583 [2024-11-18 11:59:12.314730] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:30:46.583 [2024-11-18 11:59:12.314747] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:30:46.583 [2024-11-18 11:59:12.314776] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:30:46.583 [2024-11-18 11:59:12.314797] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:30:46.583 [2024-11-18 11:59:12.314811] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:30:46.583 [2024-11-18 11:59:12.314825] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:30:46.583 [2024-11-18 11:59:12.314862] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:30:46.583 [2024-11-18 11:59:12.314884] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:46.583 [2024-11-18 11:59:12.314899] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:46.583 [2024-11-18 11:59:12.314933] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:46.583 [2024-11-18 11:59:12.314960] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:46.583 [2024-11-18 11:59:12.314996] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:46.583 [2024-11-18 11:59:12.315147] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:46.583 [2024-11-18 11:59:12.315169] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:46.583 [2024-11-18 11:59:12.315182] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:46.583 [2024-11-18 11:59:12.315193] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:46.583 [2024-11-18 11:59:12.315214] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:46.583 [2024-11-18 11:59:12.315229] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:46.583 [2024-11-18 
11:59:12.315241] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:46.583 [2024-11-18 11:59:12.315260] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:46.583 [2024-11-18 11:59:12.315284] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:46.583 [2024-11-18 11:59:12.315297] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:46.583 [2024-11-18 11:59:12.315324] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x615000015700) 00:30:46.583 [2024-11-18 11:59:12.315351] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:46.583 [2024-11-18 11:59:12.315367] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:46.583 [2024-11-18 11:59:12.315379] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:46.583 [2024-11-18 11:59:12.315390] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x615000015700) 00:30:46.583 [2024-11-18 11:59:12.315406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:46.583 [2024-11-18 11:59:12.315438] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:46.583 [2024-11-18 11:59:12.315450] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:46.583 [2024-11-18 11:59:12.315461] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:46.583 [2024-11-18 11:59:12.319517] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:46.583 [2024-11-18 11:59:12.319542] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:30:46.583 [2024-11-18 11:59:12.319573] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:30:46.583 [2024-11-18 11:59:12.319594] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:46.583 [2024-11-18 11:59:12.319616] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:46.583 [2024-11-18 11:59:12.319636] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.583 [2024-11-18 11:59:12.319671] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:46.583 [2024-11-18 11:59:12.319706] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b280, cid 1, qid 0 00:30:46.583 [2024-11-18 11:59:12.319719] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b400, cid 2, qid 0 00:30:46.583 [2024-11-18 11:59:12.319732] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:46.583 [2024-11-18 11:59:12.319745] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:46.583 [2024-11-18 11:59:12.319943] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:46.583 [2024-11-18 11:59:12.319965] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:46.583 [2024-11-18 11:59:12.319978] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:46.583 [2024-11-18 11:59:12.320004] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:46.583 [2024-11-18 11:59:12.320021] 
nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:30:46.583 [2024-11-18 11:59:12.320048] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:30:46.583 [2024-11-18 11:59:12.320081] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:46.583 [2024-11-18 11:59:12.320098] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:46.583 [2024-11-18 11:59:12.320118] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.583 [2024-11-18 11:59:12.320149] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:46.583 [2024-11-18 11:59:12.320333] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:46.583 [2024-11-18 11:59:12.320357] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:46.583 [2024-11-18 11:59:12.320376] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:46.583 [2024-11-18 11:59:12.320389] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=4 00:30:46.583 [2024-11-18 11:59:12.320402] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:30:46.583 [2024-11-18 11:59:12.320416] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:46.583 [2024-11-18 11:59:12.320435] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:46.583 [2024-11-18 11:59:12.320449] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:46.583 [2024-11-18 11:59:12.320480] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
00:30:46.583 [2024-11-18 11:59:12.320507] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:46.583 [2024-11-18 11:59:12.320520] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:46.583 [2024-11-18 11:59:12.320533] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:46.584 [2024-11-18 11:59:12.320569] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:30:46.584 [2024-11-18 11:59:12.320640] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:46.584 [2024-11-18 11:59:12.320658] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:46.584 [2024-11-18 11:59:12.320684] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.584 [2024-11-18 11:59:12.320706] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:46.584 [2024-11-18 11:59:12.320720] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:46.584 [2024-11-18 11:59:12.320732] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000015700) 00:30:46.584 [2024-11-18 11:59:12.320750] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:30:46.584 [2024-11-18 11:59:12.320803] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:46.584 [2024-11-18 11:59:12.320823] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:30:46.584 [2024-11-18 11:59:12.321201] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:46.584 [2024-11-18 11:59:12.321224] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type 
=7 00:30:46.584 [2024-11-18 11:59:12.321237] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:46.584 [2024-11-18 11:59:12.321255] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=1024, cccid=4 00:30:46.584 [2024-11-18 11:59:12.321268] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=1024 00:30:46.584 [2024-11-18 11:59:12.321280] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:46.584 [2024-11-18 11:59:12.321301] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:46.584 [2024-11-18 11:59:12.321316] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:46.584 [2024-11-18 11:59:12.321331] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:46.584 [2024-11-18 11:59:12.321347] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:46.584 [2024-11-18 11:59:12.321358] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:46.584 [2024-11-18 11:59:12.321369] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 00:30:46.584 [2024-11-18 11:59:12.363511] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:46.584 [2024-11-18 11:59:12.363545] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:46.584 [2024-11-18 11:59:12.363573] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:46.584 [2024-11-18 11:59:12.363589] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:46.584 [2024-11-18 11:59:12.363633] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:46.584 [2024-11-18 11:59:12.363652] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:46.584 [2024-11-18 
11:59:12.363674] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.584 [2024-11-18 11:59:12.363727] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:46.584 [2024-11-18 11:59:12.363960] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:46.584 [2024-11-18 11:59:12.363990] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:46.584 [2024-11-18 11:59:12.364003] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:46.584 [2024-11-18 11:59:12.364014] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=3072, cccid=4 00:30:46.584 [2024-11-18 11:59:12.364026] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=3072 00:30:46.584 [2024-11-18 11:59:12.364037] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:46.584 [2024-11-18 11:59:12.364071] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:46.584 [2024-11-18 11:59:12.364087] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:46.584 [2024-11-18 11:59:12.404637] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:46.584 [2024-11-18 11:59:12.404665] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:46.584 [2024-11-18 11:59:12.404691] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:46.584 [2024-11-18 11:59:12.404704] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:46.584 [2024-11-18 11:59:12.404733] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:46.584 [2024-11-18 11:59:12.404750] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0x615000015700) 00:30:46.584 [2024-11-18 11:59:12.404772] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.584 [2024-11-18 11:59:12.404826] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:46.584 [2024-11-18 11:59:12.405004] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:46.584 [2024-11-18 11:59:12.405031] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:46.584 [2024-11-18 11:59:12.405044] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:46.584 [2024-11-18 11:59:12.405055] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=8, cccid=4 00:30:46.584 [2024-11-18 11:59:12.405068] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=8 00:30:46.584 [2024-11-18 11:59:12.405079] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:46.584 [2024-11-18 11:59:12.405095] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:46.584 [2024-11-18 11:59:12.405108] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:46.584 [2024-11-18 11:59:12.449515] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:46.584 [2024-11-18 11:59:12.449559] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:46.584 [2024-11-18 11:59:12.449573] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:46.584 [2024-11-18 11:59:12.449585] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:46.584 ===================================================== 00:30:46.584 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:30:46.584 
=====================================================
00:30:46.584 Controller Capabilities/Features
00:30:46.584 ================================
00:30:46.584 Vendor ID: 0000
00:30:46.584 Subsystem Vendor ID: 0000
00:30:46.584 Serial Number: ....................
00:30:46.584 Model Number: ........................................
00:30:46.584 Firmware Version: 25.01
00:30:46.584 Recommended Arb Burst: 0
00:30:46.584 IEEE OUI Identifier: 00 00 00
00:30:46.584 Multi-path I/O
00:30:46.584 May have multiple subsystem ports: No
00:30:46.584 May have multiple controllers: No
00:30:46.584 Associated with SR-IOV VF: No
00:30:46.584 Max Data Transfer Size: 131072
00:30:46.584 Max Number of Namespaces: 0
00:30:46.584 Max Number of I/O Queues: 1024
00:30:46.584 NVMe Specification Version (VS): 1.3
00:30:46.584 NVMe Specification Version (Identify): 1.3
00:30:46.584 Maximum Queue Entries: 128
00:30:46.584 Contiguous Queues Required: Yes
00:30:46.584 Arbitration Mechanisms Supported
00:30:46.584 Weighted Round Robin: Not Supported
00:30:46.584 Vendor Specific: Not Supported
00:30:46.584 Reset Timeout: 15000 ms
00:30:46.584 Doorbell Stride: 4 bytes
00:30:46.584 NVM Subsystem Reset: Not Supported
00:30:46.584 Command Sets Supported
00:30:46.584 NVM Command Set: Supported
00:30:46.584 Boot Partition: Not Supported
00:30:46.584 Memory Page Size Minimum: 4096 bytes
00:30:46.584 Memory Page Size Maximum: 4096 bytes
00:30:46.584 Persistent Memory Region: Not Supported
00:30:46.584 Optional Asynchronous Events Supported
00:30:46.584 Namespace Attribute Notices: Not Supported
00:30:46.584 Firmware Activation Notices: Not Supported
00:30:46.584 ANA Change Notices: Not Supported
00:30:46.584 PLE Aggregate Log Change Notices: Not Supported
00:30:46.584 LBA Status Info Alert Notices: Not Supported
00:30:46.584 EGE Aggregate Log Change Notices: Not Supported
00:30:46.584 Normal NVM Subsystem Shutdown event: Not Supported
00:30:46.584 Zone Descriptor Change Notices: Not Supported
00:30:46.584 Discovery Log Change Notices: Supported
00:30:46.584 Controller Attributes
00:30:46.584 128-bit Host Identifier: Not Supported
00:30:46.584 Non-Operational Permissive Mode: Not Supported
00:30:46.584 NVM Sets: Not Supported
00:30:46.584 Read Recovery Levels: Not Supported
00:30:46.584 Endurance Groups: Not Supported
00:30:46.584 Predictable Latency Mode: Not Supported
00:30:46.584 Traffic Based Keep Alive: Not Supported
00:30:46.584 Namespace Granularity: Not Supported
00:30:46.584 SQ Associations: Not Supported
00:30:46.584 UUID List: Not Supported
00:30:46.584 Multi-Domain Subsystem: Not Supported
00:30:46.584 Fixed Capacity Management: Not Supported
00:30:46.584 Variable Capacity Management: Not Supported
00:30:46.584 Delete Endurance Group: Not Supported
00:30:46.584 Delete NVM Set: Not Supported
00:30:46.584 Extended LBA Formats Supported: Not Supported
00:30:46.584 Flexible Data Placement Supported: Not Supported
00:30:46.584
00:30:46.584 Controller Memory Buffer Support
00:30:46.584 ================================
00:30:46.584 Supported: No
00:30:46.584
00:30:46.584 Persistent Memory Region Support
00:30:46.584 ================================
00:30:46.584 Supported: No
00:30:46.584
00:30:46.584 Admin Command Set Attributes
00:30:46.584 ============================
00:30:46.584 Security Send/Receive: Not Supported
00:30:46.584 Format NVM: Not Supported
00:30:46.584 Firmware Activate/Download: Not Supported
00:30:46.584 Namespace Management: Not Supported
00:30:46.584 Device Self-Test: Not Supported
00:30:46.584 Directives: Not Supported
00:30:46.584 NVMe-MI: Not Supported
00:30:46.585 Virtualization Management: Not Supported
00:30:46.585 Doorbell Buffer Config: Not Supported
00:30:46.585 Get LBA Status Capability: Not Supported
00:30:46.585 Command & Feature Lockdown Capability: Not Supported
00:30:46.585 Abort Command Limit: 1
00:30:46.585 Async Event Request Limit: 4
00:30:46.585 Number of Firmware Slots: N/A
00:30:46.585 Firmware Slot 1 Read-Only: N/A
00:30:46.585 Firmware Activation Without Reset: N/A
00:30:46.585 Multiple Update Detection Support: N/A
00:30:46.585 Firmware Update Granularity: No Information Provided
00:30:46.585 Per-Namespace SMART Log: No
00:30:46.585 Asymmetric Namespace Access Log Page: Not Supported
00:30:46.585 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:30:46.585 Command Effects Log Page: Not Supported
00:30:46.585 Get Log Page Extended Data: Supported
00:30:46.585 Telemetry Log Pages: Not Supported
00:30:46.585 Persistent Event Log Pages: Not Supported
00:30:46.585 Supported Log Pages Log Page: May Support
00:30:46.585 Commands Supported & Effects Log Page: Not Supported
00:30:46.585 Feature Identifiers & Effects Log Page: May Support
00:30:46.585 NVMe-MI Commands & Effects Log Page: May Support
00:30:46.585 Data Area 4 for Telemetry Log: Not Supported
00:30:46.585 Error Log Page Entries Supported: 128
00:30:46.585 Keep Alive: Not Supported
00:30:46.585
00:30:46.585 NVM Command Set Attributes
00:30:46.585 ==========================
00:30:46.585 Submission Queue Entry Size
00:30:46.585 Max: 1
00:30:46.585 Min: 1
00:30:46.585 Completion Queue Entry Size
00:30:46.585 Max: 1
00:30:46.585 Min: 1
00:30:46.585 Number of Namespaces: 0
00:30:46.585 Compare Command: Not Supported
00:30:46.585 Write Uncorrectable Command: Not Supported
00:30:46.585 Dataset Management Command: Not Supported
00:30:46.585 Write Zeroes Command: Not Supported
00:30:46.585 Set Features Save Field: Not Supported
00:30:46.585 Reservations: Not Supported
00:30:46.585 Timestamp: Not Supported
00:30:46.585 Copy: Not Supported
00:30:46.585 Volatile Write Cache: Not Present
00:30:46.585 Atomic Write Unit (Normal): 1
00:30:46.585 Atomic Write Unit (PFail): 1
00:30:46.585 Atomic Compare & Write Unit: 1
00:30:46.585 Fused Compare & Write: Supported
00:30:46.585 Scatter-Gather List
00:30:46.585 SGL Command Set: Supported
00:30:46.585 SGL Keyed: Supported
00:30:46.585 SGL Bit Bucket Descriptor: Not Supported
00:30:46.585 SGL Metadata Pointer: Not Supported
00:30:46.585 Oversized SGL: Not Supported
00:30:46.585 SGL Metadata Address: Not Supported
00:30:46.585 SGL Offset: Supported
00:30:46.585 Transport SGL Data Block: Not Supported
00:30:46.585 Replay Protected Memory Block: Not Supported
00:30:46.585
00:30:46.585 Firmware Slot Information
00:30:46.585 =========================
00:30:46.585 Active slot: 0
00:30:46.585
00:30:46.585
00:30:46.585 Error Log
00:30:46.585 =========
00:30:46.585
00:30:46.585 Active Namespaces
00:30:46.585 =================
00:30:46.585 Discovery Log Page
00:30:46.585 ==================
00:30:46.585 Generation Counter: 2
00:30:46.585 Number of Records: 2
00:30:46.585 Record Format: 0
00:30:46.585
00:30:46.585 Discovery Log Entry 0
00:30:46.585 ----------------------
00:30:46.585 Transport Type: 3 (TCP)
00:30:46.585 Address Family: 1 (IPv4)
00:30:46.585 Subsystem Type: 3 (Current Discovery Subsystem)
00:30:46.585 Entry Flags:
00:30:46.585 Duplicate Returned Information: 1
00:30:46.585 Explicit Persistent Connection Support for Discovery: 1
00:30:46.585 Transport Requirements:
00:30:46.585 Secure Channel: Not Required
00:30:46.585 Port ID: 0 (0x0000)
00:30:46.585 Controller ID: 65535 (0xffff)
00:30:46.585 Admin Max SQ Size: 128
00:30:46.585 Transport Service Identifier: 4420
00:30:46.585 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:30:46.585 Transport Address: 10.0.0.2
00:30:46.585 Discovery Log Entry 1
00:30:46.585 ----------------------
00:30:46.585 Transport Type: 3 (TCP)
00:30:46.585 Address Family: 1 (IPv4)
00:30:46.585 Subsystem Type: 2 (NVM Subsystem)
00:30:46.585 Entry Flags:
00:30:46.585 Duplicate Returned Information: 0
00:30:46.585 Explicit Persistent Connection Support for Discovery: 0
00:30:46.585 Transport Requirements:
00:30:46.585 Secure Channel: Not Required
00:30:46.585 Port ID: 0 (0x0000)
00:30:46.585 Controller ID: 65535 (0xffff)
00:30:46.585 Admin Max SQ Size: 128
00:30:46.585 Transport Service Identifier: 4420
00:30:46.585 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:30:46.585 Transport Address: 10.0.0.2 [2024-11-18 11:59:12.449768] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:30:46.585 [2024-11-18 11:59:12.449815] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:46.585 [2024-11-18 11:59:12.449838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.585 [2024-11-18 11:59:12.449868] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b280) on tqpair=0x615000015700 00:30:46.585 [2024-11-18 11:59:12.449882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.585 [2024-11-18 11:59:12.449895] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b400) on tqpair=0x615000015700 00:30:46.585 [2024-11-18 11:59:12.449908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.585 [2024-11-18 11:59:12.449920] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:46.585 [2024-11-18 11:59:12.449933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.585 [2024-11-18 11:59:12.449953] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:46.585 [2024-11-18 11:59:12.449967] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:46.585 [2024-11-18 11:59:12.449979] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:46.585 [2024-11-18 11:59:12.450008] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.585 [2024-11-18 11:59:12.450044] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:46.585 [2024-11-18 11:59:12.450200] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:46.585 [2024-11-18 11:59:12.450223] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:46.585 [2024-11-18 11:59:12.450236] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:46.585 [2024-11-18 11:59:12.450248] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:46.585 [2024-11-18 11:59:12.450270] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:46.585 [2024-11-18 11:59:12.450284] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:46.585 [2024-11-18 11:59:12.450296] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:46.585 [2024-11-18 11:59:12.450323] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.585 [2024-11-18 11:59:12.450379] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:46.585 [2024-11-18 11:59:12.450636] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:46.585 [2024-11-18 11:59:12.450658] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:46.585 [2024-11-18 11:59:12.450670] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:46.585 [2024-11-18 11:59:12.450682] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:46.585 [2024-11-18 11:59:12.450702] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:30:46.585 [2024-11-18 
11:59:12.450718] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:30:46.585 [2024-11-18 11:59:12.450744] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:46.585 [2024-11-18 11:59:12.450760] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:46.585 [2024-11-18 11:59:12.450772] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:46.585 [2024-11-18 11:59:12.450792] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.585 [2024-11-18 11:59:12.450839] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:46.585 [2024-11-18 11:59:12.451000] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:46.585 [2024-11-18 11:59:12.451022] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:46.585 [2024-11-18 11:59:12.451034] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:46.585 [2024-11-18 11:59:12.451045] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:46.585 [2024-11-18 11:59:12.451073] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:46.585 [2024-11-18 11:59:12.451089] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:46.585 [2024-11-18 11:59:12.451100] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:46.585 [2024-11-18 11:59:12.451119] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.585 [2024-11-18 11:59:12.451150] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:46.585 [2024-11-18 11:59:12.451270] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:46.586 [2024-11-18 11:59:12.451292] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:46.586 [2024-11-18 11:59:12.451304] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:46.586 [2024-11-18 11:59:12.451315] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:46.586 [2024-11-18 11:59:12.451342] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:46.586 [2024-11-18 11:59:12.451357] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:46.586 [2024-11-18 11:59:12.451368] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:46.586 [2024-11-18 11:59:12.451387] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.586 [2024-11-18 11:59:12.451417] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:46.586 [2024-11-18 11:59:12.451593] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:46.586 [2024-11-18 11:59:12.451614] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:46.586 [2024-11-18 11:59:12.451627] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:46.586 [2024-11-18 11:59:12.451638] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:46.586 [2024-11-18 11:59:12.451665] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:46.586 [2024-11-18 11:59:12.451680] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:46.586 [2024-11-18 11:59:12.451691] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:46.586 [2024-11-18 11:59:12.451710] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.586 [2024-11-18 11:59:12.451741] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:46.586 [2024-11-18 11:59:12.451899] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:46.586 [2024-11-18 11:59:12.451920] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:46.586 [2024-11-18 11:59:12.451931] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:46.586 [2024-11-18 11:59:12.451942] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:46.586 [2024-11-18 11:59:12.451969] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:46.586 [2024-11-18 11:59:12.451985] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:46.586 [2024-11-18 11:59:12.451996] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:46.586 [2024-11-18 11:59:12.452014] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.586 [2024-11-18 11:59:12.452045] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:46.586 [2024-11-18 11:59:12.452255] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:46.586 [2024-11-18 11:59:12.452277] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:46.586 [2024-11-18 11:59:12.452289] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:46.586 [2024-11-18 11:59:12.452300] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:46.586 [2024-11-18 11:59:12.452327] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:46.586 [2024-11-18 
11:59:12.452342] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:46.586 [2024-11-18 11:59:12.452353] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:46.586 [2024-11-18 11:59:12.452372] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.586 [2024-11-18 11:59:12.452402] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:46.586 [2024-11-18 11:59:12.452532] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:46.586 [2024-11-18 11:59:12.452554] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:46.586 [2024-11-18 11:59:12.452566] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:46.586 [2024-11-18 11:59:12.452577] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:46.586 [2024-11-18 11:59:12.452604] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:46.586 [2024-11-18 11:59:12.452620] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:46.586 [2024-11-18 11:59:12.452631] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:46.586 [2024-11-18 11:59:12.452654] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.586 [2024-11-18 11:59:12.452687] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:46.586 [2024-11-18 11:59:12.452848] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:46.586 [2024-11-18 11:59:12.452870] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:46.586 [2024-11-18 11:59:12.452882] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:30:46.586 [2024-11-18 11:59:12.452893] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:46.586 [2024-11-18 11:59:12.452920] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:46.586 [2024-11-18 11:59:12.452936] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:46.586 [2024-11-18 11:59:12.452947] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:46.586 [2024-11-18 11:59:12.452966] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.586 [2024-11-18 11:59:12.453012] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:46.586 [2024-11-18 11:59:12.453198] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:46.586 [2024-11-18 11:59:12.453218] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:46.586 [2024-11-18 11:59:12.453230] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:46.586 [2024-11-18 11:59:12.453241] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:46.586 [2024-11-18 11:59:12.453268] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:46.586 [2024-11-18 11:59:12.453284] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:46.586 [2024-11-18 11:59:12.453295] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:46.586 [2024-11-18 11:59:12.453313] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.586 [2024-11-18 11:59:12.453344] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:46.586 [2024-11-18 11:59:12.453459] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:46.586 [2024-11-18 11:59:12.453488] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:46.586 [2024-11-18 11:59:12.453510] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:46.586 [2024-11-18 11:59:12.453534] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:46.586 [2024-11-18 11:59:12.453564] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:46.586 [2024-11-18 11:59:12.453580] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:46.586 [2024-11-18 11:59:12.453591] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:46.586 [2024-11-18 11:59:12.453610] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.586 [2024-11-18 11:59:12.453641] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:46.586 [2024-11-18 11:59:12.453804] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:46.586 [2024-11-18 11:59:12.453824] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:46.586 [2024-11-18 11:59:12.453836] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:46.586 [2024-11-18 11:59:12.453847] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:46.586 [2024-11-18 11:59:12.453874] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:46.586 [2024-11-18 11:59:12.453890] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:46.586 [2024-11-18 11:59:12.453901] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:46.586 [2024-11-18 11:59:12.453919] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.586 [2024-11-18 11:59:12.453964] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:46.586 [2024-11-18 11:59:12.454165] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:46.586 [2024-11-18 11:59:12.454186] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:46.586 [2024-11-18 11:59:12.454198] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:46.586 [2024-11-18 11:59:12.454209] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:46.586 [2024-11-18 11:59:12.454235] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:46.586 [2024-11-18 11:59:12.454251] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:46.586 [2024-11-18 11:59:12.454262] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:46.586 [2024-11-18 11:59:12.454280] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.587 [2024-11-18 11:59:12.454311] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:46.587 [2024-11-18 11:59:12.454445] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:46.587 [2024-11-18 11:59:12.454467] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:46.587 [2024-11-18 11:59:12.454484] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:46.587 [2024-11-18 11:59:12.454504] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:46.587 [2024-11-18 11:59:12.454533] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:46.587 [2024-11-18 
11:59:12.454549] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:30:46.587 [2024-11-18 11:59:12.454559] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700)
00:30:46.587 [2024-11-18 11:59:12.454577] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:46.587 [2024-11-18 11:59:12.454609] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0
00:30:46.587 [2024-11-18 11:59:12.454761] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:30:46.587 [2024-11-18 11:59:12.454791] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:30:46.587 [2024-11-18 11:59:12.454804] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:30:46.587 [2024-11-18 11:59:12.454815] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700
00:30:46.587 [2024-11-18 11:59:12.454842] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:30:46.587 [2024-11-18 11:59:12.454858] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:30:46.587 [2024-11-18 11:59:12.454869] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700)
00:30:46.587 [2024-11-18 11:59:12.454887] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:46.587 [2024-11-18 11:59:12.454918] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0
00:30:46.587 [2024-11-18 11:59:12.455066] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:30:46.587 [2024-11-18 11:59:12.455087] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:30:46.587 [2024-11-18 11:59:12.455099] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:30:46.587 [2024-11-18 11:59:12.455110] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700
00:30:46.587 [2024-11-18 11:59:12.455137] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:30:46.587 [2024-11-18 11:59:12.455152] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:30:46.587 [2024-11-18 11:59:12.455163] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700)
00:30:46.587 [2024-11-18 11:59:12.455187] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:46.587 [2024-11-18 11:59:12.455219] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0
00:30:46.587 [2024-11-18 11:59:12.455359] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:30:46.587 [2024-11-18 11:59:12.455380] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:30:46.587 [2024-11-18 11:59:12.455392] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:30:46.587 [2024-11-18 11:59:12.455403] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700
00:30:46.587 [2024-11-18 11:59:12.455430] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:30:46.587 [2024-11-18 11:59:12.455445] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:30:46.587 [2024-11-18 11:59:12.455456] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700)
00:30:46.587 [2024-11-18 11:59:12.455474] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:46.587 [2024-11-18 11:59:12.455513] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0
00:30:46.587 [2024-11-18 11:59:12.455662] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:30:46.587 [2024-11-18 11:59:12.455684] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:30:46.587 [2024-11-18 11:59:12.455695] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:30:46.587 [2024-11-18 11:59:12.455706] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700
00:30:46.587 [2024-11-18 11:59:12.455733] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:30:46.587 [2024-11-18 11:59:12.455749] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:30:46.587 [2024-11-18 11:59:12.455760] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700)
00:30:46.587 [2024-11-18 11:59:12.455778] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:46.587 [2024-11-18 11:59:12.455809] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0
00:30:46.587 [2024-11-18 11:59:12.455929] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:30:46.587 [2024-11-18 11:59:12.455954] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:30:46.587 [2024-11-18 11:59:12.455967] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:30:46.587 [2024-11-18 11:59:12.455979] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700
00:30:46.587 [2024-11-18 11:59:12.456006] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:30:46.587 [2024-11-18 11:59:12.456021] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:30:46.587 [2024-11-18 11:59:12.456033] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700)
00:30:46.587 [2024-11-18 11:59:12.456051] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:46.587 [2024-11-18 11:59:12.456082] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0
00:30:46.587 [2024-11-18 11:59:12.456230] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:30:46.587 [2024-11-18 11:59:12.456251] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:30:46.587 [2024-11-18 11:59:12.456263] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:30:46.587 [2024-11-18 11:59:12.456274] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700
00:30:46.587 [2024-11-18 11:59:12.456300] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:30:46.587 [2024-11-18 11:59:12.456316] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:30:46.587 [2024-11-18 11:59:12.456327] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700)
00:30:46.587 [2024-11-18 11:59:12.456345] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:46.587 [2024-11-18 11:59:12.456375] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0
00:30:46.587 [2024-11-18 11:59:12.460506] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:30:46.587 [2024-11-18 11:59:12.460532] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:30:46.587 [2024-11-18 11:59:12.460558] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:30:46.587 [2024-11-18 11:59:12.460569] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700
00:30:46.587 [2024-11-18 11:59:12.460609] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:30:46.587 [2024-11-18 11:59:12.460626] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:30:46.587 [2024-11-18 11:59:12.460637] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700)
00:30:46.587 [2024-11-18 11:59:12.460655] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:46.587 [2024-11-18 11:59:12.460702] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0
00:30:46.587 [2024-11-18 11:59:12.460849] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:30:46.587 [2024-11-18 11:59:12.460871] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:30:46.587 [2024-11-18 11:59:12.460882] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:30:46.587 [2024-11-18 11:59:12.460894] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700
00:30:46.587 [2024-11-18 11:59:12.460917] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 10 milliseconds
00:30:46.848 
00:30:46.848 11:59:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all
00:30:46.848 [2024-11-18 11:59:12.566226] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization...
00:30:46.848 [2024-11-18 11:59:12.566321] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3072470 ]
00:30:46.849 [2024-11-18 11:59:12.643038] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout)
00:30:46.849 [2024-11-18 11:59:12.643157] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2
00:30:46.849 [2024-11-18 11:59:12.643178] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420
00:30:46.849 [2024-11-18 11:59:12.643216] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null)
00:30:46.849 [2024-11-18 11:59:12.643239] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix
00:30:46.849 [2024-11-18 11:59:12.647068] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout)
00:30:46.849 [2024-11-18 11:59:12.647152] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x615000015700 0
00:30:46.849 [2024-11-18 11:59:12.654514] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1
00:30:46.849 [2024-11-18 11:59:12.654551] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1
00:30:46.849 [2024-11-18 11:59:12.654568] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0
00:30:46.849 [2024-11-18 11:59:12.654579] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0
00:30:46.849 [2024-11-18 11:59:12.654648] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:30:46.849 [2024-11-18 11:59:12.654669] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:30:46.849 [2024-11-18 11:59:12.654688] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700)
00:30:46.849 [2024-11-18 11:59:12.654719] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:30:46.849 [2024-11-18 11:59:12.654759] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0
00:30:46.849 [2024-11-18 11:59:12.662515] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:30:46.849 [2024-11-18 11:59:12.662543] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:30:46.849 [2024-11-18 11:59:12.662557] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:30:46.849 [2024-11-18 11:59:12.662570] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700
00:30:46.849 [2024-11-18 11:59:12.662596] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001
00:30:46.849 [2024-11-18 11:59:12.662619] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout)
00:30:46.849 [2024-11-18 11:59:12.662635] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout)
00:30:46.849 [2024-11-18 11:59:12.662671] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:30:46.849 [2024-11-18 11:59:12.662687] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:30:46.849 [2024-11-18 11:59:12.662705] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700)
00:30:46.849 [2024-11-18 11:59:12.662727] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:46.849 [2024-11-18 11:59:12.662763] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0
00:30:46.849 [2024-11-18 11:59:12.662933] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:30:46.849 [2024-11-18 11:59:12.662960] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:30:46.849 [2024-11-18 11:59:12.662975] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:30:46.849 [2024-11-18 11:59:12.662987] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700
00:30:46.849 [2024-11-18 11:59:12.663017] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout)
00:30:46.849 [2024-11-18 11:59:12.663042] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout)
00:30:46.849 [2024-11-18 11:59:12.663064] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:30:46.849 [2024-11-18 11:59:12.663093] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:30:46.849 [2024-11-18 11:59:12.663105] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700)
00:30:46.849 [2024-11-18 11:59:12.663130] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:46.849 [2024-11-18 11:59:12.663181] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0
00:30:46.849 [2024-11-18 11:59:12.663323] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:30:46.849 [2024-11-18 11:59:12.663344] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:30:46.849 [2024-11-18 11:59:12.663360] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:30:46.849 [2024-11-18 11:59:12.663373] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700
00:30:46.849 [2024-11-18 11:59:12.663389] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout)
00:30:46.849 [2024-11-18 11:59:12.663413] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms)
00:30:46.849 [2024-11-18 11:59:12.663434] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:30:46.849 [2024-11-18 11:59:12.663454] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:30:46.849 [2024-11-18 11:59:12.663466] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700)
00:30:46.849 [2024-11-18 11:59:12.663486] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:46.849 [2024-11-18 11:59:12.663529] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0
00:30:46.849 [2024-11-18 11:59:12.663672] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:30:46.849 [2024-11-18 11:59:12.663693] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:30:46.849 [2024-11-18 11:59:12.663705] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:30:46.849 [2024-11-18 11:59:12.663716] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700
00:30:46.849 [2024-11-18 11:59:12.663732] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms)
00:30:46.849 [2024-11-18 11:59:12.663765] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:30:46.849 [2024-11-18 11:59:12.663782] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:30:46.849 [2024-11-18 11:59:12.663794] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700)
00:30:46.849 [2024-11-18 11:59:12.663814] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:46.849 [2024-11-18 11:59:12.663861] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0
00:30:46.849 [2024-11-18 11:59:12.664036] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:30:46.849 [2024-11-18 11:59:12.664059] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:30:46.849 [2024-11-18 11:59:12.664071] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:30:46.849 [2024-11-18 11:59:12.664082] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700
00:30:46.849 [2024-11-18 11:59:12.664097] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0
00:30:46.849 [2024-11-18 11:59:12.664119] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms)
00:30:46.849 [2024-11-18 11:59:12.664146] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms)
00:30:46.849 [2024-11-18 11:59:12.664265] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1
00:30:46.849 [2024-11-18 11:59:12.664279] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms)
00:30:46.849 [2024-11-18 11:59:12.664306] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:30:46.849 [2024-11-18 11:59:12.664335] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:30:46.849 [2024-11-18 11:59:12.664346] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700)
00:30:46.849 [2024-11-18 11:59:12.664365] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:46.849 [2024-11-18 11:59:12.664397] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0
00:30:46.849 [2024-11-18 11:59:12.664562] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:30:46.849 [2024-11-18 11:59:12.664584] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:30:46.849 [2024-11-18 11:59:12.664596] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:30:46.849 [2024-11-18 11:59:12.664608] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700
00:30:46.849 [2024-11-18 11:59:12.664623] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms)
00:30:46.849 [2024-11-18 11:59:12.664657] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:30:46.849 [2024-11-18 11:59:12.664674] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:30:46.849 [2024-11-18 11:59:12.664686] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700)
00:30:46.850 [2024-11-18 11:59:12.664706] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:46.850 [2024-11-18 11:59:12.664743] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0
00:30:46.850 [2024-11-18 11:59:12.664877] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:30:46.850 [2024-11-18 11:59:12.664898] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:30:46.850 [2024-11-18 11:59:12.664910] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:30:46.850 [2024-11-18 11:59:12.664925] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700
00:30:46.850 [2024-11-18 11:59:12.664941] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready
00:30:46.850 [2024-11-18 11:59:12.664956] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms)
00:30:46.850 [2024-11-18 11:59:12.664978] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout)
00:30:46.850 [2024-11-18 11:59:12.664999] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms)
00:30:46.850 [2024-11-18 11:59:12.665030] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:30:46.850 [2024-11-18 11:59:12.665064] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700)
00:30:46.850 [2024-11-18 11:59:12.665085] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:46.850 [2024-11-18 11:59:12.665117] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0
00:30:46.850 [2024-11-18 11:59:12.665336] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:30:46.850 [2024-11-18 11:59:12.665367] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:30:46.850 [2024-11-18 11:59:12.665381] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:30:46.850 [2024-11-18 11:59:12.665393] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=0
00:30:46.850 [2024-11-18 11:59:12.665407] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x615000015700): expected_datao=0, payload_size=4096
00:30:46.850 [2024-11-18 11:59:12.665420] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:30:46.850 [2024-11-18 11:59:12.665450] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:30:46.850 [2024-11-18 11:59:12.665467] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:30:46.850 [2024-11-18 11:59:12.705673] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:30:46.850 [2024-11-18 11:59:12.705702] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:30:46.850 [2024-11-18 11:59:12.705715] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:30:46.850 [2024-11-18 11:59:12.705727] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700
00:30:46.850 [2024-11-18 11:59:12.705752] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295
00:30:46.850 [2024-11-18 11:59:12.705781] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072
00:30:46.850 [2024-11-18 11:59:12.705799] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001
00:30:46.850 [2024-11-18 11:59:12.705817] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16
00:30:46.850 [2024-11-18 11:59:12.705831] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1
00:30:46.850 [2024-11-18 11:59:12.705845] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms)
00:30:46.850 [2024-11-18 11:59:12.705879] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms)
00:30:46.850 [2024-11-18 11:59:12.705902] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:30:46.850 [2024-11-18 11:59:12.705917] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:30:46.850 [2024-11-18 11:59:12.705930] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700)
00:30:46.850 [2024-11-18 11:59:12.705979] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0
00:30:46.850 [2024-11-18 11:59:12.706029] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0
00:30:46.850 [2024-11-18 11:59:12.706220] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:30:46.850 [2024-11-18 11:59:12.706241] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:30:46.850 [2024-11-18 11:59:12.706254] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:30:46.850 [2024-11-18 11:59:12.706266] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700
00:30:46.850 [2024-11-18 11:59:12.706286] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:30:46.850 [2024-11-18 11:59:12.706308] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:30:46.850 [2024-11-18 11:59:12.706320] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700)
00:30:46.850 [2024-11-18 11:59:12.706340] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:30:46.850 [2024-11-18 11:59:12.706358] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:30:46.850 [2024-11-18 11:59:12.706370] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:30:46.850 [2024-11-18 11:59:12.706381] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x615000015700)
00:30:46.850 [2024-11-18 11:59:12.706419] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:30:46.850 [2024-11-18 11:59:12.706438] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:30:46.850 [2024-11-18 11:59:12.706450] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:30:46.850 [2024-11-18 11:59:12.706460] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x615000015700)
00:30:46.850 [2024-11-18 11:59:12.706477] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:30:46.850 [2024-11-18 11:59:12.710520] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:30:46.850 [2024-11-18 11:59:12.710542] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:30:46.850 [2024-11-18 11:59:12.710554] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700)
00:30:46.850 [2024-11-18 11:59:12.710572] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:30:46.850 [2024-11-18 11:59:12.710587] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms)
00:30:46.850 [2024-11-18 11:59:12.710616] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms)
00:30:46.850 [2024-11-18 11:59:12.710638] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:30:46.850 [2024-11-18 11:59:12.710652] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700)
00:30:46.850 [2024-11-18 11:59:12.710671] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:46.850 [2024-11-18 11:59:12.710720] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0
00:30:46.850 [2024-11-18 11:59:12.710741] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b280, cid 1, qid 0
00:30:46.850 [2024-11-18 11:59:12.710754] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b400, cid 2, qid 0
00:30:46.850 [2024-11-18 11:59:12.710767] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0
00:30:46.850 [2024-11-18 11:59:12.710779] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0
00:30:46.850 [2024-11-18 11:59:12.710946] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:30:46.850 [2024-11-18 11:59:12.710968] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:30:46.850 [2024-11-18 11:59:12.710980] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:30:46.850 [2024-11-18 11:59:12.710992] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700
00:30:46.850 [2024-11-18 11:59:12.711008] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us
00:30:46.850 [2024-11-18 11:59:12.711048] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms)
00:30:46.850 [2024-11-18 11:59:12.711071] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms)
00:30:46.850 [2024-11-18 11:59:12.711089] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms)
00:30:46.850 [2024-11-18 11:59:12.711106] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:30:46.850 [2024-11-18 11:59:12.711120] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:30:46.850 [2024-11-18 11:59:12.711132] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700)
00:30:46.850 [2024-11-18 11:59:12.711152] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:30:46.850 [2024-11-18 11:59:12.711188] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0
00:30:46.850 [2024-11-18 11:59:12.711335] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:30:46.850 [2024-11-18 11:59:12.711356] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:30:46.850 [2024-11-18 11:59:12.711368] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:30:46.851 [2024-11-18 11:59:12.711380] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700
00:30:46.851 [2024-11-18 11:59:12.711470] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms)
00:30:46.851 [2024-11-18 11:59:12.711520] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms)
00:30:46.851 [2024-11-18 11:59:12.711550] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:30:46.851 [2024-11-18 11:59:12.711573] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700)
00:30:46.851 [2024-11-18 11:59:12.711594] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:46.851 [2024-11-18 11:59:12.711627] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0
00:30:46.851 [2024-11-18 11:59:12.711801] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:30:46.851 [2024-11-18 11:59:12.711824] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:30:46.851 [2024-11-18 11:59:12.711836] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:30:46.851 [2024-11-18 11:59:12.711847] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=4
00:30:46.851 [2024-11-18 11:59:12.711860] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=4096
00:30:46.851 [2024-11-18 11:59:12.711871] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:30:46.851 [2024-11-18 11:59:12.711895] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:30:46.851 [2024-11-18 11:59:12.711910] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:30:46.851 [2024-11-18 11:59:12.711929] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:30:46.851 [2024-11-18 11:59:12.711946] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:30:46.851 [2024-11-18 11:59:12.711958] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:30:46.851 [2024-11-18 11:59:12.711969] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700
00:30:46.851 [2024-11-18 11:59:12.712015] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added
00:30:46.851 [2024-11-18 11:59:12.712047] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms)
00:30:46.851 [2024-11-18 11:59:12.712084] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms)
00:30:46.851 [2024-11-18 11:59:12.712127] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:30:46.851 [2024-11-18 11:59:12.712142] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700)
00:30:46.851 [2024-11-18 11:59:12.712177] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:46.851 [2024-11-18 11:59:12.712208] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0
00:30:46.851 [2024-11-18 11:59:12.712436] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:30:46.851 [2024-11-18 11:59:12.712459] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:30:46.851 [2024-11-18 11:59:12.712471] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:30:46.851 [2024-11-18 11:59:12.712482] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=4
00:30:46.851 [2024-11-18 11:59:12.712509] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=4096
00:30:46.851 [2024-11-18 11:59:12.712523] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:30:46.851 [2024-11-18 11:59:12.712542] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:30:46.851 [2024-11-18 11:59:12.712555] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:30:46.851 [2024-11-18 11:59:12.712574] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:30:46.851 [2024-11-18 11:59:12.712591] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:30:46.851 [2024-11-18 11:59:12.712602] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:30:46.851 [2024-11-18 11:59:12.712613] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700
00:30:46.851 [2024-11-18 11:59:12.712654] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms)
00:30:46.851 [2024-11-18 11:59:12.712699] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms)
00:30:46.851 [2024-11-18 11:59:12.712727] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:30:46.851 [2024-11-18 11:59:12.712742] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700)
00:30:46.851 [2024-11-18 11:59:12.712777] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:46.851 [2024-11-18 11:59:12.712810] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0
00:30:46.851 [2024-11-18 11:59:12.713049] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:30:46.851 [2024-11-18 11:59:12.713072] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:30:46.851 [2024-11-18 11:59:12.713096] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:30:46.851 [2024-11-18 11:59:12.713108] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=4
00:30:46.851 [2024-11-18 11:59:12.713120] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=4096
00:30:46.851 [2024-11-18 11:59:12.713132] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:30:46.851 [2024-11-18
11:59:12.713151] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:46.851 [2024-11-18 11:59:12.713164] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:46.851 [2024-11-18 11:59:12.713182] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:46.851 [2024-11-18 11:59:12.713199] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:46.851 [2024-11-18 11:59:12.713210] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:46.851 [2024-11-18 11:59:12.713221] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:46.851 [2024-11-18 11:59:12.713248] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:30:46.851 [2024-11-18 11:59:12.713274] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:30:46.851 [2024-11-18 11:59:12.713297] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:30:46.851 [2024-11-18 11:59:12.713316] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:30:46.851 [2024-11-18 11:59:12.713331] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:30:46.851 [2024-11-18 11:59:12.713360] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:30:46.851 [2024-11-18 11:59:12.713378] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:30:46.851 [2024-11-18 11:59:12.713396] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:30:46.851 [2024-11-18 11:59:12.713426] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:30:46.851 [2024-11-18 11:59:12.713486] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:46.851 [2024-11-18 11:59:12.713512] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:46.851 [2024-11-18 11:59:12.713533] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.851 [2024-11-18 11:59:12.713558] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:46.851 [2024-11-18 11:59:12.713572] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:46.851 [2024-11-18 11:59:12.713584] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000015700) 00:30:46.851 [2024-11-18 11:59:12.713601] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:30:46.851 [2024-11-18 11:59:12.713634] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:46.851 [2024-11-18 11:59:12.713668] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:30:46.851 [2024-11-18 11:59:12.713830] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:46.851 [2024-11-18 11:59:12.713856] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:46.851 [2024-11-18 11:59:12.713868] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:46.851 [2024-11-18 11:59:12.713880] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 
00:30:46.851 [2024-11-18 11:59:12.713900] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:46.851 [2024-11-18 11:59:12.713916] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:46.851 [2024-11-18 11:59:12.713928] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:46.851 [2024-11-18 11:59:12.713939] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 00:30:46.851 [2024-11-18 11:59:12.713979] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:46.851 [2024-11-18 11:59:12.713995] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000015700) 00:30:46.851 [2024-11-18 11:59:12.714013] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.851 [2024-11-18 11:59:12.714043] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:30:46.851 [2024-11-18 11:59:12.714192] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:46.852 [2024-11-18 11:59:12.714215] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:46.852 [2024-11-18 11:59:12.714227] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:46.852 [2024-11-18 11:59:12.714238] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 00:30:46.852 [2024-11-18 11:59:12.714264] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:46.852 [2024-11-18 11:59:12.714280] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000015700) 00:30:46.852 [2024-11-18 11:59:12.714304] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.852 [2024-11-18 
11:59:12.714337] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:30:46.852 [2024-11-18 11:59:12.714448] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:46.852 [2024-11-18 11:59:12.714469] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:46.852 [2024-11-18 11:59:12.714485] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:46.852 [2024-11-18 11:59:12.718530] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 00:30:46.852 [2024-11-18 11:59:12.718561] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:46.852 [2024-11-18 11:59:12.718577] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000015700) 00:30:46.852 [2024-11-18 11:59:12.718596] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.852 [2024-11-18 11:59:12.718628] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:30:46.852 [2024-11-18 11:59:12.718776] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:46.852 [2024-11-18 11:59:12.718797] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:46.852 [2024-11-18 11:59:12.718809] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:46.852 [2024-11-18 11:59:12.718820] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 00:30:46.852 [2024-11-18 11:59:12.718863] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:46.852 [2024-11-18 11:59:12.718882] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000015700) 00:30:46.852 [2024-11-18 11:59:12.718903] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE 
(02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.852 [2024-11-18 11:59:12.718927] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:46.852 [2024-11-18 11:59:12.718942] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:46.852 [2024-11-18 11:59:12.718976] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.852 [2024-11-18 11:59:12.718999] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:46.852 [2024-11-18 11:59:12.719013] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x615000015700) 00:30:46.852 [2024-11-18 11:59:12.719038] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.852 [2024-11-18 11:59:12.719081] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:46.852 [2024-11-18 11:59:12.719097] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x615000015700) 00:30:46.852 [2024-11-18 11:59:12.719115] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.852 [2024-11-18 11:59:12.719147] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:30:46.852 [2024-11-18 11:59:12.719180] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:46.852 [2024-11-18 11:59:12.719193] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001ba00, cid 6, qid 0 00:30:46.852 [2024-11-18 11:59:12.719205] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001bb80, cid 7, qid 0 00:30:46.852 [2024-11-18 11:59:12.719541] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:46.852 [2024-11-18 11:59:12.719564] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:46.852 [2024-11-18 11:59:12.719576] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:46.852 [2024-11-18 11:59:12.719588] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=8192, cccid=5 00:30:46.852 [2024-11-18 11:59:12.719602] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b880) on tqpair(0x615000015700): expected_datao=0, payload_size=8192 00:30:46.852 [2024-11-18 11:59:12.719614] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:46.852 [2024-11-18 11:59:12.719658] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:46.852 [2024-11-18 11:59:12.719676] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:46.852 [2024-11-18 11:59:12.719696] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:46.852 [2024-11-18 11:59:12.719714] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:46.852 [2024-11-18 11:59:12.719725] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:46.852 [2024-11-18 11:59:12.719736] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=512, cccid=4 00:30:46.852 [2024-11-18 11:59:12.719748] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=512 00:30:46.852 [2024-11-18 11:59:12.719760] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:46.852 [2024-11-18 11:59:12.719786] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:46.852 [2024-11-18 11:59:12.719800] 
nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:46.852 [2024-11-18 11:59:12.719814] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:46.852 [2024-11-18 11:59:12.719829] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:46.852 [2024-11-18 11:59:12.719840] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:46.852 [2024-11-18 11:59:12.719851] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=512, cccid=6 00:30:46.852 [2024-11-18 11:59:12.719878] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001ba00) on tqpair(0x615000015700): expected_datao=0, payload_size=512 00:30:46.852 [2024-11-18 11:59:12.719889] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:46.852 [2024-11-18 11:59:12.719905] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:46.852 [2024-11-18 11:59:12.719917] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:46.852 [2024-11-18 11:59:12.719946] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:46.852 [2024-11-18 11:59:12.719960] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:46.852 [2024-11-18 11:59:12.719971] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:46.852 [2024-11-18 11:59:12.719980] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=7 00:30:46.852 [2024-11-18 11:59:12.719992] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001bb80) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:30:46.852 [2024-11-18 11:59:12.720002] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:46.852 [2024-11-18 11:59:12.720018] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:46.852 [2024-11-18 11:59:12.720030] 
nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:46.852 [2024-11-18 11:59:12.720043] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:46.852 [2024-11-18 11:59:12.720057] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:46.852 [2024-11-18 11:59:12.720067] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:46.852 [2024-11-18 11:59:12.720078] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 00:30:46.852 [2024-11-18 11:59:12.720112] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:46.852 [2024-11-18 11:59:12.720131] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:46.852 [2024-11-18 11:59:12.720141] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:46.852 [2024-11-18 11:59:12.720151] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:46.852 [2024-11-18 11:59:12.720178] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:46.852 [2024-11-18 11:59:12.720195] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:46.852 [2024-11-18 11:59:12.720206] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:46.852 [2024-11-18 11:59:12.720216] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001ba00) on tqpair=0x615000015700 00:30:46.852 [2024-11-18 11:59:12.720239] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:46.852 [2024-11-18 11:59:12.720257] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:46.852 [2024-11-18 11:59:12.720267] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:46.852 [2024-11-18 11:59:12.720278] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001bb80) on tqpair=0x615000015700 00:30:46.852 
===================================================== 00:30:46.852 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:46.852 ===================================================== 00:30:46.852 Controller Capabilities/Features 00:30:46.852 ================================ 00:30:46.852 Vendor ID: 8086 00:30:46.852 Subsystem Vendor ID: 8086 00:30:46.852 Serial Number: SPDK00000000000001 00:30:46.852 Model Number: SPDK bdev Controller 00:30:46.852 Firmware Version: 25.01 00:30:46.852 Recommended Arb Burst: 6 00:30:46.852 IEEE OUI Identifier: e4 d2 5c 00:30:46.853 Multi-path I/O 00:30:46.853 May have multiple subsystem ports: Yes 00:30:46.853 May have multiple controllers: Yes 00:30:46.853 Associated with SR-IOV VF: No 00:30:46.853 Max Data Transfer Size: 131072 00:30:46.853 Max Number of Namespaces: 32 00:30:46.853 Max Number of I/O Queues: 127 00:30:46.853 NVMe Specification Version (VS): 1.3 00:30:46.853 NVMe Specification Version (Identify): 1.3 00:30:46.853 Maximum Queue Entries: 128 00:30:46.853 Contiguous Queues Required: Yes 00:30:46.853 Arbitration Mechanisms Supported 00:30:46.853 Weighted Round Robin: Not Supported 00:30:46.853 Vendor Specific: Not Supported 00:30:46.853 Reset Timeout: 15000 ms 00:30:46.853 Doorbell Stride: 4 bytes 00:30:46.853 NVM Subsystem Reset: Not Supported 00:30:46.853 Command Sets Supported 00:30:46.853 NVM Command Set: Supported 00:30:46.853 Boot Partition: Not Supported 00:30:46.853 Memory Page Size Minimum: 4096 bytes 00:30:46.853 Memory Page Size Maximum: 4096 bytes 00:30:46.853 Persistent Memory Region: Not Supported 00:30:46.853 Optional Asynchronous Events Supported 00:30:46.853 Namespace Attribute Notices: Supported 00:30:46.853 Firmware Activation Notices: Not Supported 00:30:46.853 ANA Change Notices: Not Supported 00:30:46.853 PLE Aggregate Log Change Notices: Not Supported 00:30:46.853 LBA Status Info Alert Notices: Not Supported 00:30:46.853 EGE Aggregate Log Change Notices: Not Supported 
00:30:46.853 Normal NVM Subsystem Shutdown event: Not Supported 00:30:46.853 Zone Descriptor Change Notices: Not Supported 00:30:46.853 Discovery Log Change Notices: Not Supported 00:30:46.853 Controller Attributes 00:30:46.853 128-bit Host Identifier: Supported 00:30:46.853 Non-Operational Permissive Mode: Not Supported 00:30:46.853 NVM Sets: Not Supported 00:30:46.853 Read Recovery Levels: Not Supported 00:30:46.853 Endurance Groups: Not Supported 00:30:46.853 Predictable Latency Mode: Not Supported 00:30:46.853 Traffic Based Keep ALive: Not Supported 00:30:46.853 Namespace Granularity: Not Supported 00:30:46.853 SQ Associations: Not Supported 00:30:46.853 UUID List: Not Supported 00:30:46.853 Multi-Domain Subsystem: Not Supported 00:30:46.853 Fixed Capacity Management: Not Supported 00:30:46.853 Variable Capacity Management: Not Supported 00:30:46.853 Delete Endurance Group: Not Supported 00:30:46.853 Delete NVM Set: Not Supported 00:30:46.853 Extended LBA Formats Supported: Not Supported 00:30:46.853 Flexible Data Placement Supported: Not Supported 00:30:46.853 00:30:46.853 Controller Memory Buffer Support 00:30:46.853 ================================ 00:30:46.853 Supported: No 00:30:46.853 00:30:46.853 Persistent Memory Region Support 00:30:46.853 ================================ 00:30:46.853 Supported: No 00:30:46.853 00:30:46.853 Admin Command Set Attributes 00:30:46.853 ============================ 00:30:46.853 Security Send/Receive: Not Supported 00:30:46.853 Format NVM: Not Supported 00:30:46.853 Firmware Activate/Download: Not Supported 00:30:46.853 Namespace Management: Not Supported 00:30:46.853 Device Self-Test: Not Supported 00:30:46.853 Directives: Not Supported 00:30:46.853 NVMe-MI: Not Supported 00:30:46.853 Virtualization Management: Not Supported 00:30:46.853 Doorbell Buffer Config: Not Supported 00:30:46.853 Get LBA Status Capability: Not Supported 00:30:46.853 Command & Feature Lockdown Capability: Not Supported 00:30:46.853 Abort Command 
Limit: 4 00:30:46.853 Async Event Request Limit: 4 00:30:46.853 Number of Firmware Slots: N/A 00:30:46.853 Firmware Slot 1 Read-Only: N/A 00:30:46.853 Firmware Activation Without Reset: N/A 00:30:46.853 Multiple Update Detection Support: N/A 00:30:46.853 Firmware Update Granularity: No Information Provided 00:30:46.853 Per-Namespace SMART Log: No 00:30:46.853 Asymmetric Namespace Access Log Page: Not Supported 00:30:46.853 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:30:46.853 Command Effects Log Page: Supported 00:30:46.853 Get Log Page Extended Data: Supported 00:30:46.853 Telemetry Log Pages: Not Supported 00:30:46.853 Persistent Event Log Pages: Not Supported 00:30:46.853 Supported Log Pages Log Page: May Support 00:30:46.853 Commands Supported & Effects Log Page: Not Supported 00:30:46.853 Feature Identifiers & Effects Log Page:May Support 00:30:46.853 NVMe-MI Commands & Effects Log Page: May Support 00:30:46.853 Data Area 4 for Telemetry Log: Not Supported 00:30:46.853 Error Log Page Entries Supported: 128 00:30:46.853 Keep Alive: Supported 00:30:46.853 Keep Alive Granularity: 10000 ms 00:30:46.853 00:30:46.853 NVM Command Set Attributes 00:30:46.853 ========================== 00:30:46.853 Submission Queue Entry Size 00:30:46.853 Max: 64 00:30:46.853 Min: 64 00:30:46.853 Completion Queue Entry Size 00:30:46.853 Max: 16 00:30:46.853 Min: 16 00:30:46.853 Number of Namespaces: 32 00:30:46.853 Compare Command: Supported 00:30:46.853 Write Uncorrectable Command: Not Supported 00:30:46.853 Dataset Management Command: Supported 00:30:46.853 Write Zeroes Command: Supported 00:30:46.853 Set Features Save Field: Not Supported 00:30:46.853 Reservations: Supported 00:30:46.853 Timestamp: Not Supported 00:30:46.853 Copy: Supported 00:30:46.853 Volatile Write Cache: Present 00:30:46.853 Atomic Write Unit (Normal): 1 00:30:46.853 Atomic Write Unit (PFail): 1 00:30:46.853 Atomic Compare & Write Unit: 1 00:30:46.853 Fused Compare & Write: Supported 00:30:46.853 Scatter-Gather 
List 00:30:46.853 SGL Command Set: Supported 00:30:46.853 SGL Keyed: Supported 00:30:46.853 SGL Bit Bucket Descriptor: Not Supported 00:30:46.853 SGL Metadata Pointer: Not Supported 00:30:46.853 Oversized SGL: Not Supported 00:30:46.853 SGL Metadata Address: Not Supported 00:30:46.853 SGL Offset: Supported 00:30:46.853 Transport SGL Data Block: Not Supported 00:30:46.853 Replay Protected Memory Block: Not Supported 00:30:46.853 00:30:46.853 Firmware Slot Information 00:30:46.853 ========================= 00:30:46.853 Active slot: 1 00:30:46.853 Slot 1 Firmware Revision: 25.01 00:30:46.853 00:30:46.853 00:30:46.853 Commands Supported and Effects 00:30:46.853 ============================== 00:30:46.853 Admin Commands 00:30:46.853 -------------- 00:30:46.853 Get Log Page (02h): Supported 00:30:46.853 Identify (06h): Supported 00:30:46.853 Abort (08h): Supported 00:30:46.853 Set Features (09h): Supported 00:30:46.853 Get Features (0Ah): Supported 00:30:46.853 Asynchronous Event Request (0Ch): Supported 00:30:46.853 Keep Alive (18h): Supported 00:30:46.853 I/O Commands 00:30:46.853 ------------ 00:30:46.853 Flush (00h): Supported LBA-Change 00:30:46.853 Write (01h): Supported LBA-Change 00:30:46.853 Read (02h): Supported 00:30:46.853 Compare (05h): Supported 00:30:46.853 Write Zeroes (08h): Supported LBA-Change 00:30:46.853 Dataset Management (09h): Supported LBA-Change 00:30:46.853 Copy (19h): Supported LBA-Change 00:30:46.853 00:30:46.853 Error Log 00:30:46.853 ========= 00:30:46.853 00:30:46.853 Arbitration 00:30:46.853 =========== 00:30:46.853 Arbitration Burst: 1 00:30:46.853 00:30:46.853 Power Management 00:30:46.853 ================ 00:30:46.853 Number of Power States: 1 00:30:46.853 Current Power State: Power State #0 00:30:46.853 Power State #0: 00:30:46.853 Max Power: 0.00 W 00:30:46.854 Non-Operational State: Operational 00:30:46.854 Entry Latency: Not Reported 00:30:46.854 Exit Latency: Not Reported 00:30:46.854 Relative Read Throughput: 0 00:30:46.854 
Relative Read Latency: 0 00:30:46.854 Relative Write Throughput: 0 00:30:46.854 Relative Write Latency: 0 00:30:46.854 Idle Power: Not Reported 00:30:46.854 Active Power: Not Reported 00:30:46.854 Non-Operational Permissive Mode: Not Supported 00:30:46.854 00:30:46.854 Health Information 00:30:46.854 ================== 00:30:46.854 Critical Warnings: 00:30:46.854 Available Spare Space: OK 00:30:46.854 Temperature: OK 00:30:46.854 Device Reliability: OK 00:30:46.854 Read Only: No 00:30:46.854 Volatile Memory Backup: OK 00:30:46.854 Current Temperature: 0 Kelvin (-273 Celsius) 00:30:46.854 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:30:46.854 Available Spare: 0% 00:30:46.854 Available Spare Threshold: 0% 00:30:46.854 Life Percentage Used:[2024-11-18 11:59:12.720466] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:46.854 [2024-11-18 11:59:12.720508] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x615000015700) 00:30:46.854 [2024-11-18 11:59:12.720530] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.854 [2024-11-18 11:59:12.720580] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001bb80, cid 7, qid 0 00:30:46.854 [2024-11-18 11:59:12.720735] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:46.854 [2024-11-18 11:59:12.720758] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:46.854 [2024-11-18 11:59:12.720771] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:46.854 [2024-11-18 11:59:12.720790] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001bb80) on tqpair=0x615000015700 00:30:46.854 [2024-11-18 11:59:12.720864] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:30:46.854 [2024-11-18 11:59:12.720909] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:46.854 [2024-11-18 11:59:12.720931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.854 [2024-11-18 11:59:12.720946] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b280) on tqpair=0x615000015700 00:30:46.854 [2024-11-18 11:59:12.720961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.854 [2024-11-18 11:59:12.720973] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b400) on tqpair=0x615000015700 00:30:46.854 [2024-11-18 11:59:12.720987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.854 [2024-11-18 11:59:12.721000] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:46.854 [2024-11-18 11:59:12.721029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.854 [2024-11-18 11:59:12.721049] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:46.854 [2024-11-18 11:59:12.721063] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:46.854 [2024-11-18 11:59:12.721074] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:46.854 [2024-11-18 11:59:12.721093] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.854 [2024-11-18 11:59:12.721126] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:46.854 [2024-11-18 11:59:12.721271] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type 
= 5 00:30:46.854 [2024-11-18 11:59:12.721300] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:46.854 [2024-11-18 11:59:12.721313] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:46.854 [2024-11-18 11:59:12.721325] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:46.854 [2024-11-18 11:59:12.721347] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:46.854 [2024-11-18 11:59:12.721362] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:46.854 [2024-11-18 11:59:12.721373] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:46.854 [2024-11-18 11:59:12.721393] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.854 [2024-11-18 11:59:12.721439] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:46.854 [2024-11-18 11:59:12.721681] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:46.854 [2024-11-18 11:59:12.721704] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:46.854 [2024-11-18 11:59:12.721715] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:46.854 [2024-11-18 11:59:12.721727] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:46.854 [2024-11-18 11:59:12.721742] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:30:46.854 [2024-11-18 11:59:12.721756] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:30:46.854 [2024-11-18 11:59:12.721789] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:46.854 [2024-11-18 11:59:12.721806] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:46.854 [2024-11-18 11:59:12.721832] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:46.854 [2024-11-18 11:59:12.721853] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.854 [2024-11-18 11:59:12.721885] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:46.854 [2024-11-18 11:59:12.722020] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:46.854 [2024-11-18 11:59:12.722041] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:46.854 [2024-11-18 11:59:12.722053] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:46.854 [2024-11-18 11:59:12.722065] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:46.854 [2024-11-18 11:59:12.722093] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:46.854 [2024-11-18 11:59:12.722109] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:46.854 [2024-11-18 11:59:12.722120] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:46.854 [2024-11-18 11:59:12.722139] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.854 [2024-11-18 11:59:12.722170] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:46.854 [2024-11-18 11:59:12.722311] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:46.854 [2024-11-18 11:59:12.722332] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:46.855 [2024-11-18 11:59:12.722344] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:46.855 [2024-11-18 
11:59:12.722355] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:46.855 [2024-11-18 11:59:12.722382] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:46.855 [2024-11-18 11:59:12.722398] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:46.855 [2024-11-18 11:59:12.722409] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:46.855 [2024-11-18 11:59:12.722433] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.855 [2024-11-18 11:59:12.722466] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:46.855 [2024-11-18 11:59:12.726519] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:46.855 [2024-11-18 11:59:12.726543] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:46.855 [2024-11-18 11:59:12.726556] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:46.855 [2024-11-18 11:59:12.726567] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:46.855 [2024-11-18 11:59:12.726595] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:46.855 [2024-11-18 11:59:12.726610] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:46.855 [2024-11-18 11:59:12.726625] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:46.855 [2024-11-18 11:59:12.726645] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.855 [2024-11-18 11:59:12.726677] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:46.855 [2024-11-18 11:59:12.726835] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:46.855 [2024-11-18 11:59:12.726856] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:46.855 [2024-11-18 11:59:12.726868] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:46.855 [2024-11-18 11:59:12.726879] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:46.855 [2024-11-18 11:59:12.726901] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 5 milliseconds 00:30:47.113 0% 00:30:47.113 Data Units Read: 0 00:30:47.113 Data Units Written: 0 00:30:47.113 Host Read Commands: 0 00:30:47.113 Host Write Commands: 0 00:30:47.113 Controller Busy Time: 0 minutes 00:30:47.113 Power Cycles: 0 00:30:47.113 Power On Hours: 0 hours 00:30:47.113 Unsafe Shutdowns: 0 00:30:47.113 Unrecoverable Media Errors: 0 00:30:47.113 Lifetime Error Log Entries: 0 00:30:47.113 Warning Temperature Time: 0 minutes 00:30:47.113 Critical Temperature Time: 0 minutes 00:30:47.113 00:30:47.113 Number of Queues 00:30:47.113 ================ 00:30:47.113 Number of I/O Submission Queues: 127 00:30:47.113 Number of I/O Completion Queues: 127 00:30:47.113 00:30:47.113 Active Namespaces 00:30:47.113 ================= 00:30:47.113 Namespace ID:1 00:30:47.113 Error Recovery Timeout: Unlimited 00:30:47.113 Command Set Identifier: NVM (00h) 00:30:47.113 Deallocate: Supported 00:30:47.113 Deallocated/Unwritten Error: Not Supported 00:30:47.113 Deallocated Read Value: Unknown 00:30:47.113 Deallocate in Write Zeroes: Not Supported 00:30:47.113 Deallocated Guard Field: 0xFFFF 00:30:47.113 Flush: Supported 00:30:47.113 Reservation: Supported 00:30:47.113 Namespace Sharing Capabilities: Multiple Controllers 00:30:47.113 Size (in LBAs): 131072 (0GiB) 00:30:47.113 Capacity (in LBAs): 131072 (0GiB) 00:30:47.113 Utilization (in LBAs): 131072 (0GiB) 00:30:47.113 NGUID: 
ABCDEF0123456789ABCDEF0123456789 00:30:47.113 EUI64: ABCDEF0123456789 00:30:47.113 UUID: 86a1330c-6886-410c-b222-17baad355ffc 00:30:47.113 Thin Provisioning: Not Supported 00:30:47.113 Per-NS Atomic Units: Yes 00:30:47.113 Atomic Boundary Size (Normal): 0 00:30:47.113 Atomic Boundary Size (PFail): 0 00:30:47.113 Atomic Boundary Offset: 0 00:30:47.113 Maximum Single Source Range Length: 65535 00:30:47.113 Maximum Copy Length: 65535 00:30:47.113 Maximum Source Range Count: 1 00:30:47.113 NGUID/EUI64 Never Reused: No 00:30:47.113 Namespace Write Protected: No 00:30:47.113 Number of LBA Formats: 1 00:30:47.113 Current LBA Format: LBA Format #00 00:30:47.113 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:47.113 00:30:47.113 11:59:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:30:47.113 11:59:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:47.113 11:59:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:47.113 11:59:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:47.113 11:59:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:47.113 11:59:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:30:47.113 11:59:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:30:47.113 11:59:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:47.113 11:59:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:30:47.113 11:59:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:47.113 11:59:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:30:47.113 11:59:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:47.113 11:59:12 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:47.113 rmmod nvme_tcp 00:30:47.113 rmmod nvme_fabrics 00:30:47.113 rmmod nvme_keyring 00:30:47.113 11:59:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:47.113 11:59:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:30:47.113 11:59:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:30:47.113 11:59:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 3072183 ']' 00:30:47.113 11:59:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 3072183 00:30:47.113 11:59:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 3072183 ']' 00:30:47.113 11:59:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 3072183 00:30:47.113 11:59:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:30:47.113 11:59:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:47.113 11:59:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3072183 00:30:47.113 11:59:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:47.113 11:59:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:47.113 11:59:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3072183' 00:30:47.113 killing process with pid 3072183 00:30:47.113 11:59:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 3072183 00:30:47.113 11:59:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 3072183 00:30:48.490 11:59:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:48.490 11:59:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- 
# [[ tcp == \t\c\p ]] 00:30:48.490 11:59:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:48.490 11:59:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:30:48.490 11:59:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:30:48.490 11:59:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:48.490 11:59:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:30:48.490 11:59:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:48.490 11:59:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:48.490 11:59:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:48.490 11:59:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:48.490 11:59:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:50.393 11:59:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:50.393 00:30:50.393 real 0m7.735s 00:30:50.393 user 0m11.577s 00:30:50.393 sys 0m2.266s 00:30:50.393 11:59:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:50.393 11:59:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:50.393 ************************************ 00:30:50.393 END TEST nvmf_identify 00:30:50.393 ************************************ 00:30:50.393 11:59:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:30:50.393 11:59:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:50.393 11:59:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:30:50.394 11:59:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:50.394 ************************************ 00:30:50.394 START TEST nvmf_perf 00:30:50.394 ************************************ 00:30:50.394 11:59:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:30:50.652 * Looking for test storage... 00:30:50.652 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:50.652 11:59:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:50.652 11:59:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:30:50.652 11:59:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:50.652 11:59:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:50.652 11:59:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:50.652 11:59:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:50.652 11:59:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:50.652 11:59:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:30:50.652 11:59:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:30:50.652 11:59:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:30:50.652 11:59:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:30:50.652 11:59:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:30:50.652 11:59:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:30:50.652 11:59:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:30:50.652 11:59:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:30:50.652 11:59:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:30:50.652 11:59:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:30:50.652 11:59:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:50.652 11:59:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:50.652 11:59:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:30:50.652 11:59:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:30:50.652 11:59:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:50.652 11:59:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:30:50.652 11:59:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:30:50.652 11:59:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:30:50.652 11:59:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:30:50.652 11:59:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:50.652 11:59:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:30:50.652 11:59:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:30:50.652 11:59:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:50.652 11:59:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:50.652 11:59:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:30:50.652 11:59:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:50.652 11:59:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:50.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:30:50.652 --rc genhtml_branch_coverage=1 00:30:50.652 --rc genhtml_function_coverage=1 00:30:50.652 --rc genhtml_legend=1 00:30:50.652 --rc geninfo_all_blocks=1 00:30:50.652 --rc geninfo_unexecuted_blocks=1 00:30:50.652 00:30:50.652 ' 00:30:50.652 11:59:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:50.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:50.652 --rc genhtml_branch_coverage=1 00:30:50.652 --rc genhtml_function_coverage=1 00:30:50.652 --rc genhtml_legend=1 00:30:50.652 --rc geninfo_all_blocks=1 00:30:50.652 --rc geninfo_unexecuted_blocks=1 00:30:50.652 00:30:50.652 ' 00:30:50.652 11:59:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:50.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:50.652 --rc genhtml_branch_coverage=1 00:30:50.652 --rc genhtml_function_coverage=1 00:30:50.652 --rc genhtml_legend=1 00:30:50.652 --rc geninfo_all_blocks=1 00:30:50.652 --rc geninfo_unexecuted_blocks=1 00:30:50.652 00:30:50.652 ' 00:30:50.653 11:59:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:50.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:50.653 --rc genhtml_branch_coverage=1 00:30:50.653 --rc genhtml_function_coverage=1 00:30:50.653 --rc genhtml_legend=1 00:30:50.653 --rc geninfo_all_blocks=1 00:30:50.653 --rc geninfo_unexecuted_blocks=1 00:30:50.653 00:30:50.653 ' 00:30:50.653 11:59:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:50.653 11:59:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:30:50.653 11:59:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:50.653 11:59:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:50.653 11:59:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:30:50.653 11:59:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:50.653 11:59:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:50.653 11:59:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:50.653 11:59:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:50.653 11:59:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:50.653 11:59:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:50.653 11:59:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:50.653 11:59:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:50.653 11:59:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:50.653 11:59:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:50.653 11:59:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:50.653 11:59:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:50.653 11:59:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:50.653 11:59:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:50.653 11:59:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:30:50.653 11:59:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:50.653 11:59:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:50.653 11:59:16 
nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:50.653 11:59:16 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:50.653 11:59:16 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:50.653 11:59:16 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:50.653 11:59:16 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:30:50.653 11:59:16 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:50.653 11:59:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:30:50.653 11:59:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:50.653 11:59:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:50.653 11:59:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:50.653 11:59:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:50.653 11:59:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:50.653 11:59:16 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:50.653 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:50.653 11:59:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:50.653 11:59:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:50.653 11:59:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:50.653 11:59:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:30:50.653 11:59:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:30:50.653 11:59:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:50.653 11:59:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:30:50.653 11:59:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:50.653 11:59:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:50.653 11:59:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:50.653 11:59:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:50.653 11:59:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:50.653 11:59:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:50.653 11:59:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:50.653 11:59:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:50.653 11:59:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:50.653 11:59:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:50.653 11:59:16 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@309 -- # xtrace_disable 00:30:50.653 11:59:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:52.557 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:52.557 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:30:52.557 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:52.557 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:52.557 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:52.557 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:52.557 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:52.557 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:30:52.557 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:52.557 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:30:52.557 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:30:52.557 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:30:52.557 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:30:52.557 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:30:52.557 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:30:52.557 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:52.557 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:52.557 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:52.557 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:52.557 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:52.557 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:52.557 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:52.557 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:52.557 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:52.557 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:52.557 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:52.557 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:52.557 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:52.557 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:52.557 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:52.557 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:52.557 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:52.557 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:52.557 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:52.557 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:52.557 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:52.557 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:52.557 
11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:52.557 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:52.557 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:52.557 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:52.557 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:52.557 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:52.557 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:52.557 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:52.557 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:52.557 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:52.557 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:52.557 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:52.557 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:52.557 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:52.557 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:52.557 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:52.557 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:52.557 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:52.557 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:52.557 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up 
]] 00:30:52.557 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:52.557 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:52.557 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:52.557 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:52.557 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:52.557 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:52.557 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:52.557 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:52.558 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:52.558 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:52.558 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:52.558 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:52.558 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:52.558 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:52.558 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:52.558 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:52.558 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:30:52.558 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:52.558 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:52.558 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:52.558 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:52.558 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:52.558 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:52.558 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:52.558 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:52.558 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:52.558 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:52.558 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:52.558 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:52.558 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:52.558 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:52.558 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:52.558 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:52.558 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:52.558 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:52.816 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:52.816 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:52.816 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:52.816 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:52.816 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:52.816 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:52.816 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:52.816 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:52.816 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:52.816 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.369 ms 00:30:52.816 00:30:52.816 --- 10.0.0.2 ping statistics --- 00:30:52.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:52.816 rtt min/avg/max/mdev = 0.369/0.369/0.369/0.000 ms 00:30:52.816 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:52.816 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:52.816 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:30:52.816 00:30:52.816 --- 10.0.0.1 ping statistics --- 00:30:52.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:52.816 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:30:52.816 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:52.816 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:30:52.816 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:52.816 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:52.816 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:52.816 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:52.816 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:52.816 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:52.816 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:52.816 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:30:52.816 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:52.816 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:52.816 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:52.816 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=3074537 00:30:52.816 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:52.816 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 3074537 00:30:52.816 
11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 3074537 ']' 00:30:52.816 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:52.816 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:52.816 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:52.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:52.816 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:52.816 11:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:52.816 [2024-11-18 11:59:18.661474] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:30:52.816 [2024-11-18 11:59:18.661663] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:53.074 [2024-11-18 11:59:18.812158] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:53.074 [2024-11-18 11:59:18.939180] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:53.074 [2024-11-18 11:59:18.939262] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:53.074 [2024-11-18 11:59:18.939285] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:53.074 [2024-11-18 11:59:18.939309] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:53.074 [2024-11-18 11:59:18.939326] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:53.074 [2024-11-18 11:59:18.941741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:53.074 [2024-11-18 11:59:18.941765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:53.074 [2024-11-18 11:59:18.941828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:53.074 [2024-11-18 11:59:18.941833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:54.008 11:59:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:54.008 11:59:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:30:54.008 11:59:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:54.008 11:59:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:54.008 11:59:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:54.008 11:59:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:54.008 11:59:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:54.008 11:59:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:30:57.388 11:59:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:30:57.388 11:59:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:30:57.388 11:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:30:57.388 11:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:57.977 11:59:23 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:30:57.977 11:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:30:57.977 11:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:30:57.977 11:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:30:57.977 11:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:30:57.977 [2024-11-18 11:59:23.826213] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:57.977 11:59:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:58.236 11:59:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:30:58.236 11:59:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:58.804 11:59:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:30:58.804 11:59:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:30:58.804 11:59:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:59.062 [2024-11-18 11:59:24.928457] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:59.320 11:59:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 
4420 00:30:59.578 11:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:30:59.578 11:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:30:59.578 11:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:30:59.578 11:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:31:00.954 Initializing NVMe Controllers 00:31:00.954 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:31:00.954 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:31:00.954 Initialization complete. Launching workers. 00:31:00.954 ======================================================== 00:31:00.954 Latency(us) 00:31:00.954 Device Information : IOPS MiB/s Average min max 00:31:00.954 PCIE (0000:88:00.0) NSID 1 from core 0: 73535.96 287.25 434.35 44.88 7304.50 00:31:00.954 ======================================================== 00:31:00.954 Total : 73535.96 287.25 434.35 44.88 7304.50 00:31:00.954 00:31:00.954 11:59:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:02.332 Initializing NVMe Controllers 00:31:02.332 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:02.332 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:02.332 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:02.332 Initialization complete. Launching workers. 
00:31:02.332 ======================================================== 00:31:02.332 Latency(us) 00:31:02.332 Device Information : IOPS MiB/s Average min max 00:31:02.332 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 63.00 0.25 16278.76 199.48 45822.92 00:31:02.332 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 59.00 0.23 17345.28 4872.37 47911.87 00:31:02.332 ======================================================== 00:31:02.332 Total : 122.00 0.48 16794.53 199.48 47911.87 00:31:02.332 00:31:02.592 11:59:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:03.967 Initializing NVMe Controllers 00:31:03.967 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:03.967 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:03.967 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:03.967 Initialization complete. Launching workers. 
00:31:03.967 ======================================================== 00:31:03.967 Latency(us) 00:31:03.967 Device Information : IOPS MiB/s Average min max 00:31:03.967 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5440.84 21.25 5882.52 949.86 12068.53 00:31:03.967 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3785.89 14.79 8483.46 5615.06 17526.34 00:31:03.967 ======================================================== 00:31:03.967 Total : 9226.73 36.04 6949.73 949.86 17526.34 00:31:03.967 00:31:03.967 11:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:31:03.967 11:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:31:03.967 11:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:06.501 Initializing NVMe Controllers 00:31:06.501 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:06.501 Controller IO queue size 128, less than required. 00:31:06.501 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:06.501 Controller IO queue size 128, less than required. 00:31:06.501 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:06.501 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:06.501 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:06.501 Initialization complete. Launching workers. 
00:31:06.501 ======================================================== 00:31:06.501 Latency(us) 00:31:06.501 Device Information : IOPS MiB/s Average min max 00:31:06.501 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1318.78 329.69 103125.76 70144.64 319333.10 00:31:06.501 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 517.82 129.46 257260.08 137511.40 486970.56 00:31:06.501 ======================================================== 00:31:06.501 Total : 1836.60 459.15 146583.42 70144.64 486970.56 00:31:06.501 00:31:06.760 11:59:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:31:07.018 No valid NVMe controllers or AIO or URING devices found 00:31:07.018 Initializing NVMe Controllers 00:31:07.018 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:07.018 Controller IO queue size 128, less than required. 00:31:07.018 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:07.018 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:31:07.018 Controller IO queue size 128, less than required. 00:31:07.018 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:07.018 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:31:07.018 WARNING: Some requested NVMe devices were skipped 00:31:07.276 11:59:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:31:10.557 Initializing NVMe Controllers 00:31:10.557 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:10.557 Controller IO queue size 128, less than required. 00:31:10.557 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:10.557 Controller IO queue size 128, less than required. 00:31:10.557 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:10.557 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:10.557 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:10.557 Initialization complete. Launching workers. 
00:31:10.557 00:31:10.557 ==================== 00:31:10.557 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:31:10.557 TCP transport: 00:31:10.557 polls: 4227 00:31:10.557 idle_polls: 1904 00:31:10.557 sock_completions: 2323 00:31:10.557 nvme_completions: 4431 00:31:10.557 submitted_requests: 6636 00:31:10.557 queued_requests: 1 00:31:10.557 00:31:10.557 ==================== 00:31:10.557 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:31:10.557 TCP transport: 00:31:10.557 polls: 9161 00:31:10.557 idle_polls: 6504 00:31:10.557 sock_completions: 2657 00:31:10.557 nvme_completions: 5091 00:31:10.557 submitted_requests: 7670 00:31:10.557 queued_requests: 1 00:31:10.557 ======================================================== 00:31:10.557 Latency(us) 00:31:10.557 Device Information : IOPS MiB/s Average min max 00:31:10.558 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1107.38 276.85 122651.65 78812.21 415828.41 00:31:10.558 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1272.37 318.09 104241.29 57292.91 413097.34 00:31:10.558 ======================================================== 00:31:10.558 Total : 2379.75 594.94 112808.30 57292.91 415828.41 00:31:10.558 00:31:10.558 11:59:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:31:10.558 11:59:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:10.558 11:59:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:31:10.558 11:59:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:88:00.0 ']' 00:31:10.558 11:59:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:31:13.841 11:59:39 nvmf_tcp.nvmf_host.nvmf_perf -- 
host/perf.sh@72 -- # ls_guid=98a5b3f6-71f0-49c6-b13a-d8d95644fc0b 00:31:13.841 11:59:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 98a5b3f6-71f0-49c6-b13a-d8d95644fc0b 00:31:13.841 11:59:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=98a5b3f6-71f0-49c6-b13a-d8d95644fc0b 00:31:13.841 11:59:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:31:13.841 11:59:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:31:13.841 11:59:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:31:13.841 11:59:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:14.099 11:59:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:31:14.099 { 00:31:14.099 "uuid": "98a5b3f6-71f0-49c6-b13a-d8d95644fc0b", 00:31:14.099 "name": "lvs_0", 00:31:14.099 "base_bdev": "Nvme0n1", 00:31:14.099 "total_data_clusters": 238234, 00:31:14.099 "free_clusters": 238234, 00:31:14.099 "block_size": 512, 00:31:14.099 "cluster_size": 4194304 00:31:14.099 } 00:31:14.099 ]' 00:31:14.099 11:59:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="98a5b3f6-71f0-49c6-b13a-d8d95644fc0b") .free_clusters' 00:31:14.099 11:59:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=238234 00:31:14.099 11:59:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="98a5b3f6-71f0-49c6-b13a-d8d95644fc0b") .cluster_size' 00:31:14.099 11:59:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:31:14.099 11:59:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=952936 00:31:14.099 11:59:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 952936 
00:31:14.099 952936 00:31:14.099 11:59:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:31:14.099 11:59:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:31:14.099 11:59:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 98a5b3f6-71f0-49c6-b13a-d8d95644fc0b lbd_0 20480 00:31:14.666 11:59:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=e7e21ec6-dfbe-4d66-8c7c-7ce24306c0cc 00:31:14.666 11:59:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore e7e21ec6-dfbe-4d66-8c7c-7ce24306c0cc lvs_n_0 00:31:15.600 11:59:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=9ea2167c-85c2-42ee-9c57-14790043fe63 00:31:15.600 11:59:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 9ea2167c-85c2-42ee-9c57-14790043fe63 00:31:15.600 11:59:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=9ea2167c-85c2-42ee-9c57-14790043fe63 00:31:15.600 11:59:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:31:15.600 11:59:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:31:15.600 11:59:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:31:15.600 11:59:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:15.858 11:59:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:31:15.858 { 00:31:15.858 "uuid": "98a5b3f6-71f0-49c6-b13a-d8d95644fc0b", 00:31:15.858 "name": "lvs_0", 00:31:15.858 "base_bdev": "Nvme0n1", 00:31:15.858 "total_data_clusters": 238234, 00:31:15.858 "free_clusters": 233114, 00:31:15.858 "block_size": 512, 00:31:15.858 
"cluster_size": 4194304 00:31:15.858 }, 00:31:15.858 { 00:31:15.858 "uuid": "9ea2167c-85c2-42ee-9c57-14790043fe63", 00:31:15.858 "name": "lvs_n_0", 00:31:15.858 "base_bdev": "e7e21ec6-dfbe-4d66-8c7c-7ce24306c0cc", 00:31:15.858 "total_data_clusters": 5114, 00:31:15.858 "free_clusters": 5114, 00:31:15.858 "block_size": 512, 00:31:15.858 "cluster_size": 4194304 00:31:15.858 } 00:31:15.858 ]' 00:31:15.858 11:59:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="9ea2167c-85c2-42ee-9c57-14790043fe63") .free_clusters' 00:31:15.858 11:59:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=5114 00:31:15.858 11:59:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="9ea2167c-85c2-42ee-9c57-14790043fe63") .cluster_size' 00:31:15.858 11:59:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:31:15.858 11:59:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=20456 00:31:15.858 11:59:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 20456 00:31:15.858 20456 00:31:15.858 11:59:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:31:15.858 11:59:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 9ea2167c-85c2-42ee-9c57-14790043fe63 lbd_nest_0 20456 00:31:16.117 11:59:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=8db5e8ad-d806-4c10-89be-e137a656d20d 00:31:16.117 11:59:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:16.375 11:59:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:31:16.375 11:59:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 8db5e8ad-d806-4c10-89be-e137a656d20d 00:31:16.632 11:59:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:16.890 11:59:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:31:16.890 11:59:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:31:16.890 11:59:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:31:16.890 11:59:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:16.890 11:59:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:29.119 Initializing NVMe Controllers 00:31:29.119 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:29.119 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:29.119 Initialization complete. Launching workers. 
00:31:29.119 ======================================================== 00:31:29.119 Latency(us) 00:31:29.119 Device Information : IOPS MiB/s Average min max 00:31:29.119 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 47.80 0.02 20922.79 239.70 47883.95 00:31:29.119 ======================================================== 00:31:29.119 Total : 47.80 0.02 20922.79 239.70 47883.95 00:31:29.119 00:31:29.119 11:59:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:29.119 11:59:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:39.099 Initializing NVMe Controllers 00:31:39.099 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:39.099 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:39.099 Initialization complete. Launching workers. 
00:31:39.099 ======================================================== 00:31:39.099 Latency(us) 00:31:39.099 Device Information : IOPS MiB/s Average min max 00:31:39.099 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 67.50 8.44 14814.49 7016.53 54855.93 00:31:39.099 ======================================================== 00:31:39.099 Total : 67.50 8.44 14814.49 7016.53 54855.93 00:31:39.099 00:31:39.099 12:00:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:31:39.099 12:00:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:39.099 12:00:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:49.073 Initializing NVMe Controllers 00:31:49.073 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:49.073 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:49.073 Initialization complete. Launching workers. 
00:31:49.073 ======================================================== 00:31:49.073 Latency(us) 00:31:49.073 Device Information : IOPS MiB/s Average min max 00:31:49.073 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4766.74 2.33 6716.38 657.45 16121.50 00:31:49.073 ======================================================== 00:31:49.073 Total : 4766.74 2.33 6716.38 657.45 16121.50 00:31:49.073 00:31:49.073 12:00:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:49.073 12:00:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:59.057 Initializing NVMe Controllers 00:31:59.057 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:59.057 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:59.057 Initialization complete. Launching workers. 
00:31:59.057 ======================================================== 00:31:59.057 Latency(us) 00:31:59.057 Device Information : IOPS MiB/s Average min max 00:31:59.057 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3495.27 436.91 9156.15 1000.71 24104.09 00:31:59.057 ======================================================== 00:31:59.057 Total : 3495.27 436.91 9156.15 1000.71 24104.09 00:31:59.057 00:31:59.057 12:00:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:31:59.057 12:00:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:59.057 12:00:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:11.329 Initializing NVMe Controllers 00:32:11.329 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:11.329 Controller IO queue size 128, less than required. 00:32:11.329 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:11.329 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:11.329 Initialization complete. Launching workers. 
00:32:11.329 ======================================================== 00:32:11.329 Latency(us) 00:32:11.329 Device Information : IOPS MiB/s Average min max 00:32:11.329 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8493.16 4.15 15078.38 2078.09 33443.12 00:32:11.329 ======================================================== 00:32:11.329 Total : 8493.16 4.15 15078.38 2078.09 33443.12 00:32:11.329 00:32:11.329 12:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:11.329 12:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:21.304 Initializing NVMe Controllers 00:32:21.304 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:21.304 Controller IO queue size 128, less than required. 00:32:21.304 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:21.304 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:21.304 Initialization complete. Launching workers. 
00:32:21.304 ======================================================== 00:32:21.304 Latency(us) 00:32:21.304 Device Information : IOPS MiB/s Average min max 00:32:21.304 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1151.38 143.92 111721.66 15603.17 240466.94 00:32:21.304 ======================================================== 00:32:21.304 Total : 1151.38 143.92 111721.66 15603.17 240466.94 00:32:21.304 00:32:21.304 12:00:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:21.304 12:00:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 8db5e8ad-d806-4c10-89be-e137a656d20d 00:32:21.304 12:00:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:32:21.304 12:00:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e7e21ec6-dfbe-4d66-8c7c-7ce24306c0cc 00:32:21.562 12:00:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:32:21.820 12:00:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:32:21.820 12:00:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:32:21.820 12:00:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:21.820 12:00:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:32:21.820 12:00:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:21.820 12:00:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:32:21.820 12:00:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i 
in {1..20} 00:32:21.820 12:00:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:21.820 rmmod nvme_tcp 00:32:21.820 rmmod nvme_fabrics 00:32:21.820 rmmod nvme_keyring 00:32:22.078 12:00:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:22.078 12:00:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:32:22.078 12:00:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:32:22.078 12:00:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 3074537 ']' 00:32:22.078 12:00:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 3074537 00:32:22.078 12:00:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 3074537 ']' 00:32:22.078 12:00:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 3074537 00:32:22.078 12:00:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:32:22.078 12:00:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:22.078 12:00:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3074537 00:32:22.078 12:00:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:22.078 12:00:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:22.078 12:00:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3074537' 00:32:22.078 killing process with pid 3074537 00:32:22.078 12:00:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 3074537 00:32:22.078 12:00:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 3074537 00:32:24.612 12:00:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:24.612 12:00:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # 
[[ tcp == \t\c\p ]] 00:32:24.612 12:00:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:24.612 12:00:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:32:24.612 12:00:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:32:24.612 12:00:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:24.612 12:00:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:32:24.612 12:00:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:24.612 12:00:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:24.612 12:00:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:24.612 12:00:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:24.612 12:00:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:26.521 12:00:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:26.521 00:32:26.521 real 1m35.930s 00:32:26.521 user 5m53.459s 00:32:26.521 sys 0m16.315s 00:32:26.521 12:00:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:26.521 12:00:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:32:26.521 ************************************ 00:32:26.521 END TEST nvmf_perf 00:32:26.521 ************************************ 00:32:26.521 12:00:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:32:26.522 12:00:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:26.522 12:00:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:26.522 12:00:52 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@10 -- # set +x 00:32:26.522 ************************************ 00:32:26.522 START TEST nvmf_fio_host 00:32:26.522 ************************************ 00:32:26.522 12:00:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:32:26.522 * Looking for test storage... 00:32:26.522 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:26.522 12:00:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:26.522 12:00:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:32:26.522 12:00:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:26.522 12:00:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:26.522 12:00:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:26.522 12:00:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:26.522 12:00:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:26.522 12:00:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:32:26.522 12:00:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:32:26.522 12:00:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:32:26.522 12:00:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:32:26.522 12:00:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:32:26.522 12:00:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:32:26.522 12:00:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:32:26.522 12:00:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:26.522 12:00:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:32:26.522 12:00:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:32:26.522 12:00:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:26.522 12:00:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:26.522 12:00:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:32:26.522 12:00:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:32:26.522 12:00:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:26.522 12:00:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:32:26.522 12:00:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:32:26.522 12:00:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:32:26.522 12:00:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:32:26.522 12:00:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:26.522 12:00:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:32:26.522 12:00:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:32:26.522 12:00:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:26.522 12:00:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:26.522 12:00:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:32:26.522 12:00:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:26.522 12:00:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- 
# export 'LCOV_OPTS= 00:32:26.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:26.522 --rc genhtml_branch_coverage=1 00:32:26.522 --rc genhtml_function_coverage=1 00:32:26.522 --rc genhtml_legend=1 00:32:26.522 --rc geninfo_all_blocks=1 00:32:26.522 --rc geninfo_unexecuted_blocks=1 00:32:26.522 00:32:26.522 ' 00:32:26.522 12:00:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:26.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:26.522 --rc genhtml_branch_coverage=1 00:32:26.522 --rc genhtml_function_coverage=1 00:32:26.522 --rc genhtml_legend=1 00:32:26.522 --rc geninfo_all_blocks=1 00:32:26.522 --rc geninfo_unexecuted_blocks=1 00:32:26.522 00:32:26.522 ' 00:32:26.522 12:00:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:26.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:26.522 --rc genhtml_branch_coverage=1 00:32:26.522 --rc genhtml_function_coverage=1 00:32:26.522 --rc genhtml_legend=1 00:32:26.522 --rc geninfo_all_blocks=1 00:32:26.522 --rc geninfo_unexecuted_blocks=1 00:32:26.522 00:32:26.522 ' 00:32:26.522 12:00:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:26.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:26.522 --rc genhtml_branch_coverage=1 00:32:26.522 --rc genhtml_function_coverage=1 00:32:26.522 --rc genhtml_legend=1 00:32:26.522 --rc geninfo_all_blocks=1 00:32:26.522 --rc geninfo_unexecuted_blocks=1 00:32:26.522 00:32:26.522 ' 00:32:26.522 12:00:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:26.522 12:00:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:32:26.522 12:00:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:26.522 12:00:52 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:26.522 12:00:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:26.522 12:00:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:26.522 12:00:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:26.522 12:00:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:26.522 12:00:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:32:26.522 12:00:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:26.522 12:00:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:26.522 12:00:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:32:26.522 12:00:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:26.522 12:00:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:26.522 12:00:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:26.522 12:00:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:26.522 12:00:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:26.522 12:00:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:26.522 12:00:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:26.522 12:00:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:26.522 12:00:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:26.522 12:00:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:26.522 12:00:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:26.522 12:00:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:26.522 12:00:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:26.522 12:00:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:26.522 12:00:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:26.522 12:00:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:26.522 12:00:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:26.522 12:00:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:32:26.522 12:00:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:26.522 12:00:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:26.522 12:00:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:26.523 12:00:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:26.523 12:00:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:26.523 12:00:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:26.523 12:00:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:32:26.523 12:00:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:26.523 12:00:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:32:26.523 12:00:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:26.523 12:00:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:26.523 12:00:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:26.523 12:00:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:26.523 12:00:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:26.523 12:00:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:26.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:26.523 12:00:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:26.523 12:00:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:26.523 12:00:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:26.523 12:00:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:26.523 12:00:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:32:26.523 12:00:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:26.523 12:00:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:26.523 12:00:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:26.523 12:00:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:26.523 12:00:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:26.523 12:00:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:26.523 12:00:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:26.523 12:00:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:26.523 12:00:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:26.523 12:00:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:26.523 12:00:52 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:32:26.523 12:00:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.057 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:29.058 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:32:29.058 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:29.058 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:29.058 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:29.058 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:29.058 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:29.058 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:32:29.058 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:29.058 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:32:29.058 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:32:29.058 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:32:29.058 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:32:29.058 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:32:29.058 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:32:29.058 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:29.058 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:29.058 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:29.058 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:29.058 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:29.058 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:29.058 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:29.058 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:29.058 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:29.058 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:29.058 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:29.058 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:29.058 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:29.058 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:29.058 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:29.058 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:29.058 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:29.058 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:29.058 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:29.058 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 
0000:0a:00.0 (0x8086 - 0x159b)' 00:32:29.058 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:29.058 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:29.058 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:29.058 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:29.058 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:29.058 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:29.058 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:29.058 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:29.058 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:29.058 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:29.058 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:29.058 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:29.058 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:29.058 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:29.058 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:29.058 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:29.058 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:29.058 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:29.058 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:29.058 12:00:54 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:29.058 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:29.058 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:29.058 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:29.058 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:29.058 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:29.058 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:29.058 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:29.058 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:29.058 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:29.058 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:29.058 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:29.058 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:29.058 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:29.058 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:29.058 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:29.058 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:29.058 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:29.058 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
00:32:29.058 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:32:29.058 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:29.058 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:29.058 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:29.058 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:29.058 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:29.058 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:29.058 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:29.058 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:29.058 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:29.058 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:29.058 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:29.058 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:29.058 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:29.058 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:29.058 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:29.058 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:29.058 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:29.058 12:00:54 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:32:29.058 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:32:29.058 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:32:29.058 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:32:29.058 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:32:29.058 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:32:29.058 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:32:29.058 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:32:29.058 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:32:29.058 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:32:29.058 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms
00:32:29.058
00:32:29.058 --- 10.0.0.2 ping statistics ---
00:32:29.058 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:32:29.058 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms
00:32:29.058 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:32:29.058 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:32:29.058 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.096 ms
00:32:29.058
00:32:29.058 --- 10.0.0.1 ping statistics ---
00:32:29.058 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:32:29.058 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms
00:32:29.058 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:32:29.058 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0
00:32:29.058 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:32:29.058 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:32:29.058 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:32:29.058 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:32:29.058 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:32:29.058 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:32:29.059 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:32:29.059 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]]
00:32:29.059 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt
00:32:29.059 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable
00:32:29.059 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x
00:32:29.059 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=3087658
00:32:29.059 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:32:29.059 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- #
trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:29.059 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 3087658 00:32:29.059 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 3087658 ']' 00:32:29.059 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:29.059 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:29.059 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:29.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:29.059 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:29.059 12:00:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.059 [2024-11-18 12:00:54.579438] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:32:29.059 [2024-11-18 12:00:54.579588] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:29.059 [2024-11-18 12:00:54.718715] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:29.059 [2024-11-18 12:00:54.857672] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:29.059 [2024-11-18 12:00:54.857748] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:32:29.059 [2024-11-18 12:00:54.857774] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:29.059 [2024-11-18 12:00:54.857800] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:29.059 [2024-11-18 12:00:54.857819] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:29.059 [2024-11-18 12:00:54.860628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:29.059 [2024-11-18 12:00:54.860699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:29.059 [2024-11-18 12:00:54.860724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:29.059 [2024-11-18 12:00:54.860731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:29.994 12:00:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:29.994 12:00:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:32:29.995 12:00:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:29.995 [2024-11-18 12:00:55.825980] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:29.995 12:00:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:32:29.995 12:00:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:29.995 12:00:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.252 12:00:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:32:30.510 Malloc1 00:32:30.510 12:00:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:30.767 12:00:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:32:31.025 12:00:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:31.327 [2024-11-18 12:00:57.026289] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:31.327 12:00:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:31.584 12:00:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:32:31.585 12:00:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:31.585 12:00:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:31.585 12:00:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:31.585 12:00:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:31.585 12:00:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:31.585 12:00:57 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:31.585 12:00:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:32:31.585 12:00:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:31.585 12:00:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:31.585 12:00:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:31.585 12:00:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:32:31.585 12:00:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:31.585 12:00:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:32:31.585 12:00:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:32:31.585 12:00:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1351 -- # break 00:32:31.585 12:00:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:31.585 12:00:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:31.842 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:32:31.842 fio-3.35 00:32:31.842 Starting 1 thread 00:32:34.368 00:32:34.368 test: (groupid=0, jobs=1): err= 0: pid=3088138: Mon Nov 18 12:00:59 2024 00:32:34.368 read: 
IOPS=6475, BW=25.3MiB/s (26.5MB/s)(50.8MiB/2009msec) 00:32:34.368 slat (usec): min=3, max=125, avg= 3.67, stdev= 1.77 00:32:34.368 clat (usec): min=3462, max=18841, avg=10678.70, stdev=940.15 00:32:34.368 lat (usec): min=3490, max=18845, avg=10682.37, stdev=940.07 00:32:34.368 clat percentiles (usec): 00:32:34.368 | 1.00th=[ 8586], 5.00th=[ 9372], 10.00th=[ 9634], 20.00th=[ 9896], 00:32:34.368 | 30.00th=[10159], 40.00th=[10421], 50.00th=[10683], 60.00th=[10945], 00:32:34.368 | 70.00th=[11207], 80.00th=[11338], 90.00th=[11731], 95.00th=[12125], 00:32:34.368 | 99.00th=[12780], 99.50th=[13173], 99.90th=[16057], 99.95th=[17433], 00:32:34.368 | 99.99th=[18744] 00:32:34.368 bw ( KiB/s): min=24544, max=26512, per=99.92%, avg=25880.00, stdev=927.38, samples=4 00:32:34.368 iops : min= 6136, max= 6628, avg=6470.00, stdev=231.84, samples=4 00:32:34.369 write: IOPS=6482, BW=25.3MiB/s (26.6MB/s)(50.9MiB/2009msec); 0 zone resets 00:32:34.369 slat (usec): min=3, max=104, avg= 3.82, stdev= 1.47 00:32:34.369 clat (usec): min=1249, max=17225, avg=8942.15, stdev=783.83 00:32:34.369 lat (usec): min=1264, max=17229, avg=8945.97, stdev=783.82 00:32:34.369 clat percentiles (usec): 00:32:34.369 | 1.00th=[ 7242], 5.00th=[ 7767], 10.00th=[ 8029], 20.00th=[ 8356], 00:32:34.369 | 30.00th=[ 8586], 40.00th=[ 8717], 50.00th=[ 8979], 60.00th=[ 9110], 00:32:34.369 | 70.00th=[ 9372], 80.00th=[ 9503], 90.00th=[ 9765], 95.00th=[10028], 00:32:34.369 | 99.00th=[10552], 99.50th=[10945], 99.90th=[14615], 99.95th=[16057], 00:32:34.369 | 99.99th=[16319] 00:32:34.369 bw ( KiB/s): min=25728, max=26264, per=100.00%, avg=25932.00, stdev=249.63, samples=4 00:32:34.369 iops : min= 6432, max= 6566, avg=6483.00, stdev=62.41, samples=4 00:32:34.369 lat (msec) : 2=0.01%, 4=0.11%, 10=58.32%, 20=41.56% 00:32:34.369 cpu : usr=71.26%, sys=27.19%, ctx=63, majf=0, minf=1546 00:32:34.369 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:32:34.369 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:32:34.369 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:34.369 issued rwts: total=13009,13024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:34.369 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:34.369 00:32:34.369 Run status group 0 (all jobs): 00:32:34.369 READ: bw=25.3MiB/s (26.5MB/s), 25.3MiB/s-25.3MiB/s (26.5MB/s-26.5MB/s), io=50.8MiB (53.3MB), run=2009-2009msec 00:32:34.369 WRITE: bw=25.3MiB/s (26.6MB/s), 25.3MiB/s-25.3MiB/s (26.6MB/s-26.6MB/s), io=50.9MiB (53.3MB), run=2009-2009msec 00:32:34.626 ----------------------------------------------------- 00:32:34.626 Suppressions used: 00:32:34.626 count bytes template 00:32:34.626 1 57 /usr/src/fio/parse.c 00:32:34.626 1 8 libtcmalloc_minimal.so 00:32:34.626 ----------------------------------------------------- 00:32:34.626 00:32:34.626 12:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:32:34.626 12:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:32:34.626 12:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:34.626 12:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:34.626 12:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:34.626 12:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:34.626 12:01:00 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:32:34.626 12:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:34.626 12:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:34.626 12:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:34.626 12:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:32:34.626 12:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:34.626 12:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:32:34.626 12:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:32:34.626 12:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1351 -- # break 00:32:34.626 12:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:34.626 12:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:32:34.884 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:32:34.884 fio-3.35 00:32:34.884 Starting 1 thread 00:32:37.414 00:32:37.414 test: (groupid=0, jobs=1): err= 0: pid=3088586: Mon Nov 18 12:01:03 2024 00:32:37.414 read: IOPS=6177, BW=96.5MiB/s (101MB/s)(194MiB/2011msec) 00:32:37.414 slat (usec): min=3, max=145, avg= 5.18, stdev= 2.52 00:32:37.414 clat (usec): min=3038, max=22421, 
avg=11788.19, stdev=2562.26 00:32:37.414 lat (usec): min=3043, max=22426, avg=11793.36, stdev=2562.37 00:32:37.414 clat percentiles (usec): 00:32:37.414 | 1.00th=[ 6325], 5.00th=[ 7767], 10.00th=[ 8717], 20.00th=[ 9765], 00:32:37.414 | 30.00th=[10421], 40.00th=[10945], 50.00th=[11469], 60.00th=[12125], 00:32:37.414 | 70.00th=[12911], 80.00th=[13829], 90.00th=[15270], 95.00th=[16319], 00:32:37.414 | 99.00th=[18220], 99.50th=[19006], 99.90th=[21103], 99.95th=[21365], 00:32:37.414 | 99.99th=[22414] 00:32:37.414 bw ( KiB/s): min=40384, max=58368, per=49.70%, avg=49128.00, stdev=9369.22, samples=4 00:32:37.414 iops : min= 2524, max= 3648, avg=3070.50, stdev=585.58, samples=4 00:32:37.414 write: IOPS=3592, BW=56.1MiB/s (58.9MB/s)(101MiB/1798msec); 0 zone resets 00:32:37.414 slat (usec): min=32, max=259, avg=36.98, stdev= 7.71 00:32:37.414 clat (usec): min=7076, max=23366, avg=15943.75, stdev=2601.96 00:32:37.414 lat (usec): min=7109, max=23405, avg=15980.73, stdev=2602.40 00:32:37.414 clat percentiles (usec): 00:32:37.414 | 1.00th=[10290], 5.00th=[11863], 10.00th=[12649], 20.00th=[13566], 00:32:37.414 | 30.00th=[14615], 40.00th=[15401], 50.00th=[16057], 60.00th=[16581], 00:32:37.414 | 70.00th=[17171], 80.00th=[18220], 90.00th=[19268], 95.00th=[20579], 00:32:37.414 | 99.00th=[22152], 99.50th=[22414], 99.90th=[22676], 99.95th=[22938], 00:32:37.414 | 99.99th=[23462] 00:32:37.414 bw ( KiB/s): min=42112, max=60288, per=89.01%, avg=51160.00, stdev=9458.85, samples=4 00:32:37.414 iops : min= 2632, max= 3768, avg=3197.50, stdev=591.18, samples=4 00:32:37.414 lat (msec) : 4=0.13%, 10=15.18%, 20=82.24%, 50=2.45% 00:32:37.414 cpu : usr=77.06%, sys=21.54%, ctx=45, majf=0, minf=2117 00:32:37.414 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.3% 00:32:37.414 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.414 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:37.414 issued rwts: total=12423,6459,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:32:37.414 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:37.414 00:32:37.414 Run status group 0 (all jobs): 00:32:37.414 READ: bw=96.5MiB/s (101MB/s), 96.5MiB/s-96.5MiB/s (101MB/s-101MB/s), io=194MiB (204MB), run=2011-2011msec 00:32:37.414 WRITE: bw=56.1MiB/s (58.9MB/s), 56.1MiB/s-56.1MiB/s (58.9MB/s-58.9MB/s), io=101MiB (106MB), run=1798-1798msec 00:32:37.673 ----------------------------------------------------- 00:32:37.673 Suppressions used: 00:32:37.673 count bytes template 00:32:37.673 1 57 /usr/src/fio/parse.c 00:32:37.673 224 21504 /usr/src/fio/iolog.c 00:32:37.673 1 8 libtcmalloc_minimal.so 00:32:37.673 ----------------------------------------------------- 00:32:37.673 00:32:37.673 12:01:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:37.932 12:01:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:32:37.932 12:01:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:32:37.932 12:01:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:32:37.932 12:01:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # bdfs=() 00:32:37.932 12:01:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # local bdfs 00:32:37.932 12:01:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:32:37.932 12:01:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:32:37.932 12:01:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:32:37.932 12:01:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 
00:32:37.932 12:01:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:32:37.932 12:01:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 -i 10.0.0.2 00:32:41.220 Nvme0n1 00:32:41.220 12:01:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:32:44.507 12:01:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=fc07034e-c403-471d-b447-458be169a399 00:32:44.507 12:01:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb fc07034e-c403-471d-b447-458be169a399 00:32:44.507 12:01:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=fc07034e-c403-471d-b447-458be169a399 00:32:44.507 12:01:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:32:44.507 12:01:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:32:44.507 12:01:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:32:44.507 12:01:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:44.507 12:01:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:32:44.507 { 00:32:44.507 "uuid": "fc07034e-c403-471d-b447-458be169a399", 00:32:44.507 "name": "lvs_0", 00:32:44.507 "base_bdev": "Nvme0n1", 00:32:44.507 "total_data_clusters": 930, 00:32:44.507 "free_clusters": 930, 00:32:44.507 "block_size": 512, 00:32:44.507 "cluster_size": 1073741824 00:32:44.507 } 00:32:44.507 ]' 00:32:44.507 12:01:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | 
select(.uuid=="fc07034e-c403-471d-b447-458be169a399") .free_clusters' 00:32:44.507 12:01:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=930 00:32:44.507 12:01:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="fc07034e-c403-471d-b447-458be169a399") .cluster_size' 00:32:44.507 12:01:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=1073741824 00:32:44.507 12:01:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=952320 00:32:44.507 12:01:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 952320 00:32:44.507 952320 00:32:44.507 12:01:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:32:44.766 a54f47e6-e7d3-4acf-862d-8315100da243 00:32:44.766 12:01:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:32:45.024 12:01:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:32:45.282 12:01:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:32:45.541 12:01:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:45.541 12:01:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:45.541 12:01:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:45.541 12:01:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:45.541 12:01:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:45.541 12:01:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:45.541 12:01:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:32:45.541 12:01:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:45.541 12:01:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:45.541 12:01:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:45.541 12:01:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:32:45.541 12:01:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:45.541 12:01:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:32:45.541 12:01:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:32:45.541 12:01:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1351 -- # break 00:32:45.541 12:01:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 
00:32:45.541 12:01:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:45.799 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:32:45.799 fio-3.35 00:32:45.799 Starting 1 thread 00:32:48.331 00:32:48.331 test: (groupid=0, jobs=1): err= 0: pid=3089981: Mon Nov 18 12:01:14 2024 00:32:48.331 read: IOPS=4397, BW=17.2MiB/s (18.0MB/s)(34.5MiB/2011msec) 00:32:48.331 slat (usec): min=3, max=201, avg= 3.90, stdev= 3.28 00:32:48.331 clat (usec): min=1452, max=173197, avg=15720.64, stdev=13231.91 00:32:48.331 lat (usec): min=1456, max=173268, avg=15724.54, stdev=13232.48 00:32:48.331 clat percentiles (msec): 00:32:48.331 | 1.00th=[ 11], 5.00th=[ 13], 10.00th=[ 13], 20.00th=[ 14], 00:32:48.331 | 30.00th=[ 14], 40.00th=[ 15], 50.00th=[ 15], 60.00th=[ 15], 00:32:48.331 | 70.00th=[ 16], 80.00th=[ 16], 90.00th=[ 17], 95.00th=[ 17], 00:32:48.331 | 99.00th=[ 20], 99.50th=[ 157], 99.90th=[ 174], 99.95th=[ 174], 00:32:48.331 | 99.99th=[ 174] 00:32:48.331 bw ( KiB/s): min=12488, max=19648, per=99.87%, avg=17568.00, stdev=3413.04, samples=4 00:32:48.331 iops : min= 3122, max= 4912, avg=4392.00, stdev=853.26, samples=4 00:32:48.331 write: IOPS=4400, BW=17.2MiB/s (18.0MB/s)(34.6MiB/2011msec); 0 zone resets 00:32:48.331 slat (usec): min=3, max=135, avg= 4.05, stdev= 2.35 00:32:48.331 clat (usec): min=452, max=170425, avg=13177.13, stdev=12426.32 00:32:48.331 lat (usec): min=456, max=170435, avg=13181.17, stdev=12426.88 00:32:48.331 clat percentiles (msec): 00:32:48.331 | 1.00th=[ 9], 5.00th=[ 11], 10.00th=[ 11], 20.00th=[ 12], 00:32:48.331 | 30.00th=[ 12], 40.00th=[ 12], 50.00th=[ 13], 60.00th=[ 13], 00:32:48.331 | 70.00th=[ 13], 80.00th=[ 14], 90.00th=[ 14], 95.00th=[ 14], 00:32:48.331 | 99.00th=[ 17], 99.50th=[ 159], 99.90th=[ 
171], 99.95th=[ 171], 00:32:48.331 | 99.99th=[ 171] 00:32:48.331 bw ( KiB/s): min=13128, max=19144, per=99.79%, avg=17564.00, stdev=2960.54, samples=4 00:32:48.331 iops : min= 3282, max= 4786, avg=4391.00, stdev=740.13, samples=4 00:32:48.331 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:32:48.331 lat (msec) : 2=0.02%, 4=0.08%, 10=1.64%, 20=97.37%, 50=0.14% 00:32:48.331 lat (msec) : 250=0.72% 00:32:48.331 cpu : usr=67.41%, sys=31.24%, ctx=85, majf=0, minf=1544 00:32:48.331 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:32:48.331 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:48.331 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:48.331 issued rwts: total=8844,8849,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:48.331 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:48.331 00:32:48.331 Run status group 0 (all jobs): 00:32:48.331 READ: bw=17.2MiB/s (18.0MB/s), 17.2MiB/s-17.2MiB/s (18.0MB/s-18.0MB/s), io=34.5MiB (36.2MB), run=2011-2011msec 00:32:48.331 WRITE: bw=17.2MiB/s (18.0MB/s), 17.2MiB/s-17.2MiB/s (18.0MB/s-18.0MB/s), io=34.6MiB (36.2MB), run=2011-2011msec 00:32:48.589 ----------------------------------------------------- 00:32:48.589 Suppressions used: 00:32:48.589 count bytes template 00:32:48.589 1 58 /usr/src/fio/parse.c 00:32:48.589 1 8 libtcmalloc_minimal.so 00:32:48.589 ----------------------------------------------------- 00:32:48.589 00:32:48.589 12:01:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:32:48.847 12:01:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:32:50.221 12:01:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # 
ls_nested_guid=6ef6d8ee-0ab8-4ed6-a1e1-a29edc54f4a9 00:32:50.221 12:01:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 6ef6d8ee-0ab8-4ed6-a1e1-a29edc54f4a9 00:32:50.222 12:01:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=6ef6d8ee-0ab8-4ed6-a1e1-a29edc54f4a9 00:32:50.222 12:01:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:32:50.222 12:01:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:32:50.222 12:01:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:32:50.222 12:01:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:50.480 12:01:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:32:50.480 { 00:32:50.480 "uuid": "fc07034e-c403-471d-b447-458be169a399", 00:32:50.480 "name": "lvs_0", 00:32:50.480 "base_bdev": "Nvme0n1", 00:32:50.480 "total_data_clusters": 930, 00:32:50.480 "free_clusters": 0, 00:32:50.480 "block_size": 512, 00:32:50.480 "cluster_size": 1073741824 00:32:50.480 }, 00:32:50.480 { 00:32:50.480 "uuid": "6ef6d8ee-0ab8-4ed6-a1e1-a29edc54f4a9", 00:32:50.480 "name": "lvs_n_0", 00:32:50.480 "base_bdev": "a54f47e6-e7d3-4acf-862d-8315100da243", 00:32:50.480 "total_data_clusters": 237847, 00:32:50.480 "free_clusters": 237847, 00:32:50.480 "block_size": 512, 00:32:50.480 "cluster_size": 4194304 00:32:50.480 } 00:32:50.480 ]' 00:32:50.480 12:01:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="6ef6d8ee-0ab8-4ed6-a1e1-a29edc54f4a9") .free_clusters' 00:32:50.480 12:01:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=237847 00:32:50.480 12:01:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | 
select(.uuid=="6ef6d8ee-0ab8-4ed6-a1e1-a29edc54f4a9") .cluster_size' 00:32:50.480 12:01:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=4194304 00:32:50.480 12:01:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=951388 00:32:50.480 12:01:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 951388 00:32:50.480 951388 00:32:50.480 12:01:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:32:51.858 01bd1c54-adb7-48b8-bde6-cb1cb98ee764 00:32:51.859 12:01:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:32:51.859 12:01:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:32:52.116 12:01:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:32:52.398 12:01:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:52.398 12:01:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:52.398 12:01:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local 
fio_dir=/usr/src/fio 00:32:52.398 12:01:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:52.398 12:01:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:52.398 12:01:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:52.398 12:01:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:32:52.398 12:01:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:52.398 12:01:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:52.398 12:01:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:52.398 12:01:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:32:52.398 12:01:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:52.678 12:01:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:32:52.678 12:01:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:32:52.678 12:01:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1351 -- # break 00:32:52.678 12:01:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:52.678 12:01:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 
00:32:52.678 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:32:52.678 fio-3.35 00:32:52.678 Starting 1 thread 00:32:55.206 00:32:55.206 test: (groupid=0, jobs=1): err= 0: pid=3090841: Mon Nov 18 12:01:20 2024 00:32:55.206 read: IOPS=4394, BW=17.2MiB/s (18.0MB/s)(34.5MiB/2012msec) 00:32:55.206 slat (usec): min=3, max=148, avg= 3.85, stdev= 2.40 00:32:55.206 clat (usec): min=5980, max=26352, avg=15810.56, stdev=1549.51 00:32:55.206 lat (usec): min=5986, max=26356, avg=15814.41, stdev=1549.40 00:32:55.206 clat percentiles (usec): 00:32:55.206 | 1.00th=[12387], 5.00th=[13566], 10.00th=[13960], 20.00th=[14615], 00:32:55.206 | 30.00th=[15008], 40.00th=[15401], 50.00th=[15795], 60.00th=[16057], 00:32:55.206 | 70.00th=[16581], 80.00th=[17171], 90.00th=[17695], 95.00th=[18220], 00:32:55.206 | 99.00th=[19530], 99.50th=[20055], 99.90th=[23200], 99.95th=[25297], 00:32:55.206 | 99.99th=[26346] 00:32:55.207 bw ( KiB/s): min=16840, max=17976, per=99.88%, avg=17556.00, stdev=522.62, samples=4 00:32:55.207 iops : min= 4210, max= 4494, avg=4389.00, stdev=130.65, samples=4 00:32:55.207 write: IOPS=4395, BW=17.2MiB/s (18.0MB/s)(34.5MiB/2012msec); 0 zone resets 00:32:55.207 slat (usec): min=3, max=124, avg= 3.94, stdev= 1.82 00:32:55.207 clat (usec): min=2877, max=23224, avg=13162.07, stdev=1321.03 00:32:55.207 lat (usec): min=2884, max=23228, avg=13166.01, stdev=1320.99 00:32:55.207 clat percentiles (usec): 00:32:55.207 | 1.00th=[10290], 5.00th=[11338], 10.00th=[11731], 20.00th=[12125], 00:32:55.207 | 30.00th=[12518], 40.00th=[12780], 50.00th=[13173], 60.00th=[13435], 00:32:55.207 | 70.00th=[13829], 80.00th=[14091], 90.00th=[14615], 95.00th=[15139], 00:32:55.207 | 99.00th=[16450], 99.50th=[17433], 99.90th=[22676], 99.95th=[22938], 00:32:55.207 | 99.99th=[23200] 00:32:55.207 bw ( KiB/s): min=17024, max=17816, per=99.87%, avg=17560.00, stdev=369.22, samples=4 00:32:55.207 iops : min= 4256, max= 4454, avg=4390.00, stdev=92.30, 
samples=4 00:32:55.207 lat (msec) : 4=0.02%, 10=0.43%, 20=99.19%, 50=0.37% 00:32:55.207 cpu : usr=67.73%, sys=30.98%, ctx=78, majf=0, minf=1542 00:32:55.207 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:32:55.207 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:55.207 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:55.207 issued rwts: total=8841,8844,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:55.207 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:55.207 00:32:55.207 Run status group 0 (all jobs): 00:32:55.207 READ: bw=17.2MiB/s (18.0MB/s), 17.2MiB/s-17.2MiB/s (18.0MB/s-18.0MB/s), io=34.5MiB (36.2MB), run=2012-2012msec 00:32:55.207 WRITE: bw=17.2MiB/s (18.0MB/s), 17.2MiB/s-17.2MiB/s (18.0MB/s-18.0MB/s), io=34.5MiB (36.2MB), run=2012-2012msec 00:32:55.465 ----------------------------------------------------- 00:32:55.465 Suppressions used: 00:32:55.465 count bytes template 00:32:55.465 1 58 /usr/src/fio/parse.c 00:32:55.465 1 8 libtcmalloc_minimal.so 00:32:55.465 ----------------------------------------------------- 00:32:55.465 00:32:55.465 12:01:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:32:55.723 12:01:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:32:55.723 12:01:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:33:00.994 12:01:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:33:00.994 12:01:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:33:03.527 12:01:28 
nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:33:03.527 12:01:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:33:05.432 12:01:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:33:05.432 12:01:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:33:05.432 12:01:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:33:05.432 12:01:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:05.432 12:01:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:33:05.432 12:01:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:05.432 12:01:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:33:05.432 12:01:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:05.432 12:01:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:05.432 rmmod nvme_tcp 00:33:05.432 rmmod nvme_fabrics 00:33:05.432 rmmod nvme_keyring 00:33:05.432 12:01:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:05.432 12:01:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:33:05.432 12:01:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:33:05.432 12:01:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 3087658 ']' 00:33:05.432 12:01:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 3087658 00:33:05.432 12:01:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 3087658 ']' 00:33:05.432 12:01:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@958 -- # kill -0 3087658 00:33:05.432 12:01:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:33:05.432 12:01:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:05.432 12:01:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3087658 00:33:05.432 12:01:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:05.432 12:01:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:05.432 12:01:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3087658' 00:33:05.432 killing process with pid 3087658 00:33:05.432 12:01:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 3087658 00:33:05.432 12:01:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 3087658 00:33:06.808 12:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:06.808 12:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:06.808 12:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:06.808 12:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:33:06.808 12:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:33:06.808 12:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:06.808 12:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:33:06.808 12:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:06.808 12:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:06.808 12:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:06.808 12:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:06.808 12:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:09.343 12:01:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:09.343 00:33:09.343 real 0m42.408s 00:33:09.343 user 2m42.510s 00:33:09.343 sys 0m8.275s 00:33:09.343 12:01:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:09.343 12:01:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.343 ************************************ 00:33:09.343 END TEST nvmf_fio_host 00:33:09.343 ************************************ 00:33:09.343 12:01:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:33:09.343 12:01:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:09.343 12:01:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:09.343 12:01:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.343 ************************************ 00:33:09.343 START TEST nvmf_failover 00:33:09.343 ************************************ 00:33:09.343 12:01:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:33:09.343 * Looking for test storage... 
00:33:09.343 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:09.343 12:01:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:09.343 12:01:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:33:09.343 12:01:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:09.343 12:01:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:09.343 12:01:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:09.343 12:01:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:09.343 12:01:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:09.343 12:01:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:33:09.344 12:01:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:33:09.344 12:01:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:33:09.344 12:01:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:33:09.344 12:01:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:33:09.344 12:01:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:33:09.344 12:01:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:33:09.344 12:01:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:09.344 12:01:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:33:09.344 12:01:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:33:09.344 12:01:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:09.344 12:01:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:09.344 12:01:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:33:09.344 12:01:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:33:09.344 12:01:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:09.344 12:01:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:33:09.344 12:01:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:33:09.344 12:01:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:33:09.344 12:01:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:33:09.344 12:01:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:09.344 12:01:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:33:09.344 12:01:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:33:09.344 12:01:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:09.344 12:01:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:09.344 12:01:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:33:09.344 12:01:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:09.344 12:01:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:09.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:09.344 --rc genhtml_branch_coverage=1 00:33:09.344 --rc genhtml_function_coverage=1 00:33:09.344 --rc genhtml_legend=1 00:33:09.344 --rc geninfo_all_blocks=1 00:33:09.344 --rc geninfo_unexecuted_blocks=1 00:33:09.344 00:33:09.344 ' 00:33:09.344 12:01:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- 
# LCOV_OPTS=' 00:33:09.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:09.344 --rc genhtml_branch_coverage=1 00:33:09.344 --rc genhtml_function_coverage=1 00:33:09.344 --rc genhtml_legend=1 00:33:09.344 --rc geninfo_all_blocks=1 00:33:09.344 --rc geninfo_unexecuted_blocks=1 00:33:09.344 00:33:09.344 ' 00:33:09.344 12:01:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:09.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:09.344 --rc genhtml_branch_coverage=1 00:33:09.344 --rc genhtml_function_coverage=1 00:33:09.344 --rc genhtml_legend=1 00:33:09.344 --rc geninfo_all_blocks=1 00:33:09.344 --rc geninfo_unexecuted_blocks=1 00:33:09.344 00:33:09.344 ' 00:33:09.344 12:01:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:09.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:09.344 --rc genhtml_branch_coverage=1 00:33:09.344 --rc genhtml_function_coverage=1 00:33:09.344 --rc genhtml_legend=1 00:33:09.344 --rc geninfo_all_blocks=1 00:33:09.344 --rc geninfo_unexecuted_blocks=1 00:33:09.344 00:33:09.344 ' 00:33:09.344 12:01:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:09.344 12:01:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:33:09.344 12:01:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:09.344 12:01:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:09.344 12:01:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:09.344 12:01:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:09.344 12:01:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:09.344 12:01:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:33:09.344 12:01:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:09.344 12:01:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:09.344 12:01:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:09.344 12:01:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:09.344 12:01:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:09.344 12:01:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:09.344 12:01:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:09.344 12:01:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:09.344 12:01:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:09.344 12:01:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:09.344 12:01:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:09.344 12:01:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:33:09.344 12:01:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:09.344 12:01:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:09.344 12:01:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:09.344 12:01:34 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.344 12:01:34 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.344 12:01:34 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.344 12:01:34 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 
-- # export PATH 00:33:09.344 12:01:34 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.344 12:01:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:33:09.344 12:01:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:09.344 12:01:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:09.344 12:01:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:09.344 12:01:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:09.344 12:01:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:09.344 12:01:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:09.344 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:09.344 12:01:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:09.344 12:01:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:09.344 12:01:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:09.344 12:01:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:09.344 12:01:34 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:09.344 12:01:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:09.344 12:01:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:09.344 12:01:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:33:09.344 12:01:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:09.344 12:01:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:09.344 12:01:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:09.344 12:01:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:09.344 12:01:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:09.344 12:01:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:09.344 12:01:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:09.345 12:01:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:09.345 12:01:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:09.345 12:01:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:09.345 12:01:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:33:09.345 12:01:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:11.246 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:11.246 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:33:11.246 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:33:11.246 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:11.246 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:11.246 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:11.246 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:11.246 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:33:11.246 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:11.246 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:33:11.246 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:33:11.246 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:33:11.246 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:33:11.246 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:33:11.246 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:33:11.246 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:11.246 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:11.247 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:11.247 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:11.247 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:11.247 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:11.247 12:01:36 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:11.247 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:11.247 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:11.247 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:11.247 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:11.247 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:11.247 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:11.247 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:11.247 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:11.247 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:11.247 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:11.247 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:11.247 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:11.247 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:11.247 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:11.247 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:11.247 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:11.247 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:11.247 12:01:36 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:11.247 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:11.247 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:11.247 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:11.247 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:11.247 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:11.247 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:11.247 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:11.247 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:11.247 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:11.247 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:11.247 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:11.247 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:11.247 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:11.247 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:11.247 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:11.247 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:11.247 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:11.247 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:11.247 12:01:36 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:11.247 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:11.247 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:11.247 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:11.247 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:11.247 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:11.247 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:11.247 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:11.247 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:11.247 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:11.247 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:11.247 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:11.247 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:11.247 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:11.247 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:11.247 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:33:11.247 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:11.247 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:11.247 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:11.247 12:01:36 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:11.247 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:11.247 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:11.247 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:11.247 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:11.247 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:11.247 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:11.247 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:11.247 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:11.247 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:11.247 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:11.247 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:11.247 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:11.247 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:11.247 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:11.247 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:11.247 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:11.247 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover 
-- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:11.247 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:11.247 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:11.247 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:11.247 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:11.247 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:11.247 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:11.247 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.291 ms 00:33:11.247 00:33:11.247 --- 10.0.0.2 ping statistics --- 00:33:11.247 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:11.247 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:33:11.247 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:11.247 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:11.247 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.174 ms 00:33:11.247 00:33:11.247 --- 10.0.0.1 ping statistics --- 00:33:11.247 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:11.247 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:33:11.247 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:11.247 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:33:11.247 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:11.247 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:11.247 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:11.247 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:11.247 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:11.247 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:11.247 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:11.247 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:33:11.247 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:11.247 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:11.247 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:11.247 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=3094341 00:33:11.247 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:33:11.247 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover 
-- nvmf/common.sh@510 -- # waitforlisten 3094341 00:33:11.247 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 3094341 ']' 00:33:11.247 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:11.247 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:11.247 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:11.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:11.248 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:11.248 12:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:11.248 [2024-11-18 12:01:37.071358] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:33:11.248 [2024-11-18 12:01:37.071527] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:11.507 [2024-11-18 12:01:37.214280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:11.507 [2024-11-18 12:01:37.337180] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:11.507 [2024-11-18 12:01:37.337254] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:11.507 [2024-11-18 12:01:37.337274] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:11.507 [2024-11-18 12:01:37.337293] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:33:11.507 [2024-11-18 12:01:37.337309] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:11.507 [2024-11-18 12:01:37.341527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:11.507 [2024-11-18 12:01:37.341595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:11.507 [2024-11-18 12:01:37.341596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:12.441 12:01:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:12.441 12:01:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:33:12.441 12:01:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:12.441 12:01:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:12.441 12:01:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:12.441 12:01:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:12.441 12:01:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:12.700 [2024-11-18 12:01:38.379541] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:12.700 12:01:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:33:12.958 Malloc0 00:33:12.958 12:01:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:13.216 12:01:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:13.474 12:01:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:13.733 [2024-11-18 12:01:39.552186] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:13.733 12:01:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:13.990 [2024-11-18 12:01:39.820974] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:13.990 12:01:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:33:14.248 [2024-11-18 12:01:40.094006] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:33:14.248 12:01:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3094653 00:33:14.248 12:01:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:33:14.248 12:01:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:14.248 12:01:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3094653 /var/tmp/bdevperf.sock 00:33:14.248 12:01:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 
-- # '[' -z 3094653 ']' 00:33:14.248 12:01:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:14.248 12:01:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:14.248 12:01:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:14.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:14.248 12:01:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:14.248 12:01:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:15.626 12:01:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:15.626 12:01:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:33:15.626 12:01:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:33:15.885 NVMe0n1 00:33:15.885 12:01:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:33:16.452 00:33:16.452 12:01:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3094913 00:33:16.452 12:01:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:16.452 12:01:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 
00:33:17.387 12:01:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:17.645 [2024-11-18 12:01:43.321568] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:17.645 [2024-11-18 12:01:43.321656] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:17.645 [2024-11-18 12:01:43.321689] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:17.645 [2024-11-18 12:01:43.321708] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:17.645 [2024-11-18 12:01:43.321726] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:17.645 [2024-11-18 12:01:43.321744] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:17.645 [2024-11-18 12:01:43.321762] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:17.645 [2024-11-18 12:01:43.321780] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:17.645 [2024-11-18 12:01:43.321798] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:17.645 [2024-11-18 12:01:43.321826] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:17.645 [2024-11-18 12:01:43.321845] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:17.645 [2024-11-18 12:01:43.321863] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:17.645 [2024-11-18 12:01:43.321881] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:17.646 [2024-11-18 12:01:43.321898] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:17.646 [2024-11-18 12:01:43.321916] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:17.646 [2024-11-18 12:01:43.321933] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:17.646 [2024-11-18 12:01:43.321951] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:17.646 [2024-11-18 12:01:43.321969] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:17.646 [2024-11-18 12:01:43.321986] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:17.646 [2024-11-18 12:01:43.322003] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:17.646 [2024-11-18 12:01:43.322020] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:17.646 [2024-11-18 12:01:43.322038] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:17.646 
[2024-11-18 12:01:43.322055] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:17.646 [2024-11-18 12:01:43.322072] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:17.646 [2024-11-18 12:01:43.322089] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:17.646 [2024-11-18 12:01:43.322107] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:17.646 [2024-11-18 12:01:43.322125] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:17.646 [2024-11-18 12:01:43.322142] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:17.646 [2024-11-18 12:01:43.322161] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:17.646 [2024-11-18 12:01:43.322178] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:17.646 [2024-11-18 12:01:43.322195] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:17.646 [2024-11-18 12:01:43.322213] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:17.646 [2024-11-18 12:01:43.322231] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:17.646 [2024-11-18 12:01:43.322249] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the 
state(6) to be set 00:33:17.646 [2024-11-18 12:01:43.322266] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:17.646 [2024-11-18 12:01:43.322303] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:17.646 [2024-11-18 12:01:43.322320] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:17.646 [2024-11-18 12:01:43.322337] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:17.646 [2024-11-18 12:01:43.322354] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:17.646 [2024-11-18 12:01:43.322371] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:17.646 12:01:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:33:20.934 12:01:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:33:20.934 00:33:21.192 12:01:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:21.451 12:01:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:33:24.739 12:01:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:24.739 
[2024-11-18 12:01:50.385198] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:24.739 12:01:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:33:25.674 12:01:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:33:25.933 [2024-11-18 12:01:51.687898] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:25.933 [2024-11-18 12:01:51.687981] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:25.933 [2024-11-18 12:01:51.688001] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:25.933 [2024-11-18 12:01:51.688034] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:25.933 [2024-11-18 12:01:51.688057] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:25.933 [2024-11-18 12:01:51.688090] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:25.933 [2024-11-18 12:01:51.688115] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:25.933 [2024-11-18 12:01:51.688133] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:25.933 [2024-11-18 12:01:51.688150] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:25.933 [2024-11-18 
12:01:51.688167] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:25.933 [2024-11-18 12:01:51.688183] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:25.933 [2024-11-18 12:01:51.688201] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:25.933 [2024-11-18 12:01:51.688219] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:25.933 [2024-11-18 12:01:51.688247] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:25.933 [2024-11-18 12:01:51.688265] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:25.933 12:01:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 3094913 00:33:32.502 { 00:33:32.502 "results": [ 00:33:32.502 { 00:33:32.502 "job": "NVMe0n1", 00:33:32.502 "core_mask": "0x1", 00:33:32.502 "workload": "verify", 00:33:32.502 "status": "finished", 00:33:32.502 "verify_range": { 00:33:32.502 "start": 0, 00:33:32.502 "length": 16384 00:33:32.502 }, 00:33:32.502 "queue_depth": 128, 00:33:32.502 "io_size": 4096, 00:33:32.502 "runtime": 15.01084, 00:33:32.502 "iops": 6080.472511864759, 00:33:32.502 "mibps": 23.751845749471716, 00:33:32.502 "io_failed": 8044, 00:33:32.502 "io_timeout": 0, 00:33:32.502 "avg_latency_us": 19307.25492864412, 00:33:32.502 "min_latency_us": 1098.3348148148148, 00:33:32.502 "max_latency_us": 21068.61037037037 00:33:32.502 } 00:33:32.502 ], 00:33:32.502 "core_count": 1 00:33:32.502 } 00:33:32.502 12:01:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 3094653 00:33:32.502 12:01:57 nvmf_tcp.nvmf_host.nvmf_failover 
-- common/autotest_common.sh@954 -- # '[' -z 3094653 ']' 00:33:32.502 12:01:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3094653 00:33:32.502 12:01:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:33:32.502 12:01:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:32.502 12:01:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3094653 00:33:32.502 12:01:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:32.502 12:01:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:32.502 12:01:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3094653' 00:33:32.502 killing process with pid 3094653 00:33:32.502 12:01:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3094653 00:33:32.502 12:01:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3094653 00:33:32.502 12:01:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:32.502 [2024-11-18 12:01:40.202142] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:33:32.502 [2024-11-18 12:01:40.202303] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3094653 ] 00:33:32.502 [2024-11-18 12:01:40.351468] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:32.502 [2024-11-18 12:01:40.481549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:32.502 Running I/O for 15 seconds... 
00:33:32.502 6170.00 IOPS, 24.10 MiB/s [2024-11-18T11:01:58.387Z] [2024-11-18 12:01:43.323314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:57040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.502 [2024-11-18 12:01:43.323380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.502 [2024-11-18 12:01:43.323425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:57048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.502 [2024-11-18 12:01:43.323453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.502 [2024-11-18 12:01:43.323480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:57056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.502 [2024-11-18 12:01:43.323514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.502 [2024-11-18 12:01:43.323540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:57064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.502 [2024-11-18 12:01:43.323563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.502 [2024-11-18 12:01:43.323587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:57072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.502 [2024-11-18 12:01:43.323610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.502 [2024-11-18 12:01:43.323635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:57080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.502 
[2024-11-18 12:01:43.323658 - 12:01:43.329462] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: ~100 outstanding READ/WRITE commands on sqid:1 nsid:1 (lba:56576-57592, len:8 each) completed with ABORTED - SQ DELETION (00/08) qid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.502-00:33:32.505 (per-command NOTICE pairs, identical except cid/lba, condensed)
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.505
[2024-11-18 12:01:43.329512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:57016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.505
[2024-11-18 12:01:43.329535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.505
[2024-11-18 12:01:43.329559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:57024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.505
[2024-11-18 12:01:43.329582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.505
[2024-11-18 12:01:43.329610] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2f00 is same with the state(6) to be set 00:33:32.505
[2024-11-18 12:01:43.329639] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:32.505
[2024-11-18 12:01:43.329659] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:32.505
[2024-11-18 12:01:43.329678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:57032 len:8 PRP1 0x0 PRP2 0x0 00:33:32.505
[2024-11-18 12:01:43.329700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.505
[2024-11-18 12:01:43.330000] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:33:32.505
[2024-11-18 12:01:43.330082] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:32.505
[2024-11-18 12:01:43.330111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.505
[2024-11-18 12:01:43.330136] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:32.505
[2024-11-18 12:01:43.330157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.505
[2024-11-18 12:01:43.330179] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:32.505
[2024-11-18 12:01:43.330199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.505
[2024-11-18 12:01:43.330220] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:32.505
[2024-11-18 12:01:43.330240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.505
[2024-11-18 12:01:43.330260] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:33:32.505
[2024-11-18 12:01:43.334125] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:33:32.505
[2024-11-18 12:01:43.334186] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2000 (9): Bad file descriptor 00:33:32.505
[2024-11-18 12:01:43.368300] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:33:32.505
5997.50 IOPS, 23.43 MiB/s [2024-11-18T11:01:58.390Z] 5990.67 IOPS, 23.40 MiB/s [2024-11-18T11:01:58.390Z] 6028.50 IOPS, 23.55 MiB/s [2024-11-18T11:01:58.390Z]
[2024-11-18 12:01:47.094946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:116416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.505
[2024-11-18 12:01:47.095046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.505
[2024-11-18 12:01:47.095122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:116424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.505
[2024-11-18 12:01:47.095179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.505
[2024-11-18 12:01:47.095207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:116432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.505
[2024-11-18 12:01:47.095244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.505
[2024-11-18 12:01:47.095268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:116440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.505
[2024-11-18 12:01:47.095290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.505
[2024-11-18 12:01:47.095313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:116448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.506
[2024-11-18 12:01:47.095335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.506
[2024-11-18 12:01:47.095358] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:116456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.506
[2024-11-18 12:01:47.095380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.506
[... repeated WRITE (lba 116464-116520) and READ (lba 115520-116072) command prints with matching ABORTED - SQ DELETION completions omitted ...]
[2024-11-18 12:01:47.099098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.508
[2024-11-18 12:01:47.099121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:116080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:32.508 [2024-11-18 12:01:47.099143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.508 [2024-11-18 12:01:47.099165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:116088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.508 [2024-11-18 12:01:47.099186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.508 [2024-11-18 12:01:47.099208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:116096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.508 [2024-11-18 12:01:47.099229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.508 [2024-11-18 12:01:47.099252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:116104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.508 [2024-11-18 12:01:47.099273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.508 [2024-11-18 12:01:47.099295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:116112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.508 [2024-11-18 12:01:47.099316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.508 [2024-11-18 12:01:47.099339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:116120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.508 [2024-11-18 12:01:47.099364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.508 [2024-11-18 12:01:47.099388] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:116128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.508 [2024-11-18 12:01:47.099409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.508 [2024-11-18 12:01:47.099431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:116136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.508 [2024-11-18 12:01:47.099452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.508 [2024-11-18 12:01:47.099474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:116144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.508 [2024-11-18 12:01:47.099520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.508 [2024-11-18 12:01:47.099560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:116152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.508 [2024-11-18 12:01:47.099583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.508 [2024-11-18 12:01:47.099608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:116160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.508 [2024-11-18 12:01:47.099630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.508 [2024-11-18 12:01:47.099655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:116168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.508 [2024-11-18 12:01:47.099677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.508 [2024-11-18 12:01:47.099701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:116176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.508 [2024-11-18 12:01:47.099724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.508 [2024-11-18 12:01:47.099748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:116184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.508 [2024-11-18 12:01:47.099772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.508 [2024-11-18 12:01:47.099797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:116192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.508 [2024-11-18 12:01:47.099819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.508 [2024-11-18 12:01:47.099859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:116200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.508 [2024-11-18 12:01:47.099882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.508 [2024-11-18 12:01:47.099906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:116208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.508 [2024-11-18 12:01:47.099928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.508 [2024-11-18 12:01:47.099952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:116216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:33:32.508 [2024-11-18 12:01:47.099973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.508 [2024-11-18 12:01:47.100001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:116224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.508 [2024-11-18 12:01:47.100024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.508 [2024-11-18 12:01:47.100048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:116232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.508 [2024-11-18 12:01:47.100069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.508 [2024-11-18 12:01:47.100093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:116240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.508 [2024-11-18 12:01:47.100115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.508 [2024-11-18 12:01:47.100138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:116248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.508 [2024-11-18 12:01:47.100160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.508 [2024-11-18 12:01:47.100184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:116256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.508 [2024-11-18 12:01:47.100205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.508 [2024-11-18 12:01:47.100229] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:116264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.508 [2024-11-18 12:01:47.100251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.508 [2024-11-18 12:01:47.100274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:116272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.508 [2024-11-18 12:01:47.100296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.508 [2024-11-18 12:01:47.100320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:116280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.508 [2024-11-18 12:01:47.100340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.508 [2024-11-18 12:01:47.100364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:116528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.508 [2024-11-18 12:01:47.100387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.508 [2024-11-18 12:01:47.100410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:116536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.508 [2024-11-18 12:01:47.100432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.508 [2024-11-18 12:01:47.100456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:116288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.508 [2024-11-18 12:01:47.100501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.508 [2024-11-18 12:01:47.100530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:116296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.508 [2024-11-18 12:01:47.100553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.508 [2024-11-18 12:01:47.100577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:116304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.508 [2024-11-18 12:01:47.100605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.508 [2024-11-18 12:01:47.100630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:116312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.508 [2024-11-18 12:01:47.100652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.508 [2024-11-18 12:01:47.100677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:116320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.509 [2024-11-18 12:01:47.100700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.509 [2024-11-18 12:01:47.100724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:116328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.509 [2024-11-18 12:01:47.100746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.509 [2024-11-18 12:01:47.100770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:116336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:33:32.509 [2024-11-18 12:01:47.100793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.509 [2024-11-18 12:01:47.100833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:116344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.509 [2024-11-18 12:01:47.100855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.509 [2024-11-18 12:01:47.100878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:116352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.509 [2024-11-18 12:01:47.100899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.509 [2024-11-18 12:01:47.100923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:116360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.509 [2024-11-18 12:01:47.100945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.509 [2024-11-18 12:01:47.100968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:116368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.509 [2024-11-18 12:01:47.100989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.509 [2024-11-18 12:01:47.101013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:116376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.509 [2024-11-18 12:01:47.101034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.509 [2024-11-18 12:01:47.101058] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:116384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.509 [2024-11-18 12:01:47.101079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.509 [2024-11-18 12:01:47.101103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:116392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.509 [2024-11-18 12:01:47.101125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.509 [2024-11-18 12:01:47.101148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:116400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.509 [2024-11-18 12:01:47.101170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.509 [2024-11-18 12:01:47.101197] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f3180 is same with the state(6) to be set 00:33:32.509 [2024-11-18 12:01:47.101224] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:32.509 [2024-11-18 12:01:47.101243] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:32.509 [2024-11-18 12:01:47.101261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:116408 len:8 PRP1 0x0 PRP2 0x0 00:33:32.509 [2024-11-18 12:01:47.101295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.509 [2024-11-18 12:01:47.101596] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:33:32.509 [2024-11-18 12:01:47.101655] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:32.509 [2024-11-18 12:01:47.101683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.509 [2024-11-18 12:01:47.101708] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:32.509 [2024-11-18 12:01:47.101729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.509 [2024-11-18 12:01:47.101750] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:32.509 [2024-11-18 12:01:47.101771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.509 [2024-11-18 12:01:47.101793] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:32.509 [2024-11-18 12:01:47.101813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.509 [2024-11-18 12:01:47.101834] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:33:32.509 [2024-11-18 12:01:47.101916] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2000 (9): Bad file descriptor 00:33:32.509 [2024-11-18 12:01:47.105767] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:33:32.509 5905.80 IOPS, 23.07 MiB/s [2024-11-18T11:01:58.394Z] [2024-11-18 12:01:47.225267] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
00:33:32.509 5918.33 IOPS, 23.12 MiB/s [2024-11-18T11:01:58.394Z] 5972.29 IOPS, 23.33 MiB/s [2024-11-18T11:01:58.394Z] 6004.00 IOPS, 23.45 MiB/s [2024-11-18T11:01:58.394Z] 6019.11 IOPS, 23.51 MiB/s [2024-11-18T11:01:58.394Z] [2024-11-18 12:01:51.691123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:107808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.509 [2024-11-18 12:01:51.691185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.509 [2024-11-18 12:01:51.691237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:107936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.509 [2024-11-18 12:01:51.691261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.509 [2024-11-18 12:01:51.691287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:107944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.509 [2024-11-18 12:01:51.691308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.509 [2024-11-18 12:01:51.691331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:107952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.509 [2024-11-18 12:01:51.691353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.509 [2024-11-18 12:01:51.691382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:107960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.509 [2024-11-18 12:01:51.691405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.509 [2024-11-18 
12:01:51.691428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:107968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.509 [2024-11-18 12:01:51.691448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.509 [2024-11-18 12:01:51.691487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:107976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.509 [2024-11-18 12:01:51.691520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.509 [2024-11-18 12:01:51.691560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:107984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.509 [2024-11-18 12:01:51.691584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.509 [2024-11-18 12:01:51.691609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:107992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.509 [2024-11-18 12:01:51.691634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.509 [2024-11-18 12:01:51.691658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:108000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.509 [2024-11-18 12:01:51.691681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.509 [2024-11-18 12:01:51.691705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:108008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.509 [2024-11-18 12:01:51.691726] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.509 [2024-11-18 12:01:51.691750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:108016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.509 [2024-11-18 12:01:51.691772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.509 [2024-11-18 12:01:51.691810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:108024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.509 [2024-11-18 12:01:51.691832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.509 [2024-11-18 12:01:51.691870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:108032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.509 [2024-11-18 12:01:51.691892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.509 [2024-11-18 12:01:51.691916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:108040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.509 [2024-11-18 12:01:51.691937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.509 [2024-11-18 12:01:51.691960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:108048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.509 [2024-11-18 12:01:51.691981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.509 [2024-11-18 12:01:51.692004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:108056 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:33:32.509 [2024-11-18 12:01:51.692029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.509 [2024-11-18 12:01:51.692053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:108064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.509 [2024-11-18 12:01:51.692074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.509 [2024-11-18 12:01:51.692096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:108072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.509 [2024-11-18 12:01:51.692116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.509 [2024-11-18 12:01:51.692139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:108080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.510 [2024-11-18 12:01:51.692160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.510 [2024-11-18 12:01:51.692182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:108088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.510 [2024-11-18 12:01:51.692203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.510 [2024-11-18 12:01:51.692226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:108096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.510 [2024-11-18 12:01:51.692247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.510 [2024-11-18 12:01:51.692270] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:108104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.510 [2024-11-18 12:01:51.692290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.510 [2024-11-18 12:01:51.692313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:108112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.510 [2024-11-18 12:01:51.692334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.510 [2024-11-18 12:01:51.692356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:108120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.510 [2024-11-18 12:01:51.692377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.510 [2024-11-18 12:01:51.692400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:108128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.510 [2024-11-18 12:01:51.692421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.510 [2024-11-18 12:01:51.692443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:108136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.510 [2024-11-18 12:01:51.692464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.510 [2024-11-18 12:01:51.692486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:108144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.510 [2024-11-18 12:01:51.692531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.510 [2024-11-18 12:01:51.692557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:108152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.510 [2024-11-18 12:01:51.692578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.510 [2024-11-18 12:01:51.692606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:108160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.510 [2024-11-18 12:01:51.692628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.510 [2024-11-18 12:01:51.692651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:108168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.510 [2024-11-18 12:01:51.692673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.510 [2024-11-18 12:01:51.692696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:108176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.510 [2024-11-18 12:01:51.692718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.510 [2024-11-18 12:01:51.692741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:108184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.510 [2024-11-18 12:01:51.692762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.510 [2024-11-18 12:01:51.692786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:108192 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:33:32.510 [2024-11-18 12:01:51.692822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.510 [2024-11-18 12:01:51.692846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:108200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.510 [2024-11-18 12:01:51.692867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.510 [2024-11-18 12:01:51.692889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:108208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.510 [2024-11-18 12:01:51.692910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.510 [2024-11-18 12:01:51.692933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:108216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.510 [2024-11-18 12:01:51.692953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.510 [2024-11-18 12:01:51.692976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:108224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.510 [2024-11-18 12:01:51.692997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.510 [2024-11-18 12:01:51.693019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:108232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.510 [2024-11-18 12:01:51.693053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.510 [2024-11-18 12:01:51.693079] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:108240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.510 [2024-11-18 12:01:51.693100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.510 [2024-11-18 12:01:51.693124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:108248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.510 [2024-11-18 12:01:51.693145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.510 [2024-11-18 12:01:51.693168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:108256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.510 [2024-11-18 12:01:51.693193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.510 [2024-11-18 12:01:51.693218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:108264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.510 [2024-11-18 12:01:51.693239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.510 [2024-11-18 12:01:51.693262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:108272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.510 [2024-11-18 12:01:51.693284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.510 [2024-11-18 12:01:51.693307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:108280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.510 [2024-11-18 12:01:51.693329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.510 [2024-11-18 12:01:51.693353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:108288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.510 [2024-11-18 12:01:51.693374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.510 [2024-11-18 12:01:51.693397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:108296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.510 [2024-11-18 12:01:51.693420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.510 [2024-11-18 12:01:51.693443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:108304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.510 [2024-11-18 12:01:51.693464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.510 [2024-11-18 12:01:51.693488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:108312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.510 [2024-11-18 12:01:51.693536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.510 [2024-11-18 12:01:51.693562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:108320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.510 [2024-11-18 12:01:51.693584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.510 [2024-11-18 12:01:51.693608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:108328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:32.510 [2024-11-18 12:01:51.693630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.510 [2024-11-18 12:01:51.693654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:108336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.510 [2024-11-18 12:01:51.693676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.510 [2024-11-18 12:01:51.693700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:108344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.510 [2024-11-18 12:01:51.693722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.510 [2024-11-18 12:01:51.693746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:108352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.510 [2024-11-18 12:01:51.693768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.510 [2024-11-18 12:01:51.693816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:108360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.510 [2024-11-18 12:01:51.693840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.510 [2024-11-18 12:01:51.693865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:108368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.510 [2024-11-18 12:01:51.693887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.510 [2024-11-18 12:01:51.693910] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:108376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.510 [2024-11-18 12:01:51.693932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.510 [2024-11-18 12:01:51.693973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:108384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.510 [2024-11-18 12:01:51.693996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.510 [2024-11-18 12:01:51.694020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:108392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.510 [2024-11-18 12:01:51.694041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.511 [2024-11-18 12:01:51.694064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:108400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.511 [2024-11-18 12:01:51.694086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.511 [2024-11-18 12:01:51.694109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:108408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.511 [2024-11-18 12:01:51.694130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.511 [2024-11-18 12:01:51.694153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:108416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.511 [2024-11-18 12:01:51.694175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.511 [2024-11-18 12:01:51.694197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:108424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.511 [2024-11-18 12:01:51.694218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.511 [2024-11-18 12:01:51.694242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.511 [2024-11-18 12:01:51.694263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.511 [2024-11-18 12:01:51.694286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:108440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.511 [2024-11-18 12:01:51.694307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.511 [2024-11-18 12:01:51.694331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:107816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.511 [2024-11-18 12:01:51.694352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.511 [2024-11-18 12:01:51.694376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:107824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.511 [2024-11-18 12:01:51.694398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.511 [2024-11-18 12:01:51.694426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:107832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:32.511 [2024-11-18 12:01:51.694448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.511 [2024-11-18 12:01:51.694471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:107840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.511 [2024-11-18 12:01:51.694499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.511 [2024-11-18 12:01:51.694554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:107848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.511 [2024-11-18 12:01:51.694577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.511 [2024-11-18 12:01:51.694621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:107856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.511 [2024-11-18 12:01:51.694644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.511 [2024-11-18 12:01:51.694668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:107864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.511 [2024-11-18 12:01:51.694690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.511 [2024-11-18 12:01:51.694714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:108448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.511 [2024-11-18 12:01:51.694736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.511 [2024-11-18 12:01:51.694760] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:108456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.511 [2024-11-18 12:01:51.694783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.511 [2024-11-18 12:01:51.694822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:108464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.511 [2024-11-18 12:01:51.694855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.511 [2024-11-18 12:01:51.694879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:108472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.511 [2024-11-18 12:01:51.694900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.511 [2024-11-18 12:01:51.694923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:108480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.511 [2024-11-18 12:01:51.694945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.511 [2024-11-18 12:01:51.694968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:108488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.511 [2024-11-18 12:01:51.694989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.511 [2024-11-18 12:01:51.695012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:108496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.511 [2024-11-18 12:01:51.695036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.511 [2024-11-18 12:01:51.695059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:108504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.511 [2024-11-18 12:01:51.695085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.511 [2024-11-18 12:01:51.695109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:108512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.511 [2024-11-18 12:01:51.695131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.511 [2024-11-18 12:01:51.695154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:108520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.511 [2024-11-18 12:01:51.695176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.511 [2024-11-18 12:01:51.695200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:108528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.511 [2024-11-18 12:01:51.695221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.511 [2024-11-18 12:01:51.695245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:108536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.511 [2024-11-18 12:01:51.695266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.511 [2024-11-18 12:01:51.695290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:108544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.511 
[2024-11-18 12:01:51.695312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.511 [2024-11-18 12:01:51.695335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:108552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.511 [2024-11-18 12:01:51.695357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.511 [2024-11-18 12:01:51.695380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:108560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.511 [2024-11-18 12:01:51.695403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.511 [2024-11-18 12:01:51.695426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:108568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.511 [2024-11-18 12:01:51.695447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.511 [2024-11-18 12:01:51.695486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:108576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.511 [2024-11-18 12:01:51.695518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.511 [2024-11-18 12:01:51.695543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:108584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.511 [2024-11-18 12:01:51.695565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.511 [2024-11-18 12:01:51.695589] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:108592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.511 [2024-11-18 12:01:51.695611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.511 [2024-11-18 12:01:51.695635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:108600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.511 [2024-11-18 12:01:51.695657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.511 [2024-11-18 12:01:51.695687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:108608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.511 [2024-11-18 12:01:51.695710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.511 [2024-11-18 12:01:51.695734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:108616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.511 [2024-11-18 12:01:51.695756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.512 [2024-11-18 12:01:51.695780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:108624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.512 [2024-11-18 12:01:51.695818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.512 [2024-11-18 12:01:51.695844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:108632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.512 [2024-11-18 12:01:51.695865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.512 [2024-11-18 12:01:51.695889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:108640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.512 [2024-11-18 12:01:51.695911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.512 [2024-11-18 12:01:51.695936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:108648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.512 [2024-11-18 12:01:51.695958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.512 [2024-11-18 12:01:51.695982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:108656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.512 [2024-11-18 12:01:51.696003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.512 [2024-11-18 12:01:51.696026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:108664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.512 [2024-11-18 12:01:51.696048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.512 [2024-11-18 12:01:51.696095] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:32.512 [2024-11-18 12:01:51.696122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108672 len:8 PRP1 0x0 PRP2 0x0 00:33:32.512 [2024-11-18 12:01:51.696143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.512 [2024-11-18 12:01:51.696169] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:32.512 [2024-11-18 12:01:51.696189] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:32.512 [2024-11-18 12:01:51.696207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108680 len:8 PRP1 0x0 PRP2 0x0 00:33:32.512 [2024-11-18 12:01:51.696226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.512 [2024-11-18 12:01:51.696247] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:32.512 [2024-11-18 12:01:51.696264] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:32.512 [2024-11-18 12:01:51.696281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108688 len:8 PRP1 0x0 PRP2 0x0 00:33:32.512 [2024-11-18 12:01:51.696300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.512 [2024-11-18 12:01:51.696332] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:32.512 [2024-11-18 12:01:51.696350] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:32.512 [2024-11-18 12:01:51.696368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108696 len:8 PRP1 0x0 PRP2 0x0 00:33:32.512 [2024-11-18 12:01:51.696388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.512 [2024-11-18 12:01:51.696407] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:32.512 [2024-11-18 12:01:51.696423] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:32.512 [2024-11-18 12:01:51.696440] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:107872 len:8 PRP1 0x0 PRP2 0x0 00:33:32.512 [2024-11-18 12:01:51.696459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.512 [2024-11-18 12:01:51.696479] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:32.512 [2024-11-18 12:01:51.696520] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:32.512 [2024-11-18 12:01:51.696541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:107880 len:8 PRP1 0x0 PRP2 0x0 00:33:32.512 [2024-11-18 12:01:51.696561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.512 [2024-11-18 12:01:51.696582] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:32.512 [2024-11-18 12:01:51.696600] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:32.512 [2024-11-18 12:01:51.696617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:107888 len:8 PRP1 0x0 PRP2 0x0 00:33:32.512 [2024-11-18 12:01:51.696637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.512 [2024-11-18 12:01:51.696656] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:32.512 [2024-11-18 12:01:51.696673] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:32.512 [2024-11-18 12:01:51.696691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:107896 len:8 PRP1 0x0 PRP2 0x0 00:33:32.512 [2024-11-18 12:01:51.696710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.512 [2024-11-18 12:01:51.696729] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:32.512 [2024-11-18 12:01:51.696747] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:32.512 [2024-11-18 12:01:51.696764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:107904 len:8 PRP1 0x0 PRP2 0x0 00:33:32.512 [2024-11-18 12:01:51.696783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.512 [2024-11-18 12:01:51.696819] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:32.512 [2024-11-18 12:01:51.696836] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:32.512 [2024-11-18 12:01:51.696853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:107912 len:8 PRP1 0x0 PRP2 0x0 00:33:32.512 [2024-11-18 12:01:51.696871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.512 [2024-11-18 12:01:51.696891] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:32.512 [2024-11-18 12:01:51.696907] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:32.512 [2024-11-18 12:01:51.696924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:107920 len:8 PRP1 0x0 PRP2 0x0 00:33:32.512 [2024-11-18 12:01:51.696960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.512 [2024-11-18 12:01:51.696981] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:32.512 [2024-11-18 12:01:51.696998] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:32.512 [2024-11-18 12:01:51.697016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:107928 len:8 PRP1 0x0 PRP2 0x0 00:33:32.512 [2024-11-18 12:01:51.697034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.512 [2024-11-18 12:01:51.697053] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:32.512 [2024-11-18 12:01:51.697069] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:32.512 [2024-11-18 12:01:51.697086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108704 len:8 PRP1 0x0 PRP2 0x0 00:33:32.512 [2024-11-18 12:01:51.697105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.512 [2024-11-18 12:01:51.697124] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:32.512 [2024-11-18 12:01:51.697141] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:32.512 [2024-11-18 12:01:51.697158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108712 len:8 PRP1 0x0 PRP2 0x0 00:33:32.512 [2024-11-18 12:01:51.697178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.512 [2024-11-18 12:01:51.697197] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:32.512 [2024-11-18 12:01:51.697214] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:32.512 [2024-11-18 12:01:51.697233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108720 len:8 PRP1 0x0 PRP2 0x0 
00:33:32.512 [2024-11-18 12:01:51.697251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.512 [2024-11-18 12:01:51.697270] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:32.512 [2024-11-18 12:01:51.697287] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:32.512 [2024-11-18 12:01:51.697304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108728 len:8 PRP1 0x0 PRP2 0x0 00:33:32.512 [2024-11-18 12:01:51.697322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.512 [2024-11-18 12:01:51.697342] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:32.512 [2024-11-18 12:01:51.697358] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:32.512 [2024-11-18 12:01:51.697375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108736 len:8 PRP1 0x0 PRP2 0x0 00:33:32.512 [2024-11-18 12:01:51.697393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.512 [2024-11-18 12:01:51.697413] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:32.512 [2024-11-18 12:01:51.697429] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:32.512 [2024-11-18 12:01:51.697446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108744 len:8 PRP1 0x0 PRP2 0x0 00:33:32.512 [2024-11-18 12:01:51.697465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.512 [2024-11-18 12:01:51.697484] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:32.512 [2024-11-18 12:01:51.697527] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:32.512 [2024-11-18 12:01:51.697550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108752 len:8 PRP1 0x0 PRP2 0x0 00:33:32.512 [2024-11-18 12:01:51.697571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.512 [2024-11-18 12:01:51.697592] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:32.512 [2024-11-18 12:01:51.697608] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:32.512 [2024-11-18 12:01:51.697627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108760 len:8 PRP1 0x0 PRP2 0x0 00:33:32.513 [2024-11-18 12:01:51.697646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.513 [2024-11-18 12:01:51.697666] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:32.513 [2024-11-18 12:01:51.697683] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:32.513 [2024-11-18 12:01:51.697700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108768 len:8 PRP1 0x0 PRP2 0x0 00:33:32.513 [2024-11-18 12:01:51.697719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.513 [2024-11-18 12:01:51.697739] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:32.513 [2024-11-18 12:01:51.697757] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:32.513 [2024-11-18 12:01:51.697782] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108776 len:8 PRP1 0x0 PRP2 0x0 00:33:32.513 [2024-11-18 12:01:51.697817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.513 [2024-11-18 12:01:51.697839] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:32.513 [2024-11-18 12:01:51.697856] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:32.513 [2024-11-18 12:01:51.697873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108784 len:8 PRP1 0x0 PRP2 0x0 00:33:32.513 [2024-11-18 12:01:51.697891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.513 [2024-11-18 12:01:51.697910] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:32.513 [2024-11-18 12:01:51.697927] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:32.513 [2024-11-18 12:01:51.697944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108792 len:8 PRP1 0x0 PRP2 0x0 00:33:32.513 [2024-11-18 12:01:51.697963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.513 [2024-11-18 12:01:51.697982] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:32.513 [2024-11-18 12:01:51.697998] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:32.513 [2024-11-18 12:01:51.698015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108800 len:8 PRP1 0x0 PRP2 0x0 00:33:32.513 [2024-11-18 12:01:51.698034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.513 [2024-11-18 12:01:51.698053] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:32.513 [2024-11-18 12:01:51.698070] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:32.513 [2024-11-18 12:01:51.698086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108808 len:8 PRP1 0x0 PRP2 0x0 00:33:32.513 [2024-11-18 12:01:51.698105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.513 [2024-11-18 12:01:51.698129] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:32.513 [2024-11-18 12:01:51.698147] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:32.513 [2024-11-18 12:01:51.698164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108816 len:8 PRP1 0x0 PRP2 0x0 00:33:32.513 [2024-11-18 12:01:51.698183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.513 [2024-11-18 12:01:51.698202] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:32.513 [2024-11-18 12:01:51.698219] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:32.513 [2024-11-18 12:01:51.698236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108824 len:8 PRP1 0x0 PRP2 0x0 00:33:32.513 [2024-11-18 12:01:51.698255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.513 [2024-11-18 12:01:51.698551] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:33:32.513 [2024-11-18 
12:01:51.698614] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:32.513 [2024-11-18 12:01:51.698641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.513 [2024-11-18 12:01:51.698666] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:32.513 [2024-11-18 12:01:51.698687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.513 [2024-11-18 12:01:51.698708] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:32.513 [2024-11-18 12:01:51.698735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.513 [2024-11-18 12:01:51.698758] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:32.513 [2024-11-18 12:01:51.698778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.513 [2024-11-18 12:01:51.698798] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:33:32.513 [2024-11-18 12:01:51.698866] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2000 (9): Bad file descriptor 00:33:32.513 [2024-11-18 12:01:51.702671] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:33:32.513 [2024-11-18 12:01:51.777247] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
00:33:32.513 5988.50 IOPS, 23.39 MiB/s [2024-11-18T11:01:58.398Z] 6012.91 IOPS, 23.49 MiB/s [2024-11-18T11:01:58.398Z] 6036.42 IOPS, 23.58 MiB/s [2024-11-18T11:01:58.398Z] 6049.00 IOPS, 23.63 MiB/s [2024-11-18T11:01:58.398Z] 6063.36 IOPS, 23.68 MiB/s [2024-11-18T11:01:58.398Z] 6081.13 IOPS, 23.75 MiB/s 00:33:32.513 Latency(us) 00:33:32.513 [2024-11-18T11:01:58.398Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:32.513 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:32.513 Verification LBA range: start 0x0 length 0x4000 00:33:32.513 NVMe0n1 : 15.01 6080.47 23.75 535.88 0.00 19307.25 1098.33 21068.61 00:33:32.513 [2024-11-18T11:01:58.398Z] =================================================================================================================== 00:33:32.513 [2024-11-18T11:01:58.398Z] Total : 6080.47 23.75 535.88 0.00 19307.25 1098.33 21068.61 00:33:32.513 Received shutdown signal, test time was about 15.000000 seconds 00:33:32.513 00:33:32.513 Latency(us) 00:33:32.513 [2024-11-18T11:01:58.398Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:32.513 [2024-11-18T11:01:58.398Z] =================================================================================================================== 00:33:32.513 [2024-11-18T11:01:58.398Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:32.513 12:01:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:33:32.513 12:01:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:33:32.513 12:01:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:33:32.513 12:01:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3096753 00:33:32.513 12:01:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 
128 -o 4096 -w verify -t 1 -f 00:33:32.513 12:01:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3096753 /var/tmp/bdevperf.sock 00:33:32.513 12:01:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 3096753 ']' 00:33:32.513 12:01:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:32.513 12:01:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:32.513 12:01:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:32.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:32.513 12:01:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:32.513 12:01:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:33.461 12:01:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:33.461 12:01:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:33:33.461 12:01:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:33.724 [2024-11-18 12:01:59.447824] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:33.724 12:01:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:33:33.982 [2024-11-18 12:01:59.704653] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:33:33.982 12:01:59 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:33:34.240 NVMe0n1 00:33:34.240 12:02:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:33:34.807 00:33:34.807 12:02:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:33:35.419 00:33:35.419 12:02:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:35.419 12:02:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:33:35.678 12:02:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:35.937 12:02:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:33:39.224 12:02:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:39.224 12:02:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:33:39.224 12:02:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:39.224 12:02:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3097558 00:33:39.224 12:02:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 3097558 00:33:40.161 { 00:33:40.161 "results": [ 00:33:40.161 { 00:33:40.161 "job": "NVMe0n1", 00:33:40.161 "core_mask": "0x1", 00:33:40.161 "workload": "verify", 00:33:40.161 "status": "finished", 00:33:40.161 "verify_range": { 00:33:40.161 "start": 0, 00:33:40.161 "length": 16384 00:33:40.161 }, 00:33:40.161 "queue_depth": 128, 00:33:40.161 "io_size": 4096, 00:33:40.161 "runtime": 1.009279, 00:33:40.161 "iops": 6146.9623364798035, 00:33:40.161 "mibps": 24.011571626874233, 00:33:40.161 "io_failed": 0, 00:33:40.161 "io_timeout": 0, 00:33:40.161 "avg_latency_us": 20736.168967213507, 00:33:40.161 "min_latency_us": 4538.974814814815, 00:33:40.161 "max_latency_us": 19515.164444444443 00:33:40.161 } 00:33:40.161 ], 00:33:40.161 "core_count": 1 00:33:40.161 } 00:33:40.161 12:02:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:40.161 [2024-11-18 12:01:58.207523] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:33:40.161 [2024-11-18 12:01:58.207683] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3096753 ] 00:33:40.161 [2024-11-18 12:01:58.354232] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:40.161 [2024-11-18 12:01:58.480574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:40.161 [2024-11-18 12:02:01.583928] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:33:40.161 [2024-11-18 12:02:01.584071] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:40.161 [2024-11-18 12:02:01.584126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:40.161 [2024-11-18 12:02:01.584156] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:40.161 [2024-11-18 12:02:01.584178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:40.161 [2024-11-18 12:02:01.584201] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:40.161 [2024-11-18 12:02:01.584222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:40.161 [2024-11-18 12:02:01.584245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:40.161 [2024-11-18 12:02:01.584266] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:40.161 [2024-11-18 12:02:01.584287] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:33:40.161 [2024-11-18 12:02:01.584372] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:33:40.161 [2024-11-18 12:02:01.584431] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2000 (9): Bad file descriptor 00:33:40.161 [2024-11-18 12:02:01.727713] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:33:40.161 Running I/O for 1 seconds... 00:33:40.161 6076.00 IOPS, 23.73 MiB/s 00:33:40.161 Latency(us) 00:33:40.161 [2024-11-18T11:02:06.046Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:40.161 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:40.161 Verification LBA range: start 0x0 length 0x4000 00:33:40.161 NVMe0n1 : 1.01 6146.96 24.01 0.00 0.00 20736.17 4538.97 19515.16 00:33:40.161 [2024-11-18T11:02:06.046Z] =================================================================================================================== 00:33:40.161 [2024-11-18T11:02:06.046Z] Total : 6146.96 24.01 0.00 0.00 20736.17 4538.97 19515.16 00:33:40.161 12:02:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:40.161 12:02:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:33:40.727 12:02:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:40.985 12:02:06 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:40.985 12:02:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:33:41.243 12:02:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:41.499 12:02:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:33:44.788 12:02:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:44.788 12:02:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:33:44.788 12:02:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 3096753 00:33:44.788 12:02:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3096753 ']' 00:33:44.788 12:02:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3096753 00:33:44.788 12:02:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:33:44.788 12:02:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:44.788 12:02:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3096753 00:33:44.788 12:02:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:44.788 12:02:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:44.788 12:02:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3096753' 00:33:44.789 killing 
process with pid 3096753 00:33:44.789 12:02:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3096753 00:33:44.789 12:02:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3096753 00:33:45.722 12:02:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:33:45.722 12:02:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:45.980 12:02:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:33:45.980 12:02:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:45.980 12:02:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:33:45.980 12:02:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:45.980 12:02:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:33:45.980 12:02:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:45.980 12:02:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:33:45.980 12:02:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:45.980 12:02:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:45.980 rmmod nvme_tcp 00:33:45.980 rmmod nvme_fabrics 00:33:45.980 rmmod nvme_keyring 00:33:45.980 12:02:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:45.980 12:02:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:33:45.980 12:02:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:33:45.980 12:02:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 3094341 ']' 00:33:45.980 12:02:11 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 3094341 00:33:45.980 12:02:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3094341 ']' 00:33:45.980 12:02:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3094341 00:33:45.980 12:02:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:33:45.980 12:02:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:45.980 12:02:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3094341 00:33:45.980 12:02:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:45.980 12:02:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:45.980 12:02:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3094341' 00:33:45.980 killing process with pid 3094341 00:33:45.980 12:02:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3094341 00:33:45.980 12:02:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3094341 00:33:47.360 12:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:47.360 12:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:47.360 12:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:47.360 12:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:33:47.360 12:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:33:47.360 12:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:47.360 12:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:33:47.360 12:02:12 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:47.360 12:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:47.360 12:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:47.360 12:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:47.360 12:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:49.264 12:02:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:49.264 00:33:49.264 real 0m40.333s 00:33:49.264 user 2m22.367s 00:33:49.264 sys 0m6.306s 00:33:49.264 12:02:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:49.264 12:02:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:49.264 ************************************ 00:33:49.264 END TEST nvmf_failover 00:33:49.264 ************************************ 00:33:49.264 12:02:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:33:49.264 12:02:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:49.264 12:02:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:49.264 12:02:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.264 ************************************ 00:33:49.264 START TEST nvmf_host_discovery 00:33:49.264 ************************************ 00:33:49.264 12:02:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:33:49.264 * Looking for test storage... 
00:33:49.264 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:49.264 12:02:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:49.264 12:02:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:33:49.264 12:02:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:49.522 12:02:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:49.522 12:02:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:49.522 12:02:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:49.522 12:02:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:49.522 12:02:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:33:49.522 12:02:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:33:49.522 12:02:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:33:49.522 12:02:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:33:49.522 12:02:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:33:49.522 12:02:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:33:49.522 12:02:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:33:49.522 12:02:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:49.522 12:02:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:33:49.522 12:02:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:33:49.522 12:02:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:33:49.522 12:02:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:49.522 12:02:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:33:49.522 12:02:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:33:49.522 12:02:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:49.522 12:02:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:33:49.522 12:02:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:33:49.522 12:02:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:33:49.522 12:02:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:33:49.522 12:02:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:49.522 12:02:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:33:49.522 12:02:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:33:49.522 12:02:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:49.522 12:02:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:49.523 12:02:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:33:49.523 12:02:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:49.523 12:02:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:49.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:49.523 --rc genhtml_branch_coverage=1 00:33:49.523 --rc genhtml_function_coverage=1 00:33:49.523 --rc 
genhtml_legend=1 00:33:49.523 --rc geninfo_all_blocks=1 00:33:49.523 --rc geninfo_unexecuted_blocks=1 00:33:49.523 00:33:49.523 ' 00:33:49.523 12:02:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:49.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:49.523 --rc genhtml_branch_coverage=1 00:33:49.523 --rc genhtml_function_coverage=1 00:33:49.523 --rc genhtml_legend=1 00:33:49.523 --rc geninfo_all_blocks=1 00:33:49.523 --rc geninfo_unexecuted_blocks=1 00:33:49.523 00:33:49.523 ' 00:33:49.523 12:02:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:49.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:49.523 --rc genhtml_branch_coverage=1 00:33:49.523 --rc genhtml_function_coverage=1 00:33:49.523 --rc genhtml_legend=1 00:33:49.523 --rc geninfo_all_blocks=1 00:33:49.523 --rc geninfo_unexecuted_blocks=1 00:33:49.523 00:33:49.523 ' 00:33:49.523 12:02:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:49.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:49.523 --rc genhtml_branch_coverage=1 00:33:49.523 --rc genhtml_function_coverage=1 00:33:49.523 --rc genhtml_legend=1 00:33:49.523 --rc geninfo_all_blocks=1 00:33:49.523 --rc geninfo_unexecuted_blocks=1 00:33:49.523 00:33:49.523 ' 00:33:49.523 12:02:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:49.523 12:02:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:33:49.523 12:02:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:49.523 12:02:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:49.523 12:02:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:49.523 12:02:15 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:49.523 12:02:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:49.523 12:02:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:49.523 12:02:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:49.523 12:02:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:49.523 12:02:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:49.523 12:02:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:49.523 12:02:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:49.523 12:02:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:49.523 12:02:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:49.523 12:02:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:49.523 12:02:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:49.523 12:02:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:49.523 12:02:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:49.523 12:02:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:33:49.523 12:02:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:49.523 12:02:15 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:49.523 12:02:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:49.523 12:02:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:49.523 12:02:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:49.523 12:02:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:49.523 12:02:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:33:49.523 12:02:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:49.523 12:02:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:33:49.523 12:02:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:49.523 12:02:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:49.523 12:02:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:49.523 12:02:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:49.523 12:02:15 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:49.523 12:02:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:49.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:49.523 12:02:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:49.523 12:02:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:49.523 12:02:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:49.523 12:02:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:33:49.523 12:02:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:33:49.523 12:02:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:33:49.523 12:02:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:33:49.523 12:02:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:33:49.523 12:02:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:33:49.523 12:02:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:33:49.523 12:02:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:49.523 12:02:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:49.523 12:02:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:49.523 12:02:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:49.523 12:02:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 
00:33:49.523 12:02:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:49.523 12:02:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:49.523 12:02:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:49.523 12:02:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:49.523 12:02:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:49.523 12:02:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:33:49.523 12:02:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:51.425 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:51.425 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:33:51.425 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:51.425 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:51.426 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:51.426 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:51.426 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:51.426 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:33:51.426 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:51.426 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:33:51.426 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:33:51.426 
12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:33:51.426 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:33:51.426 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:33:51.426 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:33:51.426 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:51.426 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:51.426 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:51.426 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:51.426 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:51.426 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:51.426 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:51.426 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:51.426 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:51.426 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:51.426 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:51.426 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:51.426 12:02:17 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:51.426 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:51.426 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:51.426 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:51.426 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:51.426 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:51.426 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:51.426 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:51.426 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:51.426 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:51.426 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:51.426 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:51.426 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:51.426 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:51.426 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:51.426 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:51.426 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:51.426 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:51.426 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 
00:33:51.426 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:51.426 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:51.426 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:51.426 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:51.426 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:51.426 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:51.426 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:51.426 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:51.426 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:51.426 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:51.426 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:51.426 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:51.426 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:51.426 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:51.426 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:51.426 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:51.426 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:51.426 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:51.426 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:51.426 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:51.426 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:51.426 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:51.426 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:51.426 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:51.426 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:51.426 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:51.426 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:51.426 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:33:51.426 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:51.426 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:51.426 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:51.426 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:51.426 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:51.426 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:51.426 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:51.426 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:51.426 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:51.426 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:51.426 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:51.426 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:51.426 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:51.426 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:51.426 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:51.426 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:51.426 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:51.426 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:51.426 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:51.426 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:51.426 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:51.426 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:51.685 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:51.685 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- 
# ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:51.685 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:51.685 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:51.685 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:51.685 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.268 ms 00:33:51.685 00:33:51.685 --- 10.0.0.2 ping statistics --- 00:33:51.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:51.685 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:33:51.685 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:51.685 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:51.685 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:33:51.685 00:33:51.685 --- 10.0.0.1 ping statistics --- 00:33:51.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:51.685 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:33:51.685 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:51.685 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:33:51.685 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:51.685 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:51.685 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:51.685 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:51.685 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:51.685 
12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:51.685 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:51.685 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:33:51.685 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:51.685 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:51.685 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:51.685 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=3100438 00:33:51.685 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:33:51.685 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 3100438 00:33:51.685 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 3100438 ']' 00:33:51.685 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:51.685 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:51.685 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:51.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:33:51.685 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:51.685 12:02:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:51.685 [2024-11-18 12:02:17.452856] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:33:51.685 [2024-11-18 12:02:17.452995] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:51.943 [2024-11-18 12:02:17.598559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:51.943 [2024-11-18 12:02:17.727849] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:51.943 [2024-11-18 12:02:17.727941] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:51.943 [2024-11-18 12:02:17.727981] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:51.943 [2024-11-18 12:02:17.728019] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:51.943 [2024-11-18 12:02:17.728051] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:51.943 [2024-11-18 12:02:17.729800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:52.877 12:02:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:52.877 12:02:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:33:52.877 12:02:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:52.877 12:02:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:52.877 12:02:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:52.877 12:02:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:52.877 12:02:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:52.877 12:02:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.877 12:02:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:52.877 [2024-11-18 12:02:18.445974] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:52.877 12:02:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.877 12:02:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:33:52.877 12:02:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.877 12:02:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:52.877 [2024-11-18 12:02:18.454179] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:33:52.877 12:02:18 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.877 12:02:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:33:52.877 12:02:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.877 12:02:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:52.877 null0 00:33:52.877 12:02:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.877 12:02:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:33:52.877 12:02:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.877 12:02:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:52.877 null1 00:33:52.877 12:02:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.877 12:02:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:33:52.877 12:02:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.877 12:02:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:52.877 12:02:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.877 12:02:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=3100588 00:33:52.877 12:02:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:33:52.877 12:02:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 3100588 /tmp/host.sock 00:33:52.877 12:02:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@835 -- # '[' -z 3100588 ']' 00:33:52.877 12:02:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:33:52.877 12:02:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:52.877 12:02:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:33:52.877 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:33:52.877 12:02:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:52.877 12:02:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:52.877 [2024-11-18 12:02:18.571723] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:33:52.877 [2024-11-18 12:02:18.571886] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3100588 ] 00:33:52.877 [2024-11-18 12:02:18.720360] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:53.137 [2024-11-18 12:02:18.856923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:53.703 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:53.703 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:33:53.703 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:53.703 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:33:53.703 
12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.703 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:53.703 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.703 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:33:53.703 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.703 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:53.703 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.703 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:33:53.703 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:33:53.703 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:53.703 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.703 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:53.703 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:53.703 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:53.703 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:53.703 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.962 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:33:53.962 12:02:19 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:33:53.962 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:53.962 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.962 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:53.962 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:53.962 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:53.962 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:53.962 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.962 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:33:53.962 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:33:53.962 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.962 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:53.962 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.962 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:33:53.962 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:53.962 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.962 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:53.962 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set 
+x 00:33:53.962 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:53.962 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:53.962 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.962 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:33:53.962 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:33:53.962 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:53.962 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.962 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:53.962 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:53.962 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:53.962 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:53.962 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.962 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:33:53.962 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:33:53.962 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.962 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:53.962 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.962 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:33:53.962 12:02:19 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:53.962 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.962 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:53.962 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:53.962 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:53.962 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:53.962 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.962 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:33:53.962 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:33:53.962 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:53.962 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.962 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:53.962 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:53.962 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:53.962 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:53.962 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.962 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:33:53.962 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 
4420 00:33:53.962 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.962 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:53.962 [2024-11-18 12:02:19.818113] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:53.962 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.962 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:33:53.962 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:53.962 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:53.962 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.962 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:53.962 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:53.962 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:53.962 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.221 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:33:54.221 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:33:54.221 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:54.221 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.221 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:54.221 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:33:54.221 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:54.221 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:54.221 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.221 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:33:54.221 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:33:54.221 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:33:54.221 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:54.221 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:54.221 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:54.221 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:54.221 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:54.221 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:33:54.221 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:33:54.221 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.221 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:54.221 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 
-- # jq '. | length' 00:33:54.221 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.221 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:33:54.221 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:33:54.221 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:33:54.221 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:54.221 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:33:54.221 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.221 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:54.221 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.221 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:54.221 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:54.221 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:54.221 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:54.221 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:33:54.221 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:33:54.221 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # 
rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:54.221 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.221 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:54.221 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:54.221 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:54.221 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:54.221 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.221 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:33:54.221 12:02:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:33:54.789 [2024-11-18 12:02:20.607680] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:54.789 [2024-11-18 12:02:20.607726] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:54.789 [2024-11-18 12:02:20.607766] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:55.047 [2024-11-18 12:02:20.694085] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:33:55.047 [2024-11-18 12:02:20.795306] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:33:55.047 [2024-11-18 12:02:20.796952] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x6150001f2a00:1 started. 
00:33:55.047 [2024-11-18 12:02:20.799470] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:55.047 [2024-11-18 12:02:20.799514] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:55.047 [2024-11-18 12:02:20.805916] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x6150001f2a00 was disconnected and freed. delete nvme_qpair. 00:33:55.306 12:02:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:55.306 12:02:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:33:55.306 12:02:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:33:55.306 12:02:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:55.306 12:02:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.306 12:02:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:55.306 12:02:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:55.306 12:02:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:55.306 12:02:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:55.306 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.306 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:55.306 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:55.306 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- 
# waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:33:55.306 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:33:55.306 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:55.306 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:55.306 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:33:55.306 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:33:55.306 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:55.306 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:55.306 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.306 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:55.306 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:55.306 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:55.306 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.306 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:33:55.306 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:55.306 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:33:55.306 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths 
nvme0)" == "$NVMF_PORT" ]]' 00:33:55.306 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:55.306 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:55.306 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:33:55.306 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:33:55.306 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:55.306 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:55.306 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.306 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:55.306 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:55.306 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:33:55.306 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.306 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:33:55.306 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:55.306 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:33:55.306 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:33:55.306 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:55.306 12:02:21 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:55.306 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:55.306 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:55.306 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:55.306 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:33:55.306 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:33:55.306 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:33:55.306 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.306 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:55.306 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.306 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:33:55.306 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:33:55.306 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:33:55.306 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:55.306 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:33:55.306 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 
00:33:55.306 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:55.306 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.306 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:55.306 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:55.306 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:55.306 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:55.306 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:33:55.306 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:33:55.306 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:55.306 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.306 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:55.306 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:55.306 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:55.306 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:55.565 [2024-11-18 12:02:21.333389] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x6150001f2c80:1 started. 
00:33:55.565 [2024-11-18 12:02:21.339549] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x6150001f2c80 was disconnected and freed. delete nvme_qpair. 00:33:55.565 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.565 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:55.565 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:55.565 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:33:55.565 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:33:55.565 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:55.565 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:55.565 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:55.565 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:55.565 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:55.565 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:33:55.565 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:33:55.565 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:33:55.565 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.565 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:55.565 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.565 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:33:55.565 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:33:55.565 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:33:55.565 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:55.565 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:33:55.565 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.565 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:55.565 [2024-11-18 12:02:21.407918] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:55.565 [2024-11-18 12:02:21.408964] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:33:55.565 [2024-11-18 12:02:21.409021] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:55.565 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.565 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:55.565 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:33:55.565 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:55.565 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:55.565 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:33:55.565 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:33:55.565 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:55.565 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:55.565 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.565 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:55.565 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:55.565 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:55.565 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.824 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:55.824 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:55.824 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:55.824 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:55.824 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:55.824 12:02:21 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:55.824 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:33:55.824 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:33:55.824 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:55.824 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:55.824 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.824 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:55.824 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:55.824 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:55.824 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.824 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:55.824 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:55.824 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:33:55.824 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:33:55.824 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:55.824 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:55.825 12:02:21 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:33:55.825 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:33:55.825 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:55.825 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.825 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:55.825 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:55.825 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:55.825 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:33:55.825 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.825 [2024-11-18 12:02:21.535683] bdev_nvme.c:7308:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:33:55.825 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:33:55.825 12:02:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:33:56.085 [2024-11-18 12:02:21.844861] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:33:56.085 [2024-11-18 12:02:21.844987] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:56.085 [2024-11-18 12:02:21.845015] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 
00:33:56.085 [2024-11-18 12:02:21.845031] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:33:57.024 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:57.024 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:33:57.024 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:33:57.024 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:57.024 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:57.024 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.024 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:57.024 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:57.024 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:33:57.024 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.024 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:33:57.024 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:57.024 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:33:57.024 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:33:57.024 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:33:57.024 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:57.024 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:57.024 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:57.024 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:57.024 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:33:57.024 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:33:57.024 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:33:57.024 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.024 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:57.024 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.024 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:33:57.024 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:33:57.024 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:33:57.024 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:57.024 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:57.024 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.024 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:57.024 [2024-11-18 12:02:22.640326] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:33:57.024 [2024-11-18 12:02:22.640406] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:57.024 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.024 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:57.024 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:57.024 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local 
max=10 00:33:57.024 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:57.024 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:33:57.024 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:33:57.024 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:57.024 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:57.024 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.024 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:57.024 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:57.024 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:57.024 [2024-11-18 12:02:22.649998] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:57.024 [2024-11-18 12:02:22.650060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.024 [2024-11-18 12:02:22.650087] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:57.024 [2024-11-18 12:02:22.650109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.024 [2024-11-18 12:02:22.650130] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:57.024 [2024-11-18 12:02:22.650151] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.024 [2024-11-18 12:02:22.650177] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:57.024 [2024-11-18 12:02:22.650214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.024 [2024-11-18 12:02:22.650236] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:33:57.024 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.024 [2024-11-18 12:02:22.659982] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:33:57.024 [2024-11-18 12:02:22.670020] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:57.024 [2024-11-18 12:02:22.670068] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:57.024 [2024-11-18 12:02:22.670089] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:57.024 [2024-11-18 12:02:22.670106] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:57.024 [2024-11-18 12:02:22.670179] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:33:57.024 [2024-11-18 12:02:22.670396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.024 [2024-11-18 12:02:22.670440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:33:57.024 [2024-11-18 12:02:22.670468] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:33:57.024 [2024-11-18 12:02:22.670533] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:33:57.024 [2024-11-18 12:02:22.670585] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:57.024 [2024-11-18 12:02:22.670611] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:57.024 [2024-11-18 12:02:22.670642] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:57.024 [2024-11-18 12:02:22.670662] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:57.024 [2024-11-18 12:02:22.670678] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:57.024 [2024-11-18 12:02:22.670692] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:57.024 [2024-11-18 12:02:22.680222] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:57.024 [2024-11-18 12:02:22.680258] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:33:57.024 [2024-11-18 12:02:22.680276] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:57.025 [2024-11-18 12:02:22.680290] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:57.025 [2024-11-18 12:02:22.680331] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:33:57.025 [2024-11-18 12:02:22.680487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.025 [2024-11-18 12:02:22.680549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:33:57.025 [2024-11-18 12:02:22.680572] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:33:57.025 [2024-11-18 12:02:22.680611] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:33:57.025 [2024-11-18 12:02:22.680671] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:57.025 [2024-11-18 12:02:22.680697] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:57.025 [2024-11-18 12:02:22.680717] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:57.025 [2024-11-18 12:02:22.680736] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:57.025 [2024-11-18 12:02:22.680751] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
00:33:57.025 [2024-11-18 12:02:22.680779] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:57.025 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:57.025 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:57.025 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:57.025 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:57.025 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:57.025 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:57.025 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:33:57.025 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:33:57.025 [2024-11-18 12:02:22.690385] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:57.025 [2024-11-18 12:02:22.690426] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:57.025 [2024-11-18 12:02:22.690445] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:57.025 [2024-11-18 12:02:22.690459] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:57.025 [2024-11-18 12:02:22.690510] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:33:57.025 [2024-11-18 12:02:22.690670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.025 [2024-11-18 12:02:22.690707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:33:57.025 [2024-11-18 12:02:22.690731] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:33:57.025 [2024-11-18 12:02:22.690789] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:33:57.025 [2024-11-18 12:02:22.690850] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:57.025 [2024-11-18 12:02:22.690878] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:57.025 [2024-11-18 12:02:22.690899] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:57.025 [2024-11-18 12:02:22.690920] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:57.025 [2024-11-18 12:02:22.690937] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:57.025 [2024-11-18 12:02:22.690951] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:33:57.025 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:57.025 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:57.025 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.025 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:57.025 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:57.025 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:57.025 [2024-11-18 12:02:22.700564] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:57.025 [2024-11-18 12:02:22.700599] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:57.025 [2024-11-18 12:02:22.700616] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:57.025 [2024-11-18 12:02:22.700628] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:57.025 [2024-11-18 12:02:22.700666] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:33:57.025 [2024-11-18 12:02:22.700840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.025 [2024-11-18 12:02:22.700882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:33:57.025 [2024-11-18 12:02:22.700908] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:33:57.025 [2024-11-18 12:02:22.700971] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:33:57.025 [2024-11-18 12:02:22.701008] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:57.025 [2024-11-18 12:02:22.701032] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:57.025 [2024-11-18 12:02:22.701053] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:57.025 [2024-11-18 12:02:22.701073] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:57.025 [2024-11-18 12:02:22.701089] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:57.025 [2024-11-18 12:02:22.701102] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:57.025 [2024-11-18 12:02:22.710706] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:57.025 [2024-11-18 12:02:22.710737] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:33:57.025 [2024-11-18 12:02:22.710752] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:57.025 [2024-11-18 12:02:22.710764] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:57.025 [2024-11-18 12:02:22.710812] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:33:57.025 [2024-11-18 12:02:22.711015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.025 [2024-11-18 12:02:22.711051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:33:57.025 [2024-11-18 12:02:22.711087] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:33:57.025 [2024-11-18 12:02:22.711121] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:33:57.025 [2024-11-18 12:02:22.711151] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:57.025 [2024-11-18 12:02:22.711180] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:57.025 [2024-11-18 12:02:22.711200] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:57.025 [2024-11-18 12:02:22.711233] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:57.025 [2024-11-18 12:02:22.711247] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
00:33:57.025 [2024-11-18 12:02:22.711259] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:57.025 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.025 [2024-11-18 12:02:22.720854] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:57.025 [2024-11-18 12:02:22.720891] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:57.025 [2024-11-18 12:02:22.720909] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:57.025 [2024-11-18 12:02:22.720924] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:57.025 [2024-11-18 12:02:22.720976] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:33:57.025 [2024-11-18 12:02:22.721162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.025 [2024-11-18 12:02:22.721202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:33:57.025 [2024-11-18 12:02:22.721228] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:33:57.025 [2024-11-18 12:02:22.721264] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:33:57.025 [2024-11-18 12:02:22.721297] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:57.025 [2024-11-18 12:02:22.721320] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:57.025 [2024-11-18 12:02:22.721341] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:57.025 [2024-11-18 12:02:22.721361] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:57.025 [2024-11-18 12:02:22.721377] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:57.025 [2024-11-18 12:02:22.721391] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:33:57.025 [2024-11-18 12:02:22.727513] bdev_nvme.c:7171:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:33:57.025 [2024-11-18 12:02:22.727571] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:33:57.025 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:57.025 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:57.026 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:33:57.026 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:33:57.026 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:57.026 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:57.026 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:33:57.026 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:33:57.026 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:57.026 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:57.026 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.026 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:57.026 12:02:22 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:57.026 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:33:57.026 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.026 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:33:57.026 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:57.026 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:33:57.026 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:33:57.026 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:57.026 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:57.026 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:57.026 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:57.026 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:57.026 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:33:57.026 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:33:57.026 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:33:57.026 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.026 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:57.026 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.026 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:33:57.026 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:33:57.026 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:33:57.026 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:57.026 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:33:57.026 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.026 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:57.026 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.026 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:33:57.026 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:33:57.026 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:57.026 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:57.026 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:33:57.026 12:02:22 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:33:57.026 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:57.026 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.026 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:57.026 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:57.026 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:57.026 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:57.026 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.026 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:33:57.026 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:57.026 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:33:57.026 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:33:57.026 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:57.026 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:57.026 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:33:57.026 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:33:57.026 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:57.026 
12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.026 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:57.026 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:57.026 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:57.026 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:57.026 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.285 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:33:57.285 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:57.285 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:33:57.285 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:33:57.285 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:57.285 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:57.285 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:57.285 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:57.285 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:57.285 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:33:57.285 12:02:22 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:33:57.285 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:33:57.285 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.285 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:57.285 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.285 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:33:57.285 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:33:57.285 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:33:57.285 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:57.285 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:57.285 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.285 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:58.225 [2024-11-18 12:02:24.007252] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:58.225 [2024-11-18 12:02:24.007299] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:58.225 [2024-11-18 12:02:24.007344] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:58.225 [2024-11-18 12:02:24.094646] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] 
NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:33:58.483 [2024-11-18 12:02:24.158790] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:33:58.483 [2024-11-18 12:02:24.160233] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x6150001f3e00:1 started. 00:33:58.483 [2024-11-18 12:02:24.162922] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:58.483 [2024-11-18 12:02:24.162979] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:33:58.483 12:02:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.483 12:02:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:58.483 12:02:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:33:58.483 12:02:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:58.483 12:02:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:33:58.483 [2024-11-18 12:02:24.165527] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x6150001f3e00 was disconnected and freed. delete nvme_qpair. 
00:33:58.483 12:02:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:58.483 12:02:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:33:58.483 12:02:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:58.483 12:02:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:58.483 12:02:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.483 12:02:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:58.483 request: 00:33:58.483 { 00:33:58.483 "name": "nvme", 00:33:58.483 "trtype": "tcp", 00:33:58.483 "traddr": "10.0.0.2", 00:33:58.483 "adrfam": "ipv4", 00:33:58.483 "trsvcid": "8009", 00:33:58.483 "hostnqn": "nqn.2021-12.io.spdk:test", 00:33:58.483 "wait_for_attach": true, 00:33:58.483 "method": "bdev_nvme_start_discovery", 00:33:58.483 "req_id": 1 00:33:58.483 } 00:33:58.483 Got JSON-RPC error response 00:33:58.483 response: 00:33:58.483 { 00:33:58.483 "code": -17, 00:33:58.483 "message": "File exists" 00:33:58.483 } 00:33:58.483 12:02:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:33:58.484 12:02:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:33:58.484 12:02:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:58.484 12:02:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:58.484 12:02:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:58.484 12:02:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # 
get_discovery_ctrlrs 00:33:58.484 12:02:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:33:58.484 12:02:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:33:58.484 12:02:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.484 12:02:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:58.484 12:02:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:33:58.484 12:02:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:33:58.484 12:02:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.484 12:02:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:33:58.484 12:02:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:33:58.484 12:02:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:58.484 12:02:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:58.484 12:02:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.484 12:02:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:58.484 12:02:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:58.484 12:02:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:58.484 12:02:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.484 12:02:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:58.484 12:02:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:58.484 12:02:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:33:58.484 12:02:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:58.484 12:02:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:33:58.484 12:02:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:58.484 12:02:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:33:58.484 12:02:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:58.484 12:02:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:58.484 12:02:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.484 12:02:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:58.484 request: 00:33:58.484 { 00:33:58.484 "name": "nvme_second", 00:33:58.484 "trtype": "tcp", 00:33:58.484 "traddr": "10.0.0.2", 00:33:58.484 "adrfam": "ipv4", 00:33:58.484 "trsvcid": "8009", 00:33:58.484 "hostnqn": "nqn.2021-12.io.spdk:test", 00:33:58.484 "wait_for_attach": true, 00:33:58.484 "method": "bdev_nvme_start_discovery", 00:33:58.484 "req_id": 1 00:33:58.484 } 00:33:58.484 Got JSON-RPC error response 00:33:58.484 response: 00:33:58.484 { 00:33:58.484 "code": -17, 00:33:58.484 "message": "File exists" 00:33:58.484 } 
00:33:58.484 12:02:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:33:58.484 12:02:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:33:58.484 12:02:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:58.484 12:02:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:58.484 12:02:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:58.484 12:02:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:33:58.484 12:02:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:33:58.484 12:02:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:33:58.484 12:02:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.484 12:02:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:58.484 12:02:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:33:58.484 12:02:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:33:58.484 12:02:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.484 12:02:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:33:58.484 12:02:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:33:58.484 12:02:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:58.484 12:02:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:58.484 12:02:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:33:58.484 12:02:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:58.484 12:02:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:58.484 12:02:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:58.484 12:02:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.484 12:02:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:58.484 12:02:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:33:58.484 12:02:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:33:58.484 12:02:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:33:58.484 12:02:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:33:58.484 12:02:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:58.484 12:02:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:33:58.484 12:02:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:58.484 12:02:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:33:58.484 12:02:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:33:58.744 12:02:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:59.682 [2024-11-18 12:02:25.374655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.683 [2024-11-18 12:02:25.374738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f4080 with addr=10.0.0.2, port=8010 00:33:59.683 [2024-11-18 12:02:25.374834] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:33:59.683 [2024-11-18 12:02:25.374861] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:59.683 [2024-11-18 12:02:25.374884] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:34:00.620 [2024-11-18 12:02:26.377176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.620 [2024-11-18 12:02:26.377253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f4300 with addr=10.0.0.2, port=8010 00:34:00.620 [2024-11-18 12:02:26.377325] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:34:00.620 [2024-11-18 12:02:26.377348] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:34:00.620 [2024-11-18 12:02:26.377368] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:34:01.554 [2024-11-18 12:02:27.379176] bdev_nvme.c:7427:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:34:01.554 request: 00:34:01.554 { 00:34:01.554 "name": "nvme_second", 00:34:01.554 "trtype": "tcp", 00:34:01.554 "traddr": "10.0.0.2", 00:34:01.554 "adrfam": "ipv4", 00:34:01.554 "trsvcid": "8010", 00:34:01.554 "hostnqn": "nqn.2021-12.io.spdk:test", 00:34:01.554 "wait_for_attach": false, 00:34:01.554 "attach_timeout_ms": 3000, 00:34:01.554 "method": "bdev_nvme_start_discovery", 00:34:01.554 "req_id": 
1 00:34:01.554 } 00:34:01.554 Got JSON-RPC error response 00:34:01.554 response: 00:34:01.554 { 00:34:01.554 "code": -110, 00:34:01.554 "message": "Connection timed out" 00:34:01.554 } 00:34:01.554 12:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:01.554 12:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:34:01.554 12:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:01.554 12:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:01.554 12:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:01.554 12:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:34:01.554 12:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:34:01.554 12:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:34:01.554 12:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.554 12:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:01.554 12:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:34:01.554 12:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:34:01.554 12:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.554 12:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:34:01.554 12:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:34:01.554 12:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 3100588 00:34:01.554 12:02:27 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:34:01.554 12:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:01.554 12:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:34:01.554 12:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:01.554 12:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:34:01.554 12:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:01.554 12:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:01.554 rmmod nvme_tcp 00:34:01.813 rmmod nvme_fabrics 00:34:01.813 rmmod nvme_keyring 00:34:01.813 12:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:01.813 12:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:34:01.813 12:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:34:01.813 12:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 3100438 ']' 00:34:01.813 12:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 3100438 00:34:01.813 12:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 3100438 ']' 00:34:01.813 12:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 3100438 00:34:01.813 12:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:34:01.813 12:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:01.813 12:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3100438 00:34:01.813 12:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:34:01.813 12:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:01.813 12:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3100438' 00:34:01.813 killing process with pid 3100438 00:34:01.813 12:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 3100438 00:34:01.813 12:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 3100438 00:34:03.203 12:02:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:03.203 12:02:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:03.203 12:02:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:03.203 12:02:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:34:03.203 12:02:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:34:03.203 12:02:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:03.203 12:02:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:34:03.203 12:02:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:03.203 12:02:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:03.203 12:02:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:03.203 12:02:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:03.203 12:02:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:05.112 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 
addr flush cvl_0_1 00:34:05.112 00:34:05.112 real 0m15.621s 00:34:05.112 user 0m23.056s 00:34:05.112 sys 0m3.206s 00:34:05.112 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:05.112 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:05.112 ************************************ 00:34:05.112 END TEST nvmf_host_discovery 00:34:05.112 ************************************ 00:34:05.112 12:02:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:34:05.112 12:02:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:05.112 12:02:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:05.112 12:02:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.112 ************************************ 00:34:05.112 START TEST nvmf_host_multipath_status 00:34:05.112 ************************************ 00:34:05.112 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:34:05.112 * Looking for test storage... 
00:34:05.112 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:05.112 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:05.112 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:34:05.112 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:05.112 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:05.112 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:05.112 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:05.112 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:05.112 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:34:05.112 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:34:05.112 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:34:05.112 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:34:05.112 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:34:05.112 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:34:05.112 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:34:05.112 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:05.112 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:34:05.112 12:02:30 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:34:05.112 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:05.112 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:05.112 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:34:05.112 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:34:05.112 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:05.112 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:34:05.112 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:34:05.112 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:34:05.112 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:34:05.112 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:05.112 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:34:05.112 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:34:05.112 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:05.112 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:05.112 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:34:05.112 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:05.112 12:02:30 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:05.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:05.112 --rc genhtml_branch_coverage=1 00:34:05.112 --rc genhtml_function_coverage=1 00:34:05.112 --rc genhtml_legend=1 00:34:05.112 --rc geninfo_all_blocks=1 00:34:05.112 --rc geninfo_unexecuted_blocks=1 00:34:05.112 00:34:05.112 ' 00:34:05.112 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:05.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:05.112 --rc genhtml_branch_coverage=1 00:34:05.112 --rc genhtml_function_coverage=1 00:34:05.112 --rc genhtml_legend=1 00:34:05.112 --rc geninfo_all_blocks=1 00:34:05.112 --rc geninfo_unexecuted_blocks=1 00:34:05.112 00:34:05.112 ' 00:34:05.112 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:05.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:05.112 --rc genhtml_branch_coverage=1 00:34:05.112 --rc genhtml_function_coverage=1 00:34:05.112 --rc genhtml_legend=1 00:34:05.112 --rc geninfo_all_blocks=1 00:34:05.112 --rc geninfo_unexecuted_blocks=1 00:34:05.112 00:34:05.112 ' 00:34:05.112 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:05.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:05.112 --rc genhtml_branch_coverage=1 00:34:05.112 --rc genhtml_function_coverage=1 00:34:05.112 --rc genhtml_legend=1 00:34:05.112 --rc geninfo_all_blocks=1 00:34:05.112 --rc geninfo_unexecuted_blocks=1 00:34:05.112 00:34:05.112 ' 00:34:05.112 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:05.112 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:34:05.112 
12:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:05.112 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:05.112 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:05.112 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:05.112 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:05.112 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:05.112 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:05.112 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:05.112 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:05.112 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:05.112 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:05.112 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:05.112 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:05.112 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:05.112 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:05.112 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:34:05.112 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:05.112 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:34:05.112 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:05.112 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:05.112 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:05.112 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:05.113 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:05.113 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:05.113 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:34:05.113 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:05.113 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:34:05.113 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:05.113 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:05.113 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:05.113 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:05.113 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:05.113 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:05.113 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:05.113 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:05.113 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:05.113 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:05.113 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 
00:34:05.113 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:34:05.113 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:05.113 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:34:05.113 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:34:05.113 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:34:05.113 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:34:05.113 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:05.113 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:05.113 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:05.113 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:05.113 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:05.113 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:05.113 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:05.113 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:05.113 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:05.113 12:02:30 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:05.113 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:34:05.113 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:07.018 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:07.018 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:34:07.018 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:07.018 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:07.018 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:07.018 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:07.018 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:07.018 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:34:07.018 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:07.018 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:34:07.018 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:34:07.018 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:34:07.018 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:34:07.018 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:34:07.018 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 
00:34:07.018 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:07.018 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:07.018 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:07.018 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:07.018 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:07.018 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:07.018 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:07.018 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:07.019 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:07.019 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:07.019 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:07.019 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:07.019 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:07.019 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:07.019 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 
== mlx5 ]] 00:34:07.019 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:07.019 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:07.019 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:07.019 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:07.019 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:07.019 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:07.019 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:07.019 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:07.019 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:07.019 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:07.019 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:07.019 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:07.019 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:07.019 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:07.019 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:07.019 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:07.019 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:07.019 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:07.019 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:07.019 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:07.019 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:07.019 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:07.019 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:07.019 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:07.019 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:07.019 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:07.019 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:07.019 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:07.019 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:07.019 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:07.019 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:07.019 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:07.019 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:07.019 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:07.019 12:02:32 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:07.019 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:07.019 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:07.019 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:07.019 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:07.019 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:07.019 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:07.019 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:07.019 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:07.019 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:34:07.019 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:07.019 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:07.019 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:07.019 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:07.019 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:07.019 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:07.019 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:07.019 12:02:32 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:07.019 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:07.019 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:07.019 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:07.019 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:07.019 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:07.019 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:07.019 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:07.019 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:07.019 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:07.019 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:07.278 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:07.278 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:07.278 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:07.278 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:07.278 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:07.278 12:02:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:07.278 12:02:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:07.278 12:02:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:07.278 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:07.278 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.310 ms 00:34:07.278 00:34:07.278 --- 10.0.0.2 ping statistics --- 00:34:07.278 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:07.278 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:34:07.278 12:02:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:07.278 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:07.278 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:34:07.278 00:34:07.278 --- 10.0.0.1 ping statistics --- 00:34:07.278 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:07.278 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:34:07.278 12:02:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:07.278 12:02:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:34:07.278 12:02:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:07.278 12:02:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:07.278 12:02:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:07.278 12:02:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:07.278 12:02:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:07.278 12:02:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:07.278 12:02:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:07.278 12:02:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:34:07.278 12:02:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:07.278 12:02:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:07.278 12:02:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:07.278 12:02:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=3103881 00:34:07.278 12:02:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:34:07.278 12:02:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 3103881 00:34:07.278 12:02:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 3103881 ']' 00:34:07.278 12:02:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:07.279 12:02:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:07.279 12:02:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:07.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:07.279 12:02:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:07.279 12:02:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:07.279 [2024-11-18 12:02:33.132122] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:34:07.279 [2024-11-18 12:02:33.132269] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:07.539 [2024-11-18 12:02:33.276226] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:07.539 [2024-11-18 12:02:33.396245] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:07.539 [2024-11-18 12:02:33.396340] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:34:07.539 [2024-11-18 12:02:33.396362] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:07.539 [2024-11-18 12:02:33.396382] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:07.539 [2024-11-18 12:02:33.396398] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:07.539 [2024-11-18 12:02:33.398788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:07.539 [2024-11-18 12:02:33.398791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:08.474 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:08.474 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:34:08.474 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:08.474 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:08.474 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:08.474 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:08.474 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3103881 00:34:08.474 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:08.732 [2024-11-18 12:02:34.458588] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:08.732 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 -b Malloc0 00:34:08.991 Malloc0 00:34:09.250 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:34:09.542 12:02:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:09.828 12:02:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:10.086 [2024-11-18 12:02:35.773270] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:10.086 12:02:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:34:10.344 [2024-11-18 12:02:36.066135] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:10.344 12:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3104299 00:34:10.344 12:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:34:10.344 12:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:34:10.344 12:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3104299 /var/tmp/bdevperf.sock 00:34:10.344 12:02:36 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 3104299 ']' 00:34:10.344 12:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:10.344 12:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:10.344 12:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:10.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:34:10.344 12:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:10.344 12:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:11.282 12:02:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:11.282 12:02:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:34:11.282 12:02:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:34:11.540 12:02:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:34:12.107 Nvme0n1 00:34:12.107 12:02:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:34:12.678 Nvme0n1 00:34:12.678 12:02:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:34:12.678 12:02:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:34:14.581 12:02:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:34:14.581 12:02:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:34:14.839 12:02:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:15.098 12:02:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:34:16.034 12:02:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:34:16.034 12:02:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:16.034 12:02:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:16.034 12:02:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:16.293 12:02:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:16.293 12:02:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:16.293 12:02:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:16.293 12:02:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:16.862 12:02:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:16.862 12:02:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:16.862 12:02:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:16.862 12:02:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:16.862 12:02:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:16.862 12:02:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:16.862 12:02:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:16.862 12:02:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:17.121 12:02:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:17.121 12:02:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:17.121 12:02:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:17.121 12:02:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:17.688 12:02:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:17.688 12:02:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:17.688 12:02:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:17.688 12:02:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:17.688 12:02:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:17.688 12:02:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:34:17.688 12:02:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:18.257 12:02:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:18.257 12:02:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:34:19.637 12:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:34:19.637 12:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:34:19.637 12:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:19.637 12:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:19.637 12:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:19.637 12:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:19.637 12:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:19.637 12:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:19.895 12:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:19.895 12:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:19.895 12:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:19.895 12:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:20.153 12:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:20.153 12:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:20.153 12:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:20.153 12:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:20.411 12:02:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:20.411 12:02:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:20.411 12:02:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:20.411 12:02:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:20.669 12:02:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:20.669 12:02:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:20.669 12:02:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:20.669 12:02:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:20.927 12:02:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:20.927 12:02:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:34:20.927 12:02:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:21.495 12:02:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:34:21.495 12:02:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:34:22.874 12:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:34:22.874 12:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:22.874 12:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:22.874 12:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:22.874 12:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:22.874 12:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:22.874 12:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:22.874 12:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:23.174 12:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:23.174 12:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:23.174 12:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:23.174 12:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:23.432 12:02:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:23.433 12:02:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:23.433 12:02:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:23.433 12:02:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:23.691 12:02:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:23.691 12:02:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:23.691 12:02:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:23.691 12:02:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:23.949 12:02:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:23.949 12:02:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:23.949 12:02:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:23.949 12:02:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:24.207 12:02:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:24.207 12:02:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:34:24.207 12:02:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:24.465 12:02:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:34:25.034 12:02:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:34:25.973 12:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:34:25.973 12:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:25.973 12:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:25.973 12:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:26.231 12:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:26.231 12:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:26.231 12:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:26.231 12:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:26.489 12:02:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:26.489 12:02:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:26.489 12:02:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:26.489 12:02:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:26.748 12:02:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:26.748 12:02:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:26.748 12:02:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:26.748 12:02:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:27.006 12:02:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:27.006 12:02:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:27.007 12:02:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:27.007 12:02:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:27.265 12:02:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:27.265 12:02:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:34:27.265 12:02:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:27.265 12:02:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:27.523 12:02:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:27.523 12:02:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:34:27.523 12:02:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:34:27.781 12:02:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:34:28.040 12:02:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:34:28.980 12:02:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:34:28.980 12:02:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:34:28.980 12:02:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:28.980 12:02:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:29.546 12:02:55 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:29.546 12:02:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:29.546 12:02:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:29.546 12:02:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:29.546 12:02:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:29.546 12:02:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:29.546 12:02:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:29.546 12:02:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:29.804 12:02:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:29.804 12:02:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:29.804 12:02:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:29.804 12:02:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:30.063 
12:02:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:30.063 12:02:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:34:30.063 12:02:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:30.063 12:02:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:30.630 12:02:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:30.630 12:02:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:34:30.630 12:02:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:30.630 12:02:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:30.630 12:02:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:30.630 12:02:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:34:30.630 12:02:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:34:30.889 12:02:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:31.148 12:02:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:34:32.528 12:02:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:34:32.528 12:02:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:34:32.528 12:02:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:32.528 12:02:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:32.528 12:02:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:32.528 12:02:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:32.528 12:02:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:32.528 12:02:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:32.786 12:02:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:32.786 12:02:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:32.786 12:02:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:32.786 12:02:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:33.044 12:02:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:33.044 12:02:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:33.044 12:02:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:33.044 12:02:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:33.302 12:02:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:33.302 12:02:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:34:33.302 12:02:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:33.302 12:02:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:33.559 12:02:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:33.559 12:02:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:33.559 12:02:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:33.559 12:02:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:33.817 12:02:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:33.817 12:02:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:34:34.075 12:02:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:34:34.075 12:02:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:34:34.642 12:03:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:34.900 12:03:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:34:35.834 12:03:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:34:35.834 12:03:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:35.834 12:03:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:34:35.834 12:03:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:36.092 12:03:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:36.092 12:03:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:36.092 12:03:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:36.093 12:03:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:36.351 12:03:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:36.351 12:03:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:36.351 12:03:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:36.351 12:03:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:36.609 12:03:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:36.609 12:03:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:36.609 12:03:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:34:36.609 12:03:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:36.867 12:03:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:36.867 12:03:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:36.867 12:03:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:36.867 12:03:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:37.434 12:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:37.434 12:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:37.434 12:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:37.434 12:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:37.434 12:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:37.434 12:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:34:37.434 12:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:37.692 12:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:38.261 12:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:34:39.199 12:03:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:34:39.199 12:03:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:34:39.199 12:03:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:39.199 12:03:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:39.458 12:03:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:39.458 12:03:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:39.458 12:03:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:39.458 12:03:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:39.716 12:03:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:39.717 12:03:05 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:39.717 12:03:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:39.717 12:03:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:39.975 12:03:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:39.975 12:03:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:39.975 12:03:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:39.975 12:03:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:40.233 12:03:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:40.233 12:03:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:40.233 12:03:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:40.233 12:03:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:40.525 12:03:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:40.525 
12:03:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:40.525 12:03:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:40.525 12:03:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:40.809 12:03:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:40.809 12:03:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:34:40.809 12:03:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:41.085 12:03:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:34:41.343 12:03:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:34:42.278 12:03:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:34:42.278 12:03:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:42.278 12:03:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:42.278 12:03:08 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:42.844 12:03:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:42.844 12:03:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:42.845 12:03:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:42.845 12:03:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:43.103 12:03:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:43.103 12:03:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:43.103 12:03:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:43.103 12:03:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:43.378 12:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:43.378 12:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:43.378 12:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:43.378 12:03:09 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:43.640 12:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:43.641 12:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:43.641 12:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:43.641 12:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:43.899 12:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:43.899 12:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:43.899 12:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:43.899 12:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:44.157 12:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:44.157 12:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:34:44.157 12:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:44.415 12:03:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:34:44.675 12:03:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:34:45.609 12:03:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:34:45.609 12:03:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:45.609 12:03:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:45.609 12:03:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:45.868 12:03:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:45.868 12:03:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:45.868 12:03:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:45.868 12:03:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:46.126 12:03:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:46.126 12:03:12 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:46.385 12:03:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:46.385 12:03:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:46.643 12:03:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:46.643 12:03:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:46.644 12:03:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:46.644 12:03:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:46.902 12:03:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:46.902 12:03:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:46.902 12:03:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:46.902 12:03:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:47.160 12:03:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:47.160 
12:03:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:34:47.160 12:03:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:47.160 12:03:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:47.433 12:03:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:47.433 12:03:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3104299 00:34:47.433 12:03:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 3104299 ']' 00:34:47.433 12:03:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 3104299 00:34:47.433 12:03:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:34:47.433 12:03:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:47.433 12:03:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3104299 00:34:47.433 12:03:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:34:47.433 12:03:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:34:47.433 12:03:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3104299' 00:34:47.433 killing process with pid 3104299 00:34:47.433 12:03:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 3104299 00:34:47.433 
12:03:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 3104299 00:34:47.433 { 00:34:47.433 "results": [ 00:34:47.433 { 00:34:47.433 "job": "Nvme0n1", 00:34:47.433 "core_mask": "0x4", 00:34:47.433 "workload": "verify", 00:34:47.433 "status": "terminated", 00:34:47.433 "verify_range": { 00:34:47.433 "start": 0, 00:34:47.433 "length": 16384 00:34:47.433 }, 00:34:47.433 "queue_depth": 128, 00:34:47.433 "io_size": 4096, 00:34:47.433 "runtime": 34.641216, 00:34:47.433 "iops": 5899.792894106257, 00:34:47.433 "mibps": 23.046065992602568, 00:34:47.433 "io_failed": 0, 00:34:47.433 "io_timeout": 0, 00:34:47.433 "avg_latency_us": 21657.461357410957, 00:34:47.433 "min_latency_us": 485.45185185185187, 00:34:47.433 "max_latency_us": 4026531.84 00:34:47.433 } 00:34:47.433 ], 00:34:47.433 "core_count": 1 00:34:47.433 } 00:34:48.373 12:03:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3104299 00:34:48.373 12:03:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:34:48.373 [2024-11-18 12:02:36.169974] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:34:48.373 [2024-11-18 12:02:36.170135] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3104299 ] 00:34:48.373 [2024-11-18 12:02:36.304660] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:48.373 [2024-11-18 12:02:36.427100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:48.373 Running I/O for 90 seconds... 
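The terminated bdevperf job above reports `"iops": 5899.79…`, `"io_size": 4096` and `"runtime": 34.64…` alongside a derived `"mibps"` field. As a quick sanity check on those numbers (an illustrative sketch only, not part of the SPDK test harness), the throughput figure follows directly from IOPS × bytes-per-I/O:

```python
# Illustrative sketch: recompute bdevperf's derived throughput from the raw
# fields in the "results" JSON printed above (not part of the test harness).
result = {
    "iops": 5899.792894106257,
    "io_size": 4096,        # bytes per I/O, from the results JSON
    "runtime": 34.641216,   # seconds, from the results JSON
}

# MiB/s = IOPS * bytes-per-I/O / 2^20; should match the reported "mibps".
mibps = result["iops"] * result["io_size"] / (1 << 20)
print(f"{mibps:.6f} MiB/s")  # ≈ 23.046066, matching "mibps" in the log

# Approximate total I/Os completed over the whole run.
total_ios = result["iops"] * result["runtime"]
print(f"~{total_ios:.0f} I/Os over {result['runtime']:.2f} s")
```

With a 4 KiB `io_size`, the conversion reduces to IOPS / 256, which is why the reported 5899.79 IOPS and 23.046 MiB/s are consistent.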
00:34:48.373 6317.00 IOPS, 24.68 MiB/s [2024-11-18T11:03:14.258Z] 6250.00 IOPS, 24.41 MiB/s [2024-11-18T11:03:14.258Z] 6309.33 IOPS, 24.65 MiB/s [2024-11-18T11:03:14.258Z] 6314.25 IOPS, 24.67 MiB/s [2024-11-18T11:03:14.258Z] 6312.80 IOPS, 24.66 MiB/s [2024-11-18T11:03:14.258Z] 6276.00 IOPS, 24.52 MiB/s [2024-11-18T11:03:14.258Z] 6276.00 IOPS, 24.52 MiB/s [2024-11-18T11:03:14.258Z] 6257.12 IOPS, 24.44 MiB/s [2024-11-18T11:03:14.258Z] 6253.78 IOPS, 24.43 MiB/s [2024-11-18T11:03:14.258Z] 6247.70 IOPS, 24.41 MiB/s [2024-11-18T11:03:14.258Z] 6241.82 IOPS, 24.38 MiB/s [2024-11-18T11:03:14.258Z] 6246.17 IOPS, 24.40 MiB/s [2024-11-18T11:03:14.258Z] 6237.69 IOPS, 24.37 MiB/s [2024-11-18T11:03:14.258Z] 6246.79 IOPS, 24.40 MiB/s [2024-11-18T11:03:14.258Z] 6234.73 IOPS, 24.35 MiB/s [2024-11-18T11:03:14.258Z] [2024-11-18 12:02:53.546231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:95920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.373 [2024-11-18 12:02:53.546332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:48.373 [2024-11-18 12:02:53.546514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:95936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.373 [2024-11-18 12:02:53.546559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:48.373 [2024-11-18 12:02:53.546621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:95944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.373 [2024-11-18 12:02:53.546663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:48.373 [2024-11-18 12:02:53.546725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 
nsid:1 lba:95952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.373 [2024-11-18 12:02:53.546764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:48.373 [2024-11-18 12:02:53.546842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:95960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.373 [2024-11-18 12:02:53.546879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:48.373 [2024-11-18 12:02:53.546938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:95968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.373 [2024-11-18 12:02:53.546976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:48.373 [2024-11-18 12:02:53.547033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:95976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.373 [2024-11-18 12:02:53.547072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:48.373 [2024-11-18 12:02:53.547129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:95984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.373 [2024-11-18 12:02:53.547168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:48.373 [2024-11-18 12:02:53.547229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:95992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.373 [2024-11-18 12:02:53.547268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:4 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:48.373 [2024-11-18 12:02:53.547343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:96000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.373 [2024-11-18 12:02:53.547382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:48.373 [2024-11-18 12:02:53.547441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:96008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.373 [2024-11-18 12:02:53.547503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:48.373 [2024-11-18 12:02:53.547567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:96016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.373 [2024-11-18 12:02:53.547607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:48.373 [2024-11-18 12:02:53.547669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:96024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.373 [2024-11-18 12:02:53.547708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:48.373 [2024-11-18 12:02:53.547768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:96032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.373 [2024-11-18 12:02:53.547823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:48.373 [2024-11-18 12:02:53.547882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:96040 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:34:48.373 [2024-11-18 12:02:53.547920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.373 [2024-11-18 12:02:53.547977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:96048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.373 [2024-11-18 12:02:53.548015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:48.373 [2024-11-18 12:02:53.548073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:96056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.373 [2024-11-18 12:02:53.548112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:48.373 [2024-11-18 12:02:53.548171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:96064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.373 [2024-11-18 12:02:53.548210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:48.373 [2024-11-18 12:02:53.548269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:96072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.373 [2024-11-18 12:02:53.548307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:48.373 [2024-11-18 12:02:53.548366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:96080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.373 [2024-11-18 12:02:53.548404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0006 p:0 
m:0 dnr:0 00:34:48.373 [2024-11-18 12:02:53.548463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:96088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.373 [2024-11-18 12:02:53.548528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:48.373 [2024-11-18 12:02:53.548592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:96096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.373 [2024-11-18 12:02:53.548638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:48.373 [2024-11-18 12:02:53.548701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:96104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.373 [2024-11-18 12:02:53.548741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:48.373 [2024-11-18 12:02:53.548816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:96112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.373 [2024-11-18 12:02:53.548855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:48.373 [2024-11-18 12:02:53.548912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:96120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.373 [2024-11-18 12:02:53.548950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:48.373 [2024-11-18 12:02:53.549011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:96128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:48.373 [2024-11-18 12:02:53.549048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:48.373 [2024-11-18 12:02:53.549106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:96136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.373 [2024-11-18 12:02:53.549144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:48.373 [2024-11-18 12:02:53.549204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:96144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.373 [2024-11-18 12:02:53.549244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:48.373 [2024-11-18 12:02:53.550396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:95928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.373 [2024-11-18 12:02:53.550443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:48.373 [2024-11-18 12:02:53.550537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:96152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.373 [2024-11-18 12:02:53.550581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:48.373 [2024-11-18 12:02:53.550646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:96160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.373 [2024-11-18 12:02:53.550688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:48.373 
[2024-11-18 12:02:53.550751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:96168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.374 [2024-11-18 12:02:53.550790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:48.374 [2024-11-18 12:02:53.550867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:96176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.374 [2024-11-18 12:02:53.550906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:48.374 [2024-11-18 12:02:53.550966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:96184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.374 [2024-11-18 12:02:53.551011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:48.374 [2024-11-18 12:02:53.551073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:96192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.374 [2024-11-18 12:02:53.551111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:48.374 [2024-11-18 12:02:53.551173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:96200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.374 [2024-11-18 12:02:53.551210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:48.374 [2024-11-18 12:02:53.551273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:96208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.374 [2024-11-18 
12:02:53.551311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:48.374 [2024-11-18 12:02:53.551374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:96216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.374 [2024-11-18 12:02:53.551414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:48.374 [2024-11-18 12:02:53.551498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:96224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.374 [2024-11-18 12:02:53.551539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:48.374 [2024-11-18 12:02:53.551602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:96232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.374 [2024-11-18 12:02:53.551642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:48.374 [2024-11-18 12:02:53.551705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:96240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.374 [2024-11-18 12:02:53.551744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:48.374 [2024-11-18 12:02:53.551821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:96248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.374 [2024-11-18 12:02:53.551860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:48.374 [2024-11-18 12:02:53.551920] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:96256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.374 [2024-11-18 12:02:53.551958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:48.374 [2024-11-18 12:02:53.552019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:96264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.374 [2024-11-18 12:02:53.552058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:48.374 [2024-11-18 12:02:53.552117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:96272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.374 [2024-11-18 12:02:53.552157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:48.374 [2024-11-18 12:02:53.552218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:96280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.374 [2024-11-18 12:02:53.552272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:48.374 [2024-11-18 12:02:53.552342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:96288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.374 [2024-11-18 12:02:53.552381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:48.374 [2024-11-18 12:02:53.552446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:96296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.374 [2024-11-18 12:02:53.552485] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:48.374 [2024-11-18 12:02:53.552562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:96304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.374 [2024-11-18 12:02:53.552602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:48.374 [2024-11-18 12:02:53.552666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:96312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.374 [2024-11-18 12:02:53.552708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:48.374 [2024-11-18 12:02:53.552770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:96320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.374 [2024-11-18 12:02:53.552824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:48.374 [2024-11-18 12:02:53.552886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:96328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.374 [2024-11-18 12:02:53.552923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:48.374 [2024-11-18 12:02:53.552983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:96336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.374 [2024-11-18 12:02:53.553022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:48.374 [2024-11-18 12:02:53.553083] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:96344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.374 [2024-11-18 12:02:53.553120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:48.374 [2024-11-18 12:02:53.553196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:96352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.374 [2024-11-18 12:02:53.553234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:48.374 [2024-11-18 12:02:53.553297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:96360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.374 [2024-11-18 12:02:53.553338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:48.374 [2024-11-18 12:02:53.553399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:96368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.374 [2024-11-18 12:02:53.553439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:48.374 [2024-11-18 12:02:53.553513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:96376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.374 [2024-11-18 12:02:53.553554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:48.374 [2024-11-18 12:02:53.553623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:96384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.374 [2024-11-18 12:02:53.553662] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:48.374 [2024-11-18 12:02:53.553725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:96392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.374 [2024-11-18 12:02:53.553764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:48.374 [2024-11-18 12:02:53.553841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:96400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.374 [2024-11-18 12:02:53.553877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:48.374 [2024-11-18 12:02:53.553939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:96408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.374 [2024-11-18 12:02:53.553978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:48.374 [2024-11-18 12:02:53.554039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:96416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.374 [2024-11-18 12:02:53.554077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:48.374 [2024-11-18 12:02:53.554138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.374 [2024-11-18 12:02:53.554192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:48.374 [2024-11-18 12:02:53.554253] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:96432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.374 [2024-11-18 12:02:53.554291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:48.374 [2024-11-18 12:02:53.554351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:96440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.374 [2024-11-18 12:02:53.554389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:48.374 [2024-11-18 12:02:53.554451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:96448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.374 [2024-11-18 12:02:53.554512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:48.374 [2024-11-18 12:02:53.554576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:96456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.374 [2024-11-18 12:02:53.554616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:48.374 [2024-11-18 12:02:53.554678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:96464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.374 [2024-11-18 12:02:53.554717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:48.374 [2024-11-18 12:02:53.554794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:96472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.374 [2024-11-18 12:02:53.554831] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:48.375 [2024-11-18 12:02:53.554891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:96480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.375 [2024-11-18 12:02:53.554935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:48.375 [2024-11-18 12:02:53.554995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:96488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.375 [2024-11-18 12:02:53.555032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:48.375 [2024-11-18 12:02:53.555095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:96496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.375 [2024-11-18 12:02:53.555131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:48.375 [2024-11-18 12:02:53.555192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:96504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.375 [2024-11-18 12:02:53.555230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:48.375 [2024-11-18 12:02:53.555291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:96512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.375 [2024-11-18 12:02:53.555331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:48.375 [2024-11-18 12:02:53.555586] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:96520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.375 [2024-11-18 12:02:53.555629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:48.375 [2024-11-18 12:02:53.555701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:96528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.375 [2024-11-18 12:02:53.555741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:48.375 [2024-11-18 12:02:53.555808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:96536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.375 [2024-11-18 12:02:53.555848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:48.375 [2024-11-18 12:02:53.555928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:96544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.375 [2024-11-18 12:02:53.555967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:48.375 [2024-11-18 12:02:53.556030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:96552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.375 [2024-11-18 12:02:53.556071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:48.375 [2024-11-18 12:02:53.556134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:96560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.375 [2024-11-18 12:02:53.556173] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:48.375 [2024-11-18 12:02:53.556238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:96568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.375 [2024-11-18 12:02:53.556276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:48.375 [2024-11-18 12:02:53.556340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:96576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.375 [2024-11-18 12:02:53.556383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:48.375 [2024-11-18 12:02:53.556448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:96584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.375 [2024-11-18 12:02:53.556509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:48.375 [2024-11-18 12:02:53.556576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:96592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.375 [2024-11-18 12:02:53.556616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:48.375 [2024-11-18 12:02:53.556683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:96600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.375 [2024-11-18 12:02:53.556721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:48.375 [2024-11-18 12:02:53.556788] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:96608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.375 [2024-11-18 12:02:53.556842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:48.375 [2024-11-18 12:02:53.556906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:96616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.375 [2024-11-18 12:02:53.556946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:48.375 [2024-11-18 12:02:53.557009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:96624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.375 [2024-11-18 12:02:53.557047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:48.375 [2024-11-18 12:02:53.557111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:96632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.375 [2024-11-18 12:02:53.557150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:48.375 [2024-11-18 12:02:53.557213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:96640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.375 [2024-11-18 12:02:53.557251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:48.375 [2024-11-18 12:02:53.557314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:96648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.375 [2024-11-18 12:02:53.557352] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:48.375 [2024-11-18 12:02:53.557416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:96656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.375 [2024-11-18 12:02:53.557453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:48.375 [2024-11-18 12:02:53.557541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:96664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.375 [2024-11-18 12:02:53.557582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:48.375 [2024-11-18 12:02:53.557648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:96672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.375 [2024-11-18 12:02:53.557687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:48.375 [2024-11-18 12:02:53.557761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:96680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.375 [2024-11-18 12:02:53.557816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:48.375 [2024-11-18 12:02:53.557880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:96688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.375 [2024-11-18 12:02:53.557918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:48.375 [2024-11-18 12:02:53.557982] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:96696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.375 [2024-11-18 12:02:53.558021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:48.375 [2024-11-18 12:02:53.558085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:96704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.375 [2024-11-18 12:02:53.558123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:48.375 [2024-11-18 12:02:53.558187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:96712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.375 [2024-11-18 12:02:53.558225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:48.375 [2024-11-18 12:02:53.558289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:96720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.375 [2024-11-18 12:02:53.558329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:48.375 [2024-11-18 12:02:53.558392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:96728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.375 [2024-11-18 12:02:53.558431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:48.375 [2024-11-18 12:02:53.558518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:96736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.375 [2024-11-18 12:02:53.558558] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:48.375 [2024-11-18 12:02:53.558624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:96744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.375 [2024-11-18 12:02:53.558664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:48.375 [2024-11-18 12:02:53.558731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:96752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.375 [2024-11-18 12:02:53.558769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:48.375 [2024-11-18 12:02:53.558848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:96760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.375 [2024-11-18 12:02:53.558886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:48.375 [2024-11-18 12:02:53.558952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:96768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.375 [2024-11-18 12:02:53.558989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:48.375 [2024-11-18 12:02:53.559058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:96776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.375 [2024-11-18 12:02:53.559095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:48.375 [2024-11-18 12:02:53.559160] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:96784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.376 [2024-11-18 12:02:53.559199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:48.376 [2024-11-18 12:02:53.559263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:96792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.376 [2024-11-18 12:02:53.559302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:48.376 [2024-11-18 12:02:53.559365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:96800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.376 [2024-11-18 12:02:53.559403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:48.376 [2024-11-18 12:02:53.559467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:96808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.376 [2024-11-18 12:02:53.559531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:48.376 [2024-11-18 12:02:53.559596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:96816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.376 [2024-11-18 12:02:53.559635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:48.376 [2024-11-18 12:02:53.559700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:96824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.376 [2024-11-18 12:02:53.559738] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:48.376 [2024-11-18 12:02:53.559818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:96832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.376 [2024-11-18 12:02:53.559855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:48.376 [2024-11-18 12:02:53.559918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:96840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.376 [2024-11-18 12:02:53.559955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:48.376 [2024-11-18 12:02:53.560019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:96848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.376 [2024-11-18 12:02:53.560059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:48.376 [2024-11-18 12:02:53.560123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:96856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.376 [2024-11-18 12:02:53.560161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:48.376 [2024-11-18 12:02:53.560225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:96864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.376 [2024-11-18 12:02:53.560264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:48.376 [2024-11-18 12:02:53.560328] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:96872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.376 [2024-11-18 12:02:53.560377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:48.376 [2024-11-18 12:02:53.560441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:96880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.376 [2024-11-18 12:02:53.560505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:48.376 [2024-11-18 12:02:53.560574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:96888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.376 [2024-11-18 12:02:53.560615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:48.376 [2024-11-18 12:02:53.560680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:96896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.376 [2024-11-18 12:02:53.560721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:48.376 [2024-11-18 12:02:53.560787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:96904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.376 [2024-11-18 12:02:53.560839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:48.376 [2024-11-18 12:02:53.560904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:96912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.376 [2024-11-18 12:02:53.560942] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:48.376 [2024-11-18 12:02:53.561006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:96920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.376 [2024-11-18 12:02:53.561043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:48.376 [2024-11-18 12:02:53.561109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:96928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.376 [2024-11-18 12:02:53.561145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:48.376 [2024-11-18 12:02:53.561211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:96936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:48.376 [2024-11-18 12:02:53.561263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:48.376 5869.38 IOPS, 22.93 MiB/s [2024-11-18T11:03:14.261Z] 5524.12 IOPS, 21.58 MiB/s [2024-11-18T11:03:14.261Z] 5217.22 IOPS, 20.38 MiB/s [2024-11-18T11:03:14.261Z] 4942.63 IOPS, 19.31 MiB/s [2024-11-18T11:03:14.261Z] 4975.45 IOPS, 19.44 MiB/s [2024-11-18T11:03:14.261Z] 5038.67 IOPS, 19.68 MiB/s [2024-11-18T11:03:14.261Z] 5101.91 IOPS, 19.93 MiB/s [2024-11-18T11:03:14.261Z] 5245.91 IOPS, 20.49 MiB/s [2024-11-18T11:03:14.261Z] 5384.21 IOPS, 21.03 MiB/s [2024-11-18T11:03:14.261Z] 5513.80 IOPS, 21.54 MiB/s [2024-11-18T11:03:14.261Z] 5545.81 IOPS, 21.66 MiB/s [2024-11-18T11:03:14.261Z] 5565.00 IOPS, 21.74 MiB/s [2024-11-18T11:03:14.261Z] 5584.29 IOPS, 21.81 MiB/s [2024-11-18T11:03:14.261Z] 5629.38 IOPS, 21.99 MiB/s [2024-11-18T11:03:14.261Z] 5719.87 IOPS, 22.34 MiB/s 
[2024-11-18T11:03:14.261Z] 5800.39 IOPS, 22.66 MiB/s
[2024-11-18 12:03:10.444431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:56592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-11-18 12:03:10.444559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007d p:0 m:0 dnr:0
[... further nvme_io_qpair_print_command/spdk_nvme_print_completion notice pairs trimmed: READ and WRITE commands on sqid:1, every completion reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd:007d through sqhd:003c ...]
5871.19 IOPS, 22.93 MiB/s
[2024-11-18T11:03:14.263Z] 5884.67 IOPS, 22.99 MiB/s
[2024-11-18T11:03:14.263Z] 5897.47 IOPS, 23.04 MiB/s
[2024-11-18T11:03:14.263Z] Received shutdown signal, test time was about 34.642023 seconds

                                        Latency(us)
Device Information : runtime(s)    IOPS   MiB/s  Fail/s  TO/s   Average      min         max
Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
  Verification LBA range: start 0x0 length 0x4000
  Nvme0n1 :            34.64   5899.79   23.05    0.00   0.00  21657.46   485.45  4026531.84
============================================================================================
  Total   :                    5899.79   23.05    0.00   0.00  21657.46   485.45  4026531.84

00:34:48.378 12:03:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:34:48.378 12:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:34:48.378 12:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:34:48.378 12:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:34:48.378 12:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup
00:34:48.378 12:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
00:34:48.378 12:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:34:48.378 12:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
00:34:48.378 12:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20}
00:34:48.378 12:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:34:48.636 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
12:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
12:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e
12:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0
12:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 3103881 ']'
12:03:14
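The completion notices above all print the same status-code-type/status-code pair, "(03/02)". A tiny helper makes such pairs greppable when triaging a log like this one; `decode_status` and its table are our own sketch (not an SPDK or nvme-cli helper), covering only the NVMe path-related status codes (SCT 03h) relevant here:

```shell
#!/usr/bin/env bash
# decode_status SCT SC -> human-readable name for the "(SCT/SC)" pair printed
# by spdk_nvme_print_completion. Function name and table are ours; only the
# path-related codes (SCT 03h) that can appear during ANA multipath tests
# are covered.
decode_status() {
    case "$1/$2" in
        03/00) echo "INTERNAL PATH ERROR" ;;
        03/01) echo "ASYMMETRIC ACCESS PERSISTENT LOSS" ;;
        03/02) echo "ASYMMETRIC ACCESS INACCESSIBLE" ;;
        03/03) echo "ASYMMETRIC ACCESS TRANSITION" ;;
        *)     echo "unknown status ($1/$2)" ;;
    esac
}

decode_status 03 02
```

The "(03/02)" flood is expected for this test: the multipath_status test deliberately drives a path through the ANA Inaccessible state, so queued I/O on that path completes with this status until the other path takes over.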
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 3103881
00:34:48.636 12:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 3103881 ']'
00:34:48.636 12:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 3103881
00:34:48.636 12:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname
00:34:48.636 12:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:34:48.636 12:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3103881
00:34:48.636 12:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:34:48.636 12:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:34:48.636 12:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3103881'
00:34:48.636 killing process with pid 3103881
12:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 3103881
00:34:48.636 12:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 3103881
00:34:50.012 12:03:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:34:50.012 12:03:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:34:50.012 12:03:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:34:50.012 12:03:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr
00:34:50.012 12:03:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save
00:34:50.012 12:03:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:34:50.012 12:03:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore
00:34:50.012 12:03:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:34:50.012 12:03:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns
00:34:50.012 12:03:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:34:50.012 12:03:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:34:50.012 12:03:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:34:51.916 12:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:34:51.916
00:34:51.916 real 0m46.948s
00:34:51.916 user 2m21.919s
00:34:51.916 sys 0m10.359s
00:34:51.916 12:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable
00:34:51.916 12:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:34:51.916 ************************************
00:34:51.916 END TEST nvmf_host_multipath_status
00:34:51.916 ************************************
00:34:51.916 12:03:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:34:51.916 12:03:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:34:51.916 12:03:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:34:51.916 12:03:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:34:51.916 ************************************
00:34:51.916 START TEST nvmf_discovery_remove_ifc 00:34:51.916 ************************************ 00:34:51.916 12:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:34:51.916 * Looking for test storage... 00:34:51.916 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:51.916 12:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:51.916 12:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:34:51.916 12:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:52.174 12:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:52.174 12:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:52.174 12:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:52.174 12:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:52.174 12:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:34:52.174 12:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:34:52.174 12:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:34:52.174 12:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:34:52.174 12:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:34:52.174 12:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:34:52.174 12:03:17 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:34:52.174 12:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:52.174 12:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:34:52.174 12:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:34:52.174 12:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:52.174 12:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:52.174 12:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:34:52.174 12:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:34:52.175 12:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:52.175 12:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:34:52.175 12:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:34:52.175 12:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:34:52.175 12:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:34:52.175 12:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:52.175 12:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:34:52.175 12:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:34:52.175 12:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:52.175 12:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 
00:34:52.175 12:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:34:52.175 12:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:52.175 12:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:52.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:52.175 --rc genhtml_branch_coverage=1 00:34:52.175 --rc genhtml_function_coverage=1 00:34:52.175 --rc genhtml_legend=1 00:34:52.175 --rc geninfo_all_blocks=1 00:34:52.175 --rc geninfo_unexecuted_blocks=1 00:34:52.175 00:34:52.175 ' 00:34:52.175 12:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:52.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:52.175 --rc genhtml_branch_coverage=1 00:34:52.175 --rc genhtml_function_coverage=1 00:34:52.175 --rc genhtml_legend=1 00:34:52.175 --rc geninfo_all_blocks=1 00:34:52.175 --rc geninfo_unexecuted_blocks=1 00:34:52.175 00:34:52.175 ' 00:34:52.175 12:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:52.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:52.175 --rc genhtml_branch_coverage=1 00:34:52.175 --rc genhtml_function_coverage=1 00:34:52.175 --rc genhtml_legend=1 00:34:52.175 --rc geninfo_all_blocks=1 00:34:52.175 --rc geninfo_unexecuted_blocks=1 00:34:52.175 00:34:52.175 ' 00:34:52.175 12:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:52.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:52.175 --rc genhtml_branch_coverage=1 00:34:52.175 --rc genhtml_function_coverage=1 00:34:52.175 --rc genhtml_legend=1 00:34:52.175 --rc geninfo_all_blocks=1 00:34:52.175 --rc geninfo_unexecuted_blocks=1 00:34:52.175 
00:34:52.175 ' 00:34:52.175 12:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:52.175 12:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:34:52.175 12:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:52.175 12:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:52.175 12:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:52.175 12:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:52.175 12:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:52.175 12:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:52.175 12:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:52.175 12:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:52.175 12:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:52.175 12:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:52.175 12:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:52.175 12:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:52.175 12:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:52.175 12:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:52.175 12:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:52.175 12:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:52.175 12:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:52.175 12:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:34:52.175 12:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:52.175 12:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:52.175 12:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:52.175 12:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:52.175 12:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:52.175 12:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:52.175 12:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:34:52.175 12:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:52.175 12:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:34:52.175 12:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:52.175 12:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:52.175 12:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:52.175 12:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:52.175 12:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:52.175 12:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:52.175 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:52.175 12:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:52.175 12:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:52.175 12:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:52.175 12:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:34:52.175 
12:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:34:52.175 12:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:34:52.175 12:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:34:52.175 12:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:34:52.175 12:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:34:52.175 12:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:34:52.175 12:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:52.175 12:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:52.175 12:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:52.175 12:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:52.175 12:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:52.175 12:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:52.175 12:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:52.175 12:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:52.175 12:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:52.175 12:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:34:52.175 12:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:34:52.175 12:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:54.075 12:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:54.075 12:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:34:54.075 12:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:54.075 12:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:54.075 12:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:54.075 12:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:54.075 12:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:54.075 12:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:34:54.075 12:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:54.075 12:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:34:54.075 12:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:34:54.075 12:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:34:54.075 12:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:34:54.075 12:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:34:54.075 12:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:34:54.075 12:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:54.075 12:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:54.075 12:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:54.075 12:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:54.075 12:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:54.075 12:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:54.075 12:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:54.075 12:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:54.075 12:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:54.075 12:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:54.075 12:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:54.075 12:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:54.075 12:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:54.075 12:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:54.075 12:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:54.075 12:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:34:54.075 12:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:54.075 12:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:54.075 12:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:54.075 12:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:54.075 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:54.075 12:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:54.075 12:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:54.075 12:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:54.075 12:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:54.075 12:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:54.075 12:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:54.075 12:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:54.075 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:54.075 12:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:54.075 12:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:54.075 12:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:54.075 12:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:54.075 12:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:54.075 12:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:54.075 12:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:54.075 12:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:54.075 12:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:54.075 12:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:54.075 12:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:54.076 12:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:54.076 12:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:54.076 12:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:54.076 12:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:54.076 12:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:54.076 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:54.076 12:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:54.076 12:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:54.076 12:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:54.076 12:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:54.076 12:03:19 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:54.076 12:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:54.076 12:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:54.076 12:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:54.076 12:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:54.076 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:54.076 12:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:54.076 12:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:54.076 12:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:34:54.076 12:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:54.076 12:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:54.076 12:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:54.076 12:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:54.076 12:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:54.076 12:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:54.076 12:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:54.076 12:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:54.076 12:03:19 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:54.076 12:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:54.076 12:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:54.076 12:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:54.076 12:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:54.076 12:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:54.076 12:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:54.076 12:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:54.076 12:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:54.076 12:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:54.076 12:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:54.076 12:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:54.076 12:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:54.076 12:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:54.334 12:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:54.334 12:03:19 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:54.334 12:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:54.334 12:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:54.334 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:54.334 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms 00:34:54.334 00:34:54.334 --- 10.0.0.2 ping statistics --- 00:34:54.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:54.334 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:34:54.334 12:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:54.334 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:54.334 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:34:54.334 00:34:54.334 --- 10.0.0.1 ping statistics --- 00:34:54.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:54.334 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:34:54.334 12:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:54.334 12:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:34:54.334 12:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:54.334 12:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:54.334 12:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:54.334 12:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:54.334 12:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:54.334 12:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:54.334 12:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:54.334 12:03:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:34:54.334 12:03:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:54.334 12:03:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:54.334 12:03:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:54.334 12:03:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=3111555 00:34:54.334 12:03:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:34:54.334 12:03:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 3111555 00:34:54.334 12:03:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 3111555 ']' 00:34:54.334 12:03:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:54.334 12:03:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:54.334 12:03:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:54.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:54.334 12:03:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:54.334 12:03:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:54.334 [2024-11-18 12:03:20.104497] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:34:54.334 [2024-11-18 12:03:20.104653] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:54.592 [2024-11-18 12:03:20.260255] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:54.592 [2024-11-18 12:03:20.396287] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:54.592 [2024-11-18 12:03:20.396374] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:34:54.592 [2024-11-18 12:03:20.396414] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:54.592 [2024-11-18 12:03:20.396451] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:54.592 [2024-11-18 12:03:20.396483] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:54.592 [2024-11-18 12:03:20.398222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:55.525 12:03:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:55.525 12:03:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:34:55.525 12:03:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:55.525 12:03:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:55.525 12:03:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:55.525 12:03:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:55.525 12:03:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:34:55.525 12:03:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.525 12:03:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:55.525 [2024-11-18 12:03:21.118813] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:55.525 [2024-11-18 12:03:21.127040] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:34:55.525 null0 00:34:55.525 [2024-11-18 12:03:21.158971] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 
4420 *** 00:34:55.525 12:03:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.525 12:03:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=3111727 00:34:55.525 12:03:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3111727 /tmp/host.sock 00:34:55.525 12:03:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:34:55.525 12:03:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 3111727 ']' 00:34:55.525 12:03:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:34:55.525 12:03:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:55.525 12:03:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:34:55.525 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:34:55.525 12:03:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:55.525 12:03:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:55.525 [2024-11-18 12:03:21.274642] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:34:55.525 [2024-11-18 12:03:21.274807] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3111727 ] 00:34:55.525 [2024-11-18 12:03:21.410277] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:55.784 [2024-11-18 12:03:21.534378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:56.718 12:03:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:56.718 12:03:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:34:56.718 12:03:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:56.718 12:03:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:34:56.718 12:03:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.718 12:03:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:56.718 12:03:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.718 12:03:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:34:56.718 12:03:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.718 12:03:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:56.976 12:03:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.976 12:03:22 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:34:56.976 12:03:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.976 12:03:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:57.910 [2024-11-18 12:03:23.691692] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:34:57.910 [2024-11-18 12:03:23.691737] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:34:57.910 [2024-11-18 12:03:23.691791] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:57.910 [2024-11-18 12:03:23.778134] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:34:58.168 [2024-11-18 12:03:23.879318] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:34:58.168 [2024-11-18 12:03:23.881034] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x6150001f2c80:1 started. 
00:34:58.168 [2024-11-18 12:03:23.883401] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:34:58.168 [2024-11-18 12:03:23.883503] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:34:58.168 [2024-11-18 12:03:23.883602] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:34:58.168 [2024-11-18 12:03:23.883638] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:34:58.168 [2024-11-18 12:03:23.883683] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:34:58.168 12:03:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.168 12:03:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:34:58.168 12:03:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:58.168 12:03:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:58.168 12:03:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.168 12:03:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:58.168 12:03:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:58.168 12:03:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:58.168 12:03:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:58.168 [2024-11-18 12:03:23.890025] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x6150001f2c80 was disconnected and freed. delete nvme_qpair. 
00:34:58.168 12:03:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.168 12:03:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:34:58.168 12:03:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:34:58.168 12:03:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:34:58.168 12:03:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:34:58.168 12:03:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:58.168 12:03:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:58.168 12:03:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:58.168 12:03:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.168 12:03:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:58.168 12:03:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:58.168 12:03:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:58.168 12:03:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.168 12:03:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:58.168 12:03:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:59.541 12:03:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:59.541 12:03:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:59.541 12:03:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.541 12:03:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:59.541 12:03:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:59.541 12:03:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:59.541 12:03:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:59.541 12:03:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.541 12:03:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:59.541 12:03:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:00.475 12:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:00.475 12:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:00.475 12:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.475 12:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:00.475 12:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:00.475 12:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:00.475 12:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 
00:35:00.475 12:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.475 12:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:00.476 12:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:01.409 12:03:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:01.409 12:03:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:01.409 12:03:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:01.409 12:03:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.409 12:03:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:01.409 12:03:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:01.409 12:03:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:01.409 12:03:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.409 12:03:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:01.409 12:03:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:02.343 12:03:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:02.343 12:03:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:02.343 12:03:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:02.343 12:03:28 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.343 12:03:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:02.343 12:03:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:02.343 12:03:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:02.343 12:03:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.343 12:03:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:02.343 12:03:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:03.716 12:03:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:03.716 12:03:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:03.716 12:03:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:03.716 12:03:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.716 12:03:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:03.716 12:03:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:03.716 12:03:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:03.716 12:03:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.716 12:03:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:03.716 12:03:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # 
sleep 1 00:35:03.716 [2024-11-18 12:03:29.324780] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:35:03.716 [2024-11-18 12:03:29.324908] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:35:03.716 [2024-11-18 12:03:29.324945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.716 [2024-11-18 12:03:29.324977] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:35:03.716 [2024-11-18 12:03:29.325000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.716 [2024-11-18 12:03:29.325024] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:35:03.716 [2024-11-18 12:03:29.325046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.716 [2024-11-18 12:03:29.325070] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:35:03.716 [2024-11-18 12:03:29.325093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.716 [2024-11-18 12:03:29.325117] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:35:03.716 [2024-11-18 12:03:29.325140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.716 [2024-11-18 12:03:29.325161] 
nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2780 is same with the state(6) to be set 00:35:03.716 [2024-11-18 12:03:29.334788] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2780 (9): Bad file descriptor 00:35:03.716 [2024-11-18 12:03:29.344853] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:35:03.716 [2024-11-18 12:03:29.344894] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:35:03.716 [2024-11-18 12:03:29.344914] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:35:03.716 [2024-11-18 12:03:29.344931] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:35:03.716 [2024-11-18 12:03:29.345010] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:35:04.649 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:04.649 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:04.649 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.649 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:04.649 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:04.649 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:04.649 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:04.649 [2024-11-18 12:03:30.352558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:35:04.649 [2024-11-18 12:03:30.352667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:04.649 [2024-11-18 12:03:30.352709] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2780 is same with the state(6) to be set 00:35:04.649 [2024-11-18 12:03:30.352787] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2780 (9): Bad file descriptor 00:35:04.649 [2024-11-18 12:03:30.353573] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 
00:35:04.649 [2024-11-18 12:03:30.353641] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:35:04.649 [2024-11-18 12:03:30.353674] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:35:04.649 [2024-11-18 12:03:30.353698] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:35:04.649 [2024-11-18 12:03:30.353721] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:35:04.649 [2024-11-18 12:03:30.353739] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:35:04.650 [2024-11-18 12:03:30.353753] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:35:04.650 [2024-11-18 12:03:30.353774] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:35:04.650 [2024-11-18 12:03:30.353805] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:35:04.650 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.650 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:04.650 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:05.583 [2024-11-18 12:03:31.356347] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:35:05.583 [2024-11-18 12:03:31.356413] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:35:05.583 [2024-11-18 12:03:31.356447] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:35:05.583 [2024-11-18 12:03:31.356480] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:35:05.583 [2024-11-18 12:03:31.356514] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:35:05.583 [2024-11-18 12:03:31.356553] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:35:05.583 [2024-11-18 12:03:31.356568] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:35:05.583 [2024-11-18 12:03:31.356580] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:35:05.583 [2024-11-18 12:03:31.356665] bdev_nvme.c:7135:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:35:05.583 [2024-11-18 12:03:31.356740] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:35:05.583 [2024-11-18 12:03:31.356772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:05.583 [2024-11-18 12:03:31.356814] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:35:05.583 [2024-11-18 12:03:31.356834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:05.583 [2024-11-18 12:03:31.356870] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:35:05.583 [2024-11-18 12:03:31.356893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:05.583 [2024-11-18 12:03:31.356917] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:35:05.583 [2024-11-18 12:03:31.356939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:05.583 [2024-11-18 12:03:31.356962] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:35:05.583 [2024-11-18 12:03:31.356984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:05.583 [2024-11-18 12:03:31.357006] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:35:05.583 [2024-11-18 12:03:31.357170] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:35:05.583 [2024-11-18 12:03:31.358160] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:35:05.583 [2024-11-18 12:03:31.358194] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:35:05.583 12:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:05.583 12:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:05.583 12:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:05.583 12:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:35:05.583 12:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:05.583 12:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:05.583 12:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:05.583 12:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.583 12:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:35:05.583 12:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:05.583 12:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:05.583 12:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:35:05.583 12:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:05.583 12:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:05.583 12:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:05.583 12:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.583 12:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:05.583 12:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:05.583 12:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:05.583 12:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:35:05.841 12:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:35:05.841 12:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:06.775 12:03:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:06.775 12:03:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:06.775 12:03:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:06.775 12:03:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.775 12:03:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:06.775 12:03:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:06.775 12:03:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:06.775 12:03:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.775 12:03:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:35:06.775 12:03:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:07.710 [2024-11-18 12:03:33.377133] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:35:07.710 [2024-11-18 12:03:33.377189] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:35:07.710 [2024-11-18 12:03:33.377247] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:35:07.710 [2024-11-18 12:03:33.504716] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:35:07.710 12:03:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:07.710 12:03:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:07.710 12:03:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.710 12:03:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:07.710 12:03:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:07.710 12:03:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:07.710 12:03:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:07.710 12:03:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.710 12:03:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:35:07.710 12:03:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:07.967 [2024-11-18 12:03:33.605902] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:35:07.967 [2024-11-18 12:03:33.607403] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x6150001f3900:1 started. 
00:35:07.968 [2024-11-18 12:03:33.609758] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:35:07.968 [2024-11-18 12:03:33.609848] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:35:07.968 [2024-11-18 12:03:33.609931] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:35:07.968 [2024-11-18 12:03:33.609971] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:35:07.968 [2024-11-18 12:03:33.609995] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:35:07.968 [2024-11-18 12:03:33.616480] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x6150001f3900 was disconnected and freed. delete nvme_qpair. 00:35:08.901 12:03:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:08.901 12:03:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:08.901 12:03:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:08.901 12:03:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.901 12:03:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:08.901 12:03:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:08.901 12:03:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:08.901 12:03:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.901 12:03:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:35:08.901 12:03:34 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:35:08.901 12:03:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 3111727 00:35:08.901 12:03:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 3111727 ']' 00:35:08.901 12:03:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 3111727 00:35:08.901 12:03:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:35:08.901 12:03:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:08.901 12:03:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3111727 00:35:08.901 12:03:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:08.901 12:03:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:08.901 12:03:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3111727' 00:35:08.901 killing process with pid 3111727 00:35:08.901 12:03:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 3111727 00:35:08.901 12:03:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 3111727 00:35:09.835 12:03:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:35:09.835 12:03:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:09.835 12:03:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:35:09.835 12:03:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:09.835 
12:03:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:35:09.835 12:03:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:09.835 12:03:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:09.835 rmmod nvme_tcp 00:35:09.835 rmmod nvme_fabrics 00:35:09.835 rmmod nvme_keyring 00:35:09.835 12:03:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:09.835 12:03:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:35:09.835 12:03:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:35:09.835 12:03:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 3111555 ']' 00:35:09.835 12:03:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 3111555 00:35:09.835 12:03:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 3111555 ']' 00:35:09.835 12:03:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 3111555 00:35:09.835 12:03:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:35:09.835 12:03:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:09.835 12:03:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3111555 00:35:09.835 12:03:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:09.835 12:03:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:09.835 12:03:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3111555' 00:35:09.835 
killing process with pid 3111555 00:35:09.835 12:03:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 3111555 00:35:09.835 12:03:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 3111555 00:35:11.209 12:03:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:11.209 12:03:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:11.209 12:03:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:11.209 12:03:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:35:11.209 12:03:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:35:11.209 12:03:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:11.209 12:03:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:35:11.209 12:03:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:11.209 12:03:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:11.209 12:03:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:11.209 12:03:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:11.209 12:03:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:13.133 12:03:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:13.133 00:35:13.133 real 0m21.118s 00:35:13.133 user 0m31.062s 00:35:13.133 sys 0m3.295s 00:35:13.133 12:03:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:35:13.133 12:03:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:13.133 ************************************ 00:35:13.133 END TEST nvmf_discovery_remove_ifc 00:35:13.133 ************************************ 00:35:13.133 12:03:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:35:13.133 12:03:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:13.133 12:03:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:13.133 12:03:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.133 ************************************ 00:35:13.134 START TEST nvmf_identify_kernel_target 00:35:13.134 ************************************ 00:35:13.134 12:03:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:35:13.134 * Looking for test storage... 
00:35:13.134 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:13.134 12:03:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:13.134 12:03:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:35:13.134 12:03:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:13.476 12:03:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:13.476 12:03:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:13.476 12:03:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:13.476 12:03:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:13.476 12:03:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:35:13.476 12:03:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:35:13.476 12:03:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:35:13.476 12:03:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:35:13.476 12:03:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:35:13.476 12:03:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:35:13.476 12:03:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:35:13.476 12:03:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:13.476 12:03:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:35:13.476 12:03:39 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:35:13.476 12:03:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:13.476 12:03:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:13.476 12:03:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:35:13.476 12:03:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:35:13.476 12:03:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:13.476 12:03:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:35:13.476 12:03:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:35:13.476 12:03:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:35:13.476 12:03:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:35:13.476 12:03:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:13.476 12:03:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:35:13.476 12:03:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:35:13.476 12:03:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:13.476 12:03:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:13.476 12:03:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:35:13.476 12:03:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:13.476 12:03:39 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:13.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:13.476 --rc genhtml_branch_coverage=1 00:35:13.476 --rc genhtml_function_coverage=1 00:35:13.476 --rc genhtml_legend=1 00:35:13.476 --rc geninfo_all_blocks=1 00:35:13.476 --rc geninfo_unexecuted_blocks=1 00:35:13.476 00:35:13.476 ' 00:35:13.476 12:03:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:13.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:13.476 --rc genhtml_branch_coverage=1 00:35:13.476 --rc genhtml_function_coverage=1 00:35:13.476 --rc genhtml_legend=1 00:35:13.476 --rc geninfo_all_blocks=1 00:35:13.476 --rc geninfo_unexecuted_blocks=1 00:35:13.476 00:35:13.476 ' 00:35:13.476 12:03:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:13.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:13.476 --rc genhtml_branch_coverage=1 00:35:13.476 --rc genhtml_function_coverage=1 00:35:13.476 --rc genhtml_legend=1 00:35:13.476 --rc geninfo_all_blocks=1 00:35:13.476 --rc geninfo_unexecuted_blocks=1 00:35:13.476 00:35:13.476 ' 00:35:13.476 12:03:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:13.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:13.476 --rc genhtml_branch_coverage=1 00:35:13.476 --rc genhtml_function_coverage=1 00:35:13.476 --rc genhtml_legend=1 00:35:13.476 --rc geninfo_all_blocks=1 00:35:13.476 --rc geninfo_unexecuted_blocks=1 00:35:13.476 00:35:13.476 ' 00:35:13.476 12:03:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:13.476 12:03:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 
00:35:13.476 12:03:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:13.476 12:03:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:13.476 12:03:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:13.476 12:03:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:13.476 12:03:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:13.476 12:03:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:13.476 12:03:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:13.476 12:03:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:13.476 12:03:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:13.476 12:03:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:13.476 12:03:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:13.476 12:03:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:13.476 12:03:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:13.476 12:03:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:13.476 12:03:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:13.476 12:03:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:13.476 12:03:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:13.476 12:03:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:35:13.476 12:03:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:13.476 12:03:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:13.476 12:03:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:13.476 12:03:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:13.476 12:03:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:13.476 12:03:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:13.476 12:03:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:35:13.476 12:03:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:13.476 12:03:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:35:13.476 12:03:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:13.476 12:03:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:13.476 12:03:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:13.476 12:03:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:13.476 12:03:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:13.476 12:03:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:13.476 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:13.476 12:03:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:13.476 12:03:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:13.476 12:03:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:13.476 12:03:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 
00:35:13.476 12:03:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:13.477 12:03:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:13.477 12:03:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:13.477 12:03:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:13.477 12:03:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:13.477 12:03:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:13.477 12:03:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:13.477 12:03:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:13.477 12:03:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:13.477 12:03:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:13.477 12:03:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:35:13.477 12:03:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:35:15.380 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:15.380 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:35:15.380 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:15.380 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:15.380 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:15.380 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:15.380 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:15.380 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:35:15.380 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:15.380 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:35:15.380 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:35:15.380 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:35:15.380 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:35:15.380 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:35:15.380 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:35:15.380 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:15.380 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:15.380 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:15.380 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:15.380 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:15.380 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:15.380 12:03:41 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:15.380 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:15.380 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:15.380 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:15.380 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:15.380 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:15.380 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:15.380 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:15.380 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:15.380 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:15.380 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:15.380 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:15.380 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:15.380 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:15.380 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:15.380 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:15.380 12:03:41 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:15.380 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:15.380 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:15.380 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:15.380 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:15.380 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:15.380 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:15.380 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:15.380 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:15.380 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:15.380 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:15.380 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:15.380 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:15.380 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:15.380 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:15.380 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:15.380 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:15.380 12:03:41 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:15.380 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:15.380 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:15.380 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:15.380 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:15.380 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:15.380 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:15.380 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:15.380 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:15.380 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:15.380 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:15.380 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:15.380 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:15.380 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:15.380 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:15.380 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:15.380 Found net devices under 0000:0a:00.1: cvl_0_1 
00:35:15.380 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:15.380 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:15.380 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:35:15.380 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:15.380 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:15.380 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:15.380 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:15.380 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:15.380 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:15.380 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:15.380 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:15.380 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:15.380 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:15.380 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:15.380 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:15.380 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:15.380 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:15.380 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:15.380 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:15.380 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:15.381 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:15.381 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:15.381 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:15.381 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:15.381 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:15.640 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:15.640 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:15.640 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:15.640 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:15.640 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:35:15.640 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:35:15.640 00:35:15.640 --- 10.0.0.2 ping statistics --- 00:35:15.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:15.640 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:35:15.640 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:15.640 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:15.640 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:35:15.640 00:35:15.640 --- 10.0.0.1 ping statistics --- 00:35:15.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:15.640 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:35:15.640 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:15.640 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:35:15.640 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:15.640 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:15.640 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:15.640 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:15.640 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:15.640 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:15.640 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:15.640 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:35:15.640 
12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:35:15.640 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:35:15.640 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:15.640 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:15.640 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:15.640 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:15.640 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:15.640 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:15.640 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:15.640 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:15.640 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:15.640 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:35:15.640 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:35:15.640 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:35:15.640 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:35:15.640 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:15.640 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:15.640 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:15.640 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:35:15.640 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:35:15.640 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:35:15.640 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:35:15.640 12:03:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:16.575 Waiting for block devices as requested 00:35:16.575 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:35:16.833 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:16.833 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:17.092 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:17.092 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:17.092 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:17.092 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:17.352 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:17.352 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:17.352 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:17.352 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:17.612 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:17.612 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:17.612 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:17.612 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 
00:35:17.873 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:17.873 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:17.873 12:03:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:35:17.873 12:03:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:35:18.132 12:03:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:35:18.132 12:03:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:35:18.132 12:03:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:35:18.132 12:03:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:35:18.132 12:03:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:35:18.132 12:03:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:35:18.132 12:03:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:35:18.132 No valid GPT data, bailing 00:35:18.132 12:03:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:35:18.132 12:03:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:35:18.132 12:03:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:35:18.132 12:03:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:35:18.132 12:03:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:35:18.132 12:03:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:18.132 12:03:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:18.132 12:03:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:35:18.132 12:03:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:35:18.132 12:03:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:35:18.132 12:03:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:35:18.132 12:03:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:35:18.132 12:03:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:35:18.132 12:03:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:35:18.132 12:03:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:35:18.132 12:03:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:35:18.132 12:03:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:35:18.132 12:03:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:35:18.132 00:35:18.132 Discovery Log Number of Records 2, Generation counter 2 00:35:18.132 =====Discovery Log Entry 0====== 00:35:18.132 trtype: tcp 00:35:18.132 adrfam: ipv4 00:35:18.132 subtype: current discovery subsystem 
00:35:18.132 treq: not specified, sq flow control disable supported 00:35:18.132 portid: 1 00:35:18.132 trsvcid: 4420 00:35:18.132 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:35:18.132 traddr: 10.0.0.1 00:35:18.132 eflags: none 00:35:18.132 sectype: none 00:35:18.132 =====Discovery Log Entry 1====== 00:35:18.132 trtype: tcp 00:35:18.132 adrfam: ipv4 00:35:18.132 subtype: nvme subsystem 00:35:18.132 treq: not specified, sq flow control disable supported 00:35:18.132 portid: 1 00:35:18.132 trsvcid: 4420 00:35:18.132 subnqn: nqn.2016-06.io.spdk:testnqn 00:35:18.132 traddr: 10.0.0.1 00:35:18.132 eflags: none 00:35:18.132 sectype: none 00:35:18.132 12:03:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:35:18.132 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:35:18.391 ===================================================== 00:35:18.391 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:35:18.391 ===================================================== 00:35:18.391 Controller Capabilities/Features 00:35:18.391 ================================ 00:35:18.391 Vendor ID: 0000 00:35:18.391 Subsystem Vendor ID: 0000 00:35:18.391 Serial Number: 4397d694521d1ce30c70 00:35:18.391 Model Number: Linux 00:35:18.391 Firmware Version: 6.8.9-20 00:35:18.391 Recommended Arb Burst: 0 00:35:18.391 IEEE OUI Identifier: 00 00 00 00:35:18.391 Multi-path I/O 00:35:18.391 May have multiple subsystem ports: No 00:35:18.391 May have multiple controllers: No 00:35:18.391 Associated with SR-IOV VF: No 00:35:18.391 Max Data Transfer Size: Unlimited 00:35:18.391 Max Number of Namespaces: 0 00:35:18.391 Max Number of I/O Queues: 1024 00:35:18.391 NVMe Specification Version (VS): 1.3 00:35:18.391 NVMe Specification Version (Identify): 1.3 00:35:18.391 Maximum Queue Entries: 1024 
00:35:18.391 Contiguous Queues Required: No 00:35:18.391 Arbitration Mechanisms Supported 00:35:18.391 Weighted Round Robin: Not Supported 00:35:18.391 Vendor Specific: Not Supported 00:35:18.391 Reset Timeout: 7500 ms 00:35:18.391 Doorbell Stride: 4 bytes 00:35:18.391 NVM Subsystem Reset: Not Supported 00:35:18.391 Command Sets Supported 00:35:18.391 NVM Command Set: Supported 00:35:18.391 Boot Partition: Not Supported 00:35:18.391 Memory Page Size Minimum: 4096 bytes 00:35:18.391 Memory Page Size Maximum: 4096 bytes 00:35:18.391 Persistent Memory Region: Not Supported 00:35:18.391 Optional Asynchronous Events Supported 00:35:18.391 Namespace Attribute Notices: Not Supported 00:35:18.391 Firmware Activation Notices: Not Supported 00:35:18.391 ANA Change Notices: Not Supported 00:35:18.391 PLE Aggregate Log Change Notices: Not Supported 00:35:18.391 LBA Status Info Alert Notices: Not Supported 00:35:18.391 EGE Aggregate Log Change Notices: Not Supported 00:35:18.391 Normal NVM Subsystem Shutdown event: Not Supported 00:35:18.391 Zone Descriptor Change Notices: Not Supported 00:35:18.391 Discovery Log Change Notices: Supported 00:35:18.391 Controller Attributes 00:35:18.391 128-bit Host Identifier: Not Supported 00:35:18.391 Non-Operational Permissive Mode: Not Supported 00:35:18.391 NVM Sets: Not Supported 00:35:18.391 Read Recovery Levels: Not Supported 00:35:18.391 Endurance Groups: Not Supported 00:35:18.391 Predictable Latency Mode: Not Supported 00:35:18.391 Traffic Based Keep ALive: Not Supported 00:35:18.391 Namespace Granularity: Not Supported 00:35:18.391 SQ Associations: Not Supported 00:35:18.391 UUID List: Not Supported 00:35:18.391 Multi-Domain Subsystem: Not Supported 00:35:18.391 Fixed Capacity Management: Not Supported 00:35:18.391 Variable Capacity Management: Not Supported 00:35:18.391 Delete Endurance Group: Not Supported 00:35:18.391 Delete NVM Set: Not Supported 00:35:18.391 Extended LBA Formats Supported: Not Supported 00:35:18.391 Flexible 
Data Placement Supported: Not Supported 00:35:18.391 00:35:18.391 Controller Memory Buffer Support 00:35:18.391 ================================ 00:35:18.391 Supported: No 00:35:18.391 00:35:18.391 Persistent Memory Region Support 00:35:18.392 ================================ 00:35:18.392 Supported: No 00:35:18.392 00:35:18.392 Admin Command Set Attributes 00:35:18.392 ============================ 00:35:18.392 Security Send/Receive: Not Supported 00:35:18.392 Format NVM: Not Supported 00:35:18.392 Firmware Activate/Download: Not Supported 00:35:18.392 Namespace Management: Not Supported 00:35:18.392 Device Self-Test: Not Supported 00:35:18.392 Directives: Not Supported 00:35:18.392 NVMe-MI: Not Supported 00:35:18.392 Virtualization Management: Not Supported 00:35:18.392 Doorbell Buffer Config: Not Supported 00:35:18.392 Get LBA Status Capability: Not Supported 00:35:18.392 Command & Feature Lockdown Capability: Not Supported 00:35:18.392 Abort Command Limit: 1 00:35:18.392 Async Event Request Limit: 1 00:35:18.392 Number of Firmware Slots: N/A 00:35:18.392 Firmware Slot 1 Read-Only: N/A 00:35:18.392 Firmware Activation Without Reset: N/A 00:35:18.392 Multiple Update Detection Support: N/A 00:35:18.392 Firmware Update Granularity: No Information Provided 00:35:18.392 Per-Namespace SMART Log: No 00:35:18.392 Asymmetric Namespace Access Log Page: Not Supported 00:35:18.392 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:35:18.392 Command Effects Log Page: Not Supported 00:35:18.392 Get Log Page Extended Data: Supported 00:35:18.392 Telemetry Log Pages: Not Supported 00:35:18.392 Persistent Event Log Pages: Not Supported 00:35:18.392 Supported Log Pages Log Page: May Support 00:35:18.392 Commands Supported & Effects Log Page: Not Supported 00:35:18.392 Feature Identifiers & Effects Log Page:May Support 00:35:18.392 NVMe-MI Commands & Effects Log Page: May Support 00:35:18.392 Data Area 4 for Telemetry Log: Not Supported 00:35:18.392 Error Log Page Entries 
Supported: 1 00:35:18.392 Keep Alive: Not Supported 00:35:18.392 00:35:18.392 NVM Command Set Attributes 00:35:18.392 ========================== 00:35:18.392 Submission Queue Entry Size 00:35:18.392 Max: 1 00:35:18.392 Min: 1 00:35:18.392 Completion Queue Entry Size 00:35:18.392 Max: 1 00:35:18.392 Min: 1 00:35:18.392 Number of Namespaces: 0 00:35:18.392 Compare Command: Not Supported 00:35:18.392 Write Uncorrectable Command: Not Supported 00:35:18.392 Dataset Management Command: Not Supported 00:35:18.392 Write Zeroes Command: Not Supported 00:35:18.392 Set Features Save Field: Not Supported 00:35:18.392 Reservations: Not Supported 00:35:18.392 Timestamp: Not Supported 00:35:18.392 Copy: Not Supported 00:35:18.392 Volatile Write Cache: Not Present 00:35:18.392 Atomic Write Unit (Normal): 1 00:35:18.392 Atomic Write Unit (PFail): 1 00:35:18.392 Atomic Compare & Write Unit: 1 00:35:18.392 Fused Compare & Write: Not Supported 00:35:18.392 Scatter-Gather List 00:35:18.392 SGL Command Set: Supported 00:35:18.392 SGL Keyed: Not Supported 00:35:18.392 SGL Bit Bucket Descriptor: Not Supported 00:35:18.392 SGL Metadata Pointer: Not Supported 00:35:18.392 Oversized SGL: Not Supported 00:35:18.392 SGL Metadata Address: Not Supported 00:35:18.392 SGL Offset: Supported 00:35:18.392 Transport SGL Data Block: Not Supported 00:35:18.392 Replay Protected Memory Block: Not Supported 00:35:18.392 00:35:18.392 Firmware Slot Information 00:35:18.392 ========================= 00:35:18.392 Active slot: 0 00:35:18.392 00:35:18.392 00:35:18.392 Error Log 00:35:18.392 ========= 00:35:18.392 00:35:18.392 Active Namespaces 00:35:18.392 ================= 00:35:18.392 Discovery Log Page 00:35:18.392 ================== 00:35:18.392 Generation Counter: 2 00:35:18.392 Number of Records: 2 00:35:18.392 Record Format: 0 00:35:18.392 00:35:18.392 Discovery Log Entry 0 00:35:18.392 ---------------------- 00:35:18.392 Transport Type: 3 (TCP) 00:35:18.392 Address Family: 1 (IPv4) 00:35:18.392 Subsystem 
Type: 3 (Current Discovery Subsystem) 00:35:18.392 Entry Flags: 00:35:18.392 Duplicate Returned Information: 0 00:35:18.392 Explicit Persistent Connection Support for Discovery: 0 00:35:18.392 Transport Requirements: 00:35:18.392 Secure Channel: Not Specified 00:35:18.392 Port ID: 1 (0x0001) 00:35:18.392 Controller ID: 65535 (0xffff) 00:35:18.392 Admin Max SQ Size: 32 00:35:18.392 Transport Service Identifier: 4420 00:35:18.392 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:35:18.392 Transport Address: 10.0.0.1 00:35:18.392 Discovery Log Entry 1 00:35:18.392 ---------------------- 00:35:18.392 Transport Type: 3 (TCP) 00:35:18.392 Address Family: 1 (IPv4) 00:35:18.392 Subsystem Type: 2 (NVM Subsystem) 00:35:18.392 Entry Flags: 00:35:18.392 Duplicate Returned Information: 0 00:35:18.392 Explicit Persistent Connection Support for Discovery: 0 00:35:18.392 Transport Requirements: 00:35:18.392 Secure Channel: Not Specified 00:35:18.392 Port ID: 1 (0x0001) 00:35:18.392 Controller ID: 65535 (0xffff) 00:35:18.392 Admin Max SQ Size: 32 00:35:18.392 Transport Service Identifier: 4420 00:35:18.392 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:35:18.392 Transport Address: 10.0.0.1 00:35:18.392 12:03:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:18.651 get_feature(0x01) failed 00:35:18.651 get_feature(0x02) failed 00:35:18.651 get_feature(0x04) failed 00:35:18.651 ===================================================== 00:35:18.651 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:18.651 ===================================================== 00:35:18.651 Controller Capabilities/Features 00:35:18.651 ================================ 00:35:18.651 Vendor ID: 0000 00:35:18.651 Subsystem Vendor ID: 
0000 00:35:18.651 Serial Number: 688235674fc5f24fb8a4 00:35:18.651 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:35:18.651 Firmware Version: 6.8.9-20 00:35:18.651 Recommended Arb Burst: 6 00:35:18.651 IEEE OUI Identifier: 00 00 00 00:35:18.651 Multi-path I/O 00:35:18.651 May have multiple subsystem ports: Yes 00:35:18.651 May have multiple controllers: Yes 00:35:18.651 Associated with SR-IOV VF: No 00:35:18.651 Max Data Transfer Size: Unlimited 00:35:18.651 Max Number of Namespaces: 1024 00:35:18.651 Max Number of I/O Queues: 128 00:35:18.652 NVMe Specification Version (VS): 1.3 00:35:18.652 NVMe Specification Version (Identify): 1.3 00:35:18.652 Maximum Queue Entries: 1024 00:35:18.652 Contiguous Queues Required: No 00:35:18.652 Arbitration Mechanisms Supported 00:35:18.652 Weighted Round Robin: Not Supported 00:35:18.652 Vendor Specific: Not Supported 00:35:18.652 Reset Timeout: 7500 ms 00:35:18.652 Doorbell Stride: 4 bytes 00:35:18.652 NVM Subsystem Reset: Not Supported 00:35:18.652 Command Sets Supported 00:35:18.652 NVM Command Set: Supported 00:35:18.652 Boot Partition: Not Supported 00:35:18.652 Memory Page Size Minimum: 4096 bytes 00:35:18.652 Memory Page Size Maximum: 4096 bytes 00:35:18.652 Persistent Memory Region: Not Supported 00:35:18.652 Optional Asynchronous Events Supported 00:35:18.652 Namespace Attribute Notices: Supported 00:35:18.652 Firmware Activation Notices: Not Supported 00:35:18.652 ANA Change Notices: Supported 00:35:18.652 PLE Aggregate Log Change Notices: Not Supported 00:35:18.652 LBA Status Info Alert Notices: Not Supported 00:35:18.652 EGE Aggregate Log Change Notices: Not Supported 00:35:18.652 Normal NVM Subsystem Shutdown event: Not Supported 00:35:18.652 Zone Descriptor Change Notices: Not Supported 00:35:18.652 Discovery Log Change Notices: Not Supported 00:35:18.652 Controller Attributes 00:35:18.652 128-bit Host Identifier: Supported 00:35:18.652 Non-Operational Permissive Mode: Not Supported 00:35:18.652 NVM Sets: Not 
Supported 00:35:18.652 Read Recovery Levels: Not Supported 00:35:18.652 Endurance Groups: Not Supported 00:35:18.652 Predictable Latency Mode: Not Supported 00:35:18.652 Traffic Based Keep ALive: Supported 00:35:18.652 Namespace Granularity: Not Supported 00:35:18.652 SQ Associations: Not Supported 00:35:18.652 UUID List: Not Supported 00:35:18.652 Multi-Domain Subsystem: Not Supported 00:35:18.652 Fixed Capacity Management: Not Supported 00:35:18.652 Variable Capacity Management: Not Supported 00:35:18.652 Delete Endurance Group: Not Supported 00:35:18.652 Delete NVM Set: Not Supported 00:35:18.652 Extended LBA Formats Supported: Not Supported 00:35:18.652 Flexible Data Placement Supported: Not Supported 00:35:18.652 00:35:18.652 Controller Memory Buffer Support 00:35:18.652 ================================ 00:35:18.652 Supported: No 00:35:18.652 00:35:18.652 Persistent Memory Region Support 00:35:18.652 ================================ 00:35:18.652 Supported: No 00:35:18.652 00:35:18.652 Admin Command Set Attributes 00:35:18.652 ============================ 00:35:18.652 Security Send/Receive: Not Supported 00:35:18.652 Format NVM: Not Supported 00:35:18.652 Firmware Activate/Download: Not Supported 00:35:18.652 Namespace Management: Not Supported 00:35:18.652 Device Self-Test: Not Supported 00:35:18.652 Directives: Not Supported 00:35:18.652 NVMe-MI: Not Supported 00:35:18.652 Virtualization Management: Not Supported 00:35:18.652 Doorbell Buffer Config: Not Supported 00:35:18.652 Get LBA Status Capability: Not Supported 00:35:18.652 Command & Feature Lockdown Capability: Not Supported 00:35:18.652 Abort Command Limit: 4 00:35:18.652 Async Event Request Limit: 4 00:35:18.652 Number of Firmware Slots: N/A 00:35:18.652 Firmware Slot 1 Read-Only: N/A 00:35:18.652 Firmware Activation Without Reset: N/A 00:35:18.652 Multiple Update Detection Support: N/A 00:35:18.652 Firmware Update Granularity: No Information Provided 00:35:18.652 Per-Namespace SMART Log: Yes 
00:35:18.652 Asymmetric Namespace Access Log Page: Supported 00:35:18.652 ANA Transition Time : 10 sec 00:35:18.652 00:35:18.652 Asymmetric Namespace Access Capabilities 00:35:18.652 ANA Optimized State : Supported 00:35:18.652 ANA Non-Optimized State : Supported 00:35:18.652 ANA Inaccessible State : Supported 00:35:18.652 ANA Persistent Loss State : Supported 00:35:18.652 ANA Change State : Supported 00:35:18.652 ANAGRPID is not changed : No 00:35:18.652 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:35:18.652 00:35:18.652 ANA Group Identifier Maximum : 128 00:35:18.652 Number of ANA Group Identifiers : 128 00:35:18.652 Max Number of Allowed Namespaces : 1024 00:35:18.652 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:35:18.652 Command Effects Log Page: Supported 00:35:18.652 Get Log Page Extended Data: Supported 00:35:18.652 Telemetry Log Pages: Not Supported 00:35:18.652 Persistent Event Log Pages: Not Supported 00:35:18.652 Supported Log Pages Log Page: May Support 00:35:18.652 Commands Supported & Effects Log Page: Not Supported 00:35:18.652 Feature Identifiers & Effects Log Page:May Support 00:35:18.652 NVMe-MI Commands & Effects Log Page: May Support 00:35:18.652 Data Area 4 for Telemetry Log: Not Supported 00:35:18.652 Error Log Page Entries Supported: 128 00:35:18.652 Keep Alive: Supported 00:35:18.652 Keep Alive Granularity: 1000 ms 00:35:18.652 00:35:18.652 NVM Command Set Attributes 00:35:18.652 ========================== 00:35:18.652 Submission Queue Entry Size 00:35:18.652 Max: 64 00:35:18.652 Min: 64 00:35:18.652 Completion Queue Entry Size 00:35:18.652 Max: 16 00:35:18.652 Min: 16 00:35:18.652 Number of Namespaces: 1024 00:35:18.652 Compare Command: Not Supported 00:35:18.652 Write Uncorrectable Command: Not Supported 00:35:18.652 Dataset Management Command: Supported 00:35:18.652 Write Zeroes Command: Supported 00:35:18.652 Set Features Save Field: Not Supported 00:35:18.652 Reservations: Not Supported 00:35:18.652 Timestamp: Not Supported 
00:35:18.652 Copy: Not Supported 00:35:18.652 Volatile Write Cache: Present 00:35:18.652 Atomic Write Unit (Normal): 1 00:35:18.652 Atomic Write Unit (PFail): 1 00:35:18.652 Atomic Compare & Write Unit: 1 00:35:18.652 Fused Compare & Write: Not Supported 00:35:18.652 Scatter-Gather List 00:35:18.652 SGL Command Set: Supported 00:35:18.652 SGL Keyed: Not Supported 00:35:18.652 SGL Bit Bucket Descriptor: Not Supported 00:35:18.652 SGL Metadata Pointer: Not Supported 00:35:18.652 Oversized SGL: Not Supported 00:35:18.652 SGL Metadata Address: Not Supported 00:35:18.652 SGL Offset: Supported 00:35:18.652 Transport SGL Data Block: Not Supported 00:35:18.652 Replay Protected Memory Block: Not Supported 00:35:18.652 00:35:18.652 Firmware Slot Information 00:35:18.652 ========================= 00:35:18.652 Active slot: 0 00:35:18.652 00:35:18.652 Asymmetric Namespace Access 00:35:18.652 =========================== 00:35:18.652 Change Count : 0 00:35:18.652 Number of ANA Group Descriptors : 1 00:35:18.652 ANA Group Descriptor : 0 00:35:18.652 ANA Group ID : 1 00:35:18.652 Number of NSID Values : 1 00:35:18.652 Change Count : 0 00:35:18.652 ANA State : 1 00:35:18.652 Namespace Identifier : 1 00:35:18.652 00:35:18.652 Commands Supported and Effects 00:35:18.652 ============================== 00:35:18.652 Admin Commands 00:35:18.652 -------------- 00:35:18.652 Get Log Page (02h): Supported 00:35:18.652 Identify (06h): Supported 00:35:18.652 Abort (08h): Supported 00:35:18.652 Set Features (09h): Supported 00:35:18.652 Get Features (0Ah): Supported 00:35:18.653 Asynchronous Event Request (0Ch): Supported 00:35:18.653 Keep Alive (18h): Supported 00:35:18.653 I/O Commands 00:35:18.653 ------------ 00:35:18.653 Flush (00h): Supported 00:35:18.653 Write (01h): Supported LBA-Change 00:35:18.653 Read (02h): Supported 00:35:18.653 Write Zeroes (08h): Supported LBA-Change 00:35:18.653 Dataset Management (09h): Supported 00:35:18.653 00:35:18.653 Error Log 00:35:18.653 ========= 
00:35:18.653 Entry: 0 00:35:18.653 Error Count: 0x3 00:35:18.653 Submission Queue Id: 0x0 00:35:18.653 Command Id: 0x5 00:35:18.653 Phase Bit: 0 00:35:18.653 Status Code: 0x2 00:35:18.653 Status Code Type: 0x0 00:35:18.653 Do Not Retry: 1 00:35:18.653 Error Location: 0x28 00:35:18.653 LBA: 0x0 00:35:18.653 Namespace: 0x0 00:35:18.653 Vendor Log Page: 0x0 00:35:18.653 ----------- 00:35:18.653 Entry: 1 00:35:18.653 Error Count: 0x2 00:35:18.653 Submission Queue Id: 0x0 00:35:18.653 Command Id: 0x5 00:35:18.653 Phase Bit: 0 00:35:18.653 Status Code: 0x2 00:35:18.653 Status Code Type: 0x0 00:35:18.653 Do Not Retry: 1 00:35:18.653 Error Location: 0x28 00:35:18.653 LBA: 0x0 00:35:18.653 Namespace: 0x0 00:35:18.653 Vendor Log Page: 0x0 00:35:18.653 ----------- 00:35:18.653 Entry: 2 00:35:18.653 Error Count: 0x1 00:35:18.653 Submission Queue Id: 0x0 00:35:18.653 Command Id: 0x4 00:35:18.653 Phase Bit: 0 00:35:18.653 Status Code: 0x2 00:35:18.653 Status Code Type: 0x0 00:35:18.653 Do Not Retry: 1 00:35:18.653 Error Location: 0x28 00:35:18.653 LBA: 0x0 00:35:18.653 Namespace: 0x0 00:35:18.653 Vendor Log Page: 0x0 00:35:18.653 00:35:18.653 Number of Queues 00:35:18.653 ================ 00:35:18.653 Number of I/O Submission Queues: 128 00:35:18.653 Number of I/O Completion Queues: 128 00:35:18.653 00:35:18.653 ZNS Specific Controller Data 00:35:18.653 ============================ 00:35:18.653 Zone Append Size Limit: 0 00:35:18.653 00:35:18.653 00:35:18.653 Active Namespaces 00:35:18.653 ================= 00:35:18.653 get_feature(0x05) failed 00:35:18.653 Namespace ID:1 00:35:18.653 Command Set Identifier: NVM (00h) 00:35:18.653 Deallocate: Supported 00:35:18.653 Deallocated/Unwritten Error: Not Supported 00:35:18.653 Deallocated Read Value: Unknown 00:35:18.653 Deallocate in Write Zeroes: Not Supported 00:35:18.653 Deallocated Guard Field: 0xFFFF 00:35:18.653 Flush: Supported 00:35:18.653 Reservation: Not Supported 00:35:18.653 Namespace Sharing Capabilities: Multiple 
Controllers 00:35:18.653 Size (in LBAs): 1953525168 (931GiB) 00:35:18.653 Capacity (in LBAs): 1953525168 (931GiB) 00:35:18.653 Utilization (in LBAs): 1953525168 (931GiB) 00:35:18.653 UUID: d4ca88de-6b91-47f7-9585-1db50fb0c229 00:35:18.653 Thin Provisioning: Not Supported 00:35:18.653 Per-NS Atomic Units: Yes 00:35:18.653 Atomic Boundary Size (Normal): 0 00:35:18.653 Atomic Boundary Size (PFail): 0 00:35:18.653 Atomic Boundary Offset: 0 00:35:18.653 NGUID/EUI64 Never Reused: No 00:35:18.653 ANA group ID: 1 00:35:18.653 Namespace Write Protected: No 00:35:18.653 Number of LBA Formats: 1 00:35:18.653 Current LBA Format: LBA Format #00 00:35:18.653 LBA Format #00: Data Size: 512 Metadata Size: 0 00:35:18.653 00:35:18.653 12:03:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:35:18.653 12:03:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:18.653 12:03:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:35:18.653 12:03:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:18.653 12:03:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:35:18.653 12:03:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:18.653 12:03:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:18.653 rmmod nvme_tcp 00:35:18.653 rmmod nvme_fabrics 00:35:18.653 12:03:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:18.653 12:03:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:35:18.653 12:03:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:35:18.653 12:03:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 
00:35:18.653 12:03:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:18.653 12:03:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:18.653 12:03:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:18.653 12:03:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:35:18.653 12:03:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:35:18.653 12:03:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:18.653 12:03:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:35:18.653 12:03:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:18.653 12:03:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:18.653 12:03:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:18.653 12:03:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:18.653 12:03:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:20.558 12:03:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:20.558 12:03:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:35:20.558 12:03:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:35:20.558 12:03:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:35:20.558 12:03:46 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:20.558 12:03:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:20.558 12:03:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:35:20.559 12:03:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:20.559 12:03:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:35:20.559 12:03:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:35:20.818 12:03:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:21.755 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:21.755 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:21.755 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:21.755 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:21.755 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:21.755 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:35:21.755 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:35:21.755 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:35:21.755 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:21.755 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:21.755 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:21.755 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:21.755 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:21.755 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:35:22.015 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:35:22.015 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 
00:35:22.952 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:35:22.952 00:35:22.952 real 0m9.762s 00:35:22.952 user 0m2.144s 00:35:22.952 sys 0m3.605s 00:35:22.952 12:03:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:22.952 12:03:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:35:22.952 ************************************ 00:35:22.952 END TEST nvmf_identify_kernel_target 00:35:22.952 ************************************ 00:35:22.952 12:03:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:35:22.952 12:03:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:22.952 12:03:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:22.952 12:03:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.952 ************************************ 00:35:22.952 START TEST nvmf_auth_host 00:35:22.952 ************************************ 00:35:22.952 12:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:35:22.952 * Looking for test storage... 
00:35:22.952 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:22.952 12:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:22.952 12:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:35:22.952 12:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:23.211 12:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:23.211 12:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:23.211 12:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:23.211 12:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:23.211 12:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:35:23.211 12:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:35:23.211 12:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:35:23.211 12:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:35:23.211 12:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:35:23.211 12:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:35:23.211 12:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:35:23.211 12:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:23.211 12:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:35:23.211 12:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:35:23.211 12:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:23.211 12:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:23.211 12:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:35:23.211 12:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:35:23.211 12:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:23.211 12:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:35:23.211 12:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:35:23.211 12:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:35:23.211 12:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:35:23.211 12:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:23.211 12:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:35:23.211 12:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:35:23.211 12:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:23.211 12:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:23.211 12:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:35:23.211 12:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:23.211 12:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:23.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:23.211 --rc genhtml_branch_coverage=1 00:35:23.211 --rc genhtml_function_coverage=1 00:35:23.211 --rc genhtml_legend=1 00:35:23.211 --rc geninfo_all_blocks=1 00:35:23.211 --rc geninfo_unexecuted_blocks=1 00:35:23.211 00:35:23.211 ' 00:35:23.211 12:03:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:23.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:23.211 --rc genhtml_branch_coverage=1 00:35:23.211 --rc genhtml_function_coverage=1 00:35:23.211 --rc genhtml_legend=1 00:35:23.211 --rc geninfo_all_blocks=1 00:35:23.211 --rc geninfo_unexecuted_blocks=1 00:35:23.211 00:35:23.211 ' 00:35:23.211 12:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:23.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:23.211 --rc genhtml_branch_coverage=1 00:35:23.211 --rc genhtml_function_coverage=1 00:35:23.211 --rc genhtml_legend=1 00:35:23.211 --rc geninfo_all_blocks=1 00:35:23.211 --rc geninfo_unexecuted_blocks=1 00:35:23.211 00:35:23.211 ' 00:35:23.211 12:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:23.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:23.211 --rc genhtml_branch_coverage=1 00:35:23.211 --rc genhtml_function_coverage=1 00:35:23.211 --rc genhtml_legend=1 00:35:23.211 --rc geninfo_all_blocks=1 00:35:23.211 --rc geninfo_unexecuted_blocks=1 00:35:23.211 00:35:23.211 ' 00:35:23.211 12:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:23.211 12:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:35:23.211 12:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:23.211 12:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:23.211 12:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:23.211 12:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:23.211 12:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
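The `lcov --version` probe above runs through `scripts/common.sh`'s `lt`/`cmp_versions` helpers: each version string is split on `.` and `-` (the `IFS=.-` / `read -ra` pairs in the trace) and compared numerically, component by component. A simplified stand-in for that helper (not the actual `scripts/common.sh` implementation, which also handles `gt`/`eq` and non-numeric components):

```shell
#!/usr/bin/env bash
# Simplified sketch of the version comparison traced above: split both
# versions on dots/dashes, then compare component by component, treating
# missing components as 0 (so "2" vs "1.15" compares as 2.0 vs 1.15).
lt() {
    local -a ver1 ver2
    IFS=.- read -ra ver1 <<< "$1"
    IFS=.- read -ra ver2 <<< "$2"
    local v len=${#ver1[@]}
    (( ${#ver2[@]} > len )) && len=${#ver2[@]}
    for (( v = 0; v < len; v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1   # equal is not less-than
}

lt 1.15 2 && echo "1.15 < 2"        # the exact check from the trace
lt 2.0 1.15 || echo "2.0 >= 1.15"
```

This is why the trace shows `lt 1.15 2` succeeding and the script falling through to the coverage-enabled `LCOV_OPTS` branch.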
00:35:23.211 12:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:23.211 12:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:23.211 12:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:23.211 12:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:23.211 12:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:23.211 12:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:23.211 12:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:23.211 12:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:23.211 12:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:23.211 12:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:23.211 12:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:23.211 12:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:23.211 12:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:35:23.211 12:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:23.211 12:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:23.211 12:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:23.211 12:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:23.211 12:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:23.212 12:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:23.212 12:03:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:35:23.212 12:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:23.212 12:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:35:23.212 12:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:23.212 12:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:23.212 12:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:23.212 12:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:23.212 12:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:23.212 12:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:23.212 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:23.212 12:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:23.212 12:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:23.212 12:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:23.212 12:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # 
digests=("sha256" "sha384" "sha512") 00:35:23.212 12:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:35:23.212 12:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:35:23.212 12:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:35:23.212 12:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:23.212 12:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:35:23.212 12:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:35:23.212 12:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:35:23.212 12:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:35:23.212 12:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:23.212 12:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:23.212 12:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:23.212 12:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:23.212 12:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:23.212 12:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:23.212 12:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:23.212 12:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:23.212 12:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:23.212 12:03:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:23.212 12:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:35:23.212 12:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.122 12:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:25.122 12:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:35:25.122 12:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:25.122 12:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:25.122 12:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:25.122 12:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:25.122 12:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:25.122 12:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:35:25.122 12:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:25.122 12:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:35:25.122 12:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:35:25.122 12:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:35:25.122 12:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:35:25.122 12:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:35:25.122 12:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:35:25.122 12:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:25.122 12:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:25.122 12:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:25.122 12:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:25.122 12:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:25.122 12:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:25.122 12:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:25.122 12:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:25.122 12:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:25.122 12:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:25.122 12:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:25.122 12:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:25.122 12:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:25.122 12:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:25.122 12:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:25.122 12:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:25.122 12:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:25.122 12:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:25.122 12:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:25.122 12:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:25.122 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:25.122 12:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:25.122 12:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:25.122 12:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:25.122 12:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:25.122 12:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:25.122 12:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:25.122 12:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:25.122 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:25.122 12:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:25.122 12:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:25.122 12:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:25.122 12:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:25.123 12:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:25.123 12:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:25.123 12:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:25.123 12:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:25.123 12:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
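The second loop over `pci_devs` (the `@410`/`@411` lines) finds each port's network interface by globbing the kernel's sysfs layout: every netdev bound to a PCI function appears under `/sys/bus/pci/devices/<addr>/net/<ifname>`. A minimal sketch of that lookup, using a hypothetical `find_net_devs` helper name and a throwaway fake sysfs tree so it runs without hardware:

```shell
#!/usr/bin/env bash
# Sketch of the netdev discovery traced above. The real script globs the
# live /sys tree; here a second argument lets us point at a fake one.
find_net_devs() {
    local pci=$1 sysfs=${2:-/sys}
    local pci_net_devs=("$sysfs/bus/pci/devices/$pci/net/"*)
    [[ -e ${pci_net_devs[0]} ]] || return 1     # no netdev bound to this function
    pci_net_devs=("${pci_net_devs[@]##*/}")     # strip path, keep interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
}

# Demo against a fabricated sysfs tree mimicking the E810 port in this log:
fake=$(mktemp -d)
mkdir -p "$fake/bus/pci/devices/0000:0a:00.0/net/cvl_0_0"
find_net_devs 0000:0a:00.0 "$fake"
# prints: Found net devices under 0000:0a:00.0: cvl_0_0
```

The `##*/` expansion is the same trick the trace shows at `@427`: it turns sysfs paths into bare interface names before they are appended to `net_devs`.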
00:35:25.123 12:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:25.123 12:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:25.123 12:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:25.123 12:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:25.123 12:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:25.123 12:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:25.123 12:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:25.123 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:25.123 12:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:25.123 12:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:25.123 12:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:25.123 12:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:25.123 12:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:25.123 12:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:25.123 12:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:25.123 12:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:25.123 12:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:25.123 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:25.123 12:03:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:25.123 12:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:25.123 12:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:35:25.123 12:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:25.123 12:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:25.123 12:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:25.123 12:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:25.123 12:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:25.123 12:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:25.123 12:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:25.123 12:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:25.123 12:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:25.123 12:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:25.123 12:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:25.123 12:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:25.123 12:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:25.123 12:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:25.123 12:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:25.123 12:03:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:25.123 12:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:25.123 12:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:25.123 12:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:25.123 12:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:25.123 12:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:25.123 12:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:25.123 12:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:25.123 12:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:25.123 12:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:25.123 12:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:25.381 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:25.381 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.276 ms 00:35:25.381 00:35:25.381 --- 10.0.0.2 ping statistics --- 00:35:25.381 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:25.381 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:35:25.381 12:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:25.381 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:25.381 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.080 ms 00:35:25.381 00:35:25.381 --- 10.0.0.1 ping statistics --- 00:35:25.381 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:25.381 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:35:25.381 12:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:25.381 12:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:35:25.381 12:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:25.381 12:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:25.381 12:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:25.381 12:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:25.381 12:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:25.381 12:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:25.381 12:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:25.381 12:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:35:25.381 12:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:25.381 12:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:25.381 12:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.381 12:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=3119147 00:35:25.381 12:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:35:25.381 12:03:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 3119147 00:35:25.381 12:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 3119147 ']' 00:35:25.381 12:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:25.381 12:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:25.381 12:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:25.381 12:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:25.381 12:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.316 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:26.316 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:35:26.316 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:26.316 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:26.316 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.316 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:26.316 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:35:26.316 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:35:26.316 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:35:26.316 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:26.316 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:35:26.316 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:35:26.316 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:35:26.316 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:26.316 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=0a62b30281e2b970729608f088838726 00:35:26.316 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:35:26.316 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.lhX 00:35:26.316 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 0a62b30281e2b970729608f088838726 0 00:35:26.316 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 0a62b30281e2b970729608f088838726 0 00:35:26.316 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:35:26.316 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:35:26.316 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=0a62b30281e2b970729608f088838726 00:35:26.316 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:35:26.316 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:35:26.575 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.lhX 00:35:26.575 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.lhX 00:35:26.575 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.lhX 00:35:26.575 12:03:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:35:26.575 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:35:26.575 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:26.575 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:35:26.575 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:35:26.575 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:35:26.575 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:35:26.575 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=dfee6f2b8aee4c4df382d16748008c849eeb3ccdfd5e33613f8a8b3ec6eb7145 00:35:26.575 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:35:26.575 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.HA3 00:35:26.575 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key dfee6f2b8aee4c4df382d16748008c849eeb3ccdfd5e33613f8a8b3ec6eb7145 3 00:35:26.575 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 dfee6f2b8aee4c4df382d16748008c849eeb3ccdfd5e33613f8a8b3ec6eb7145 3 00:35:26.575 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:35:26.575 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:35:26.575 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=dfee6f2b8aee4c4df382d16748008c849eeb3ccdfd5e33613f8a8b3ec6eb7145 00:35:26.575 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:35:26.575 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 
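The `gen_dhchap_key` calls above follow a fixed recipe: draw `len/2` random bytes with `xxd -p` from `/dev/urandom`, write the hex key to a `mktemp` file, and lock it down to mode 0600. A hedged sketch of just that mechanical part, under a hypothetical `gen_plain_key` name; the real helper additionally wraps the hex key in SPDK's `DHHC-1:<digest>:...:` envelope (the `format_dhchap_key` / `python -` step in the trace), which is omitted here:

```shell
#!/usr/bin/env bash
# Sketch of the key-material generation traced above (assumes xxd and a
# GNU-style mktemp, as used by the test itself). Does NOT produce the
# final DHHC-1-formatted key; only the raw hex secret and its 0600 file.
gen_plain_key() {
    local len=$1            # key length in hex characters, e.g. 32 or 64
    local key file
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
    file=$(mktemp -t spdk.key-null.XXX)
    echo "$key" > "$file"
    chmod 0600 "$file"      # secrets must not be group/world readable
    echo "$file"
}

keyfile=$(gen_plain_key 32)
echo "wrote $(( $(wc -c < "$keyfile") - 1 ))-char hex key to $keyfile"
```

The `keys[]`/`ckeys[]` arrays in `auth.sh` then collect these file paths, pairing each key with a controller key of a different digest for the bidirectional auth cases.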
00:35:26.575 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.HA3 00:35:26.575 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.HA3 00:35:26.575 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.HA3 00:35:26.575 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:35:26.575 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:35:26.575 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:26.575 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:35:26.575 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:35:26.575 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:35:26.575 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:35:26.575 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=787ad82ac2826cf239081e6d975af57a6d3e02d41c67fc13 00:35:26.575 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:35:26.575 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.BLA 00:35:26.575 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 787ad82ac2826cf239081e6d975af57a6d3e02d41c67fc13 0 00:35:26.575 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 787ad82ac2826cf239081e6d975af57a6d3e02d41c67fc13 0 00:35:26.575 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:35:26.575 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:35:26.575 12:03:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=787ad82ac2826cf239081e6d975af57a6d3e02d41c67fc13 00:35:26.575 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:35:26.575 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:35:26.575 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.BLA 00:35:26.575 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.BLA 00:35:26.575 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.BLA 00:35:26.575 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:35:26.575 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:35:26.575 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:26.575 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:35:26.575 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:35:26.575 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:35:26.575 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:35:26.575 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=1110b8fc8730838ebf599bf6e16c37319d8f09224ac7f529 00:35:26.575 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:35:26.575 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.eKW 00:35:26.575 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 1110b8fc8730838ebf599bf6e16c37319d8f09224ac7f529 2 00:35:26.575 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # 
format_key DHHC-1 1110b8fc8730838ebf599bf6e16c37319d8f09224ac7f529 2 00:35:26.575 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:35:26.575 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:35:26.576 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=1110b8fc8730838ebf599bf6e16c37319d8f09224ac7f529 00:35:26.576 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:35:26.576 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:35:26.576 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.eKW 00:35:26.576 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.eKW 00:35:26.576 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.eKW 00:35:26.576 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:35:26.576 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:35:26.576 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:26.576 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:35:26.576 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:35:26.576 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:35:26.576 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:26.576 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=70bcc07e511434703467edfb76b47258 00:35:26.576 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:35:26.576 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.NP4 00:35:26.576 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 70bcc07e511434703467edfb76b47258 1 00:35:26.576 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 70bcc07e511434703467edfb76b47258 1 00:35:26.576 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:35:26.576 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:35:26.576 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=70bcc07e511434703467edfb76b47258 00:35:26.576 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:35:26.576 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:35:26.576 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.NP4 00:35:26.576 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.NP4 00:35:26.576 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.NP4 00:35:26.576 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:35:26.576 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:35:26.576 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:26.576 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:35:26.576 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:35:26.576 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:35:26.576 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:26.576 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@755 -- # key=f540a730047dd7bc573541532f325789 00:35:26.576 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:35:26.576 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.6b8 00:35:26.576 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key f540a730047dd7bc573541532f325789 1 00:35:26.576 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 f540a730047dd7bc573541532f325789 1 00:35:26.576 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:35:26.576 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:35:26.576 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=f540a730047dd7bc573541532f325789 00:35:26.576 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:35:26.576 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:35:26.834 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.6b8 00:35:26.834 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.6b8 00:35:26.834 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.6b8 00:35:26.834 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:35:26.834 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:35:26.834 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:26.834 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:35:26.834 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:35:26.834 12:03:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:35:26.834 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:35:26.834 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ce5ab43c3c0b89f51e8a401a35661dcc3e38a5cc505711fd 00:35:26.834 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:35:26.834 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.qfy 00:35:26.834 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ce5ab43c3c0b89f51e8a401a35661dcc3e38a5cc505711fd 2 00:35:26.834 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ce5ab43c3c0b89f51e8a401a35661dcc3e38a5cc505711fd 2 00:35:26.834 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:35:26.834 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:35:26.834 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=ce5ab43c3c0b89f51e8a401a35661dcc3e38a5cc505711fd 00:35:26.834 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:35:26.834 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:35:26.834 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.qfy 00:35:26.834 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.qfy 00:35:26.834 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.qfy 00:35:26.834 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:35:26.834 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:35:26.834 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:26.834 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:35:26.834 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:35:26.834 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:35:26.834 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:26.834 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=5a997bce92eaa763c29ebad7db1c5811 00:35:26.834 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:35:26.834 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.gAP 00:35:26.834 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 5a997bce92eaa763c29ebad7db1c5811 0 00:35:26.834 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 5a997bce92eaa763c29ebad7db1c5811 0 00:35:26.834 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:35:26.834 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:35:26.834 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=5a997bce92eaa763c29ebad7db1c5811 00:35:26.834 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:35:26.834 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:35:26.834 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.gAP 00:35:26.834 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.gAP 00:35:26.834 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.gAP 00:35:26.834 12:03:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:35:26.834 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:35:26.834 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:26.834 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:35:26.834 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:35:26.834 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:35:26.834 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:35:26.834 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=3f3154fc2b9b418c20c3e8e43bad9673945383683420f5bd5e5196004dc72ed0 00:35:26.834 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:35:26.834 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Cwn 00:35:26.834 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 3f3154fc2b9b418c20c3e8e43bad9673945383683420f5bd5e5196004dc72ed0 3 00:35:26.834 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 3f3154fc2b9b418c20c3e8e43bad9673945383683420f5bd5e5196004dc72ed0 3 00:35:26.834 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:35:26.834 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:35:26.834 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=3f3154fc2b9b418c20c3e8e43bad9673945383683420f5bd5e5196004dc72ed0 00:35:26.834 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:35:26.834 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 
00:35:26.834 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Cwn 00:35:26.834 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Cwn 00:35:26.834 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.Cwn 00:35:26.834 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:35:26.834 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3119147 00:35:26.834 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 3119147 ']' 00:35:26.834 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:26.834 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:26.834 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:26.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
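Further down, `nvmet_auth_init` calls `configure_kernel_target` to stand up an in-kernel NVMe-oF target over configfs (the `mkdir`/`echo`/`ln -s` records around `nvmf/common.sh@686-705`). The xtrace output omits the redirection targets of each `echo`, so the attribute names below are inferred from the standard Linux `nvmet` configfs layout; paths and values are the ones in this log, and the whole sequence must run as root with `nvmet`/`nvmet-tcp` loaded:

```shell
# Sketch of the configure_kernel_target configfs sequence (attribute names
# inferred from the standard nvmet layout; the log's xtrace hides redirects).
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
echo 1 > "$subsys/attr_allow_any_host"                # later narrowed via allowed_hosts/
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1 > "$subsys/namespaces/1/enable"
echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"          # NVMF_INITIATOR_IP from the log
echo tcp      > "$nvmet/ports/1/addr_trtype"
echo 4420     > "$nvmet/ports/1/addr_trsvcid"
echo ipv4     > "$nvmet/ports/1/addr_adrfam"
ln -s "$subsys" "$nvmet/ports/1/subsystems/"          # expose the subsystem on the port
```

After this, `nvme discover -a 10.0.0.1 -t tcp -s 4420` returns the two-record discovery log seen below, and `host/auth.sh` restricts access by linking an `nqn.2024-02.io.spdk:host0` entry into the subsystem's `allowed_hosts/` directory.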
00:35:26.834 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:26.835 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.093 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:27.093 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:35:27.093 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:27.093 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.lhX 00:35:27.093 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.093 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.093 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.093 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.HA3 ]] 00:35:27.093 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.HA3 00:35:27.093 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.093 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.093 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.093 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:27.093 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.BLA 00:35:27.093 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.093 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:35:27.093 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.093 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.eKW ]] 00:35:27.093 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.eKW 00:35:27.093 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.093 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.093 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.093 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:27.093 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.NP4 00:35:27.093 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.093 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.093 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.093 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.6b8 ]] 00:35:27.093 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.6b8 00:35:27.093 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.093 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.093 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.093 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:27.093 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd 
keyring_file_add_key key3 /tmp/spdk.key-sha384.qfy 00:35:27.093 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.093 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.093 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.093 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.gAP ]] 00:35:27.093 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.gAP 00:35:27.093 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.093 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.093 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.093 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:27.093 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.Cwn 00:35:27.093 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.093 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.351 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.351 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:35:27.351 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:35:27.351 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:35:27.351 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:27.351 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:27.351 12:03:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:27.351 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:27.351 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:27.351 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:27.351 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:27.351 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:27.351 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:27.351 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:27.351 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:35:27.351 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:35:27.351 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:35:27.351 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:27.352 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:35:27.352 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:27.352 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:35:27.352 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:35:27.352 12:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:35:27.352 12:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:35:27.352 12:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:28.284 Waiting for block devices as requested 00:35:28.284 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:35:28.542 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:28.542 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:28.799 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:28.799 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:28.799 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:28.799 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:29.056 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:29.056 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:29.056 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:29.056 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:29.314 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:29.314 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:29.314 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:29.314 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:29.572 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:29.572 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:29.829 12:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:35:29.829 12:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:35:29.829 12:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:35:29.829 12:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:35:29.829 12:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e 
/sys/block/nvme0n1/queue/zoned ]] 00:35:29.829 12:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:35:29.829 12:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:35:29.829 12:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:35:29.829 12:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:35:29.829 No valid GPT data, bailing 00:35:30.087 12:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:35:30.087 12:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:35:30.087 12:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:35:30.087 12:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:35:30.087 12:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:35:30.087 12:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:30.087 12:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:35:30.087 12:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:35:30.087 12:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:35:30.087 12:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:35:30.087 12:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:35:30.087 12:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:35:30.087 12:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 
-- # echo 10.0.0.1 00:35:30.087 12:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:35:30.087 12:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:35:30.087 12:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:35:30.087 12:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:35:30.087 12:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:35:30.087 00:35:30.087 Discovery Log Number of Records 2, Generation counter 2 00:35:30.087 =====Discovery Log Entry 0====== 00:35:30.087 trtype: tcp 00:35:30.087 adrfam: ipv4 00:35:30.087 subtype: current discovery subsystem 00:35:30.087 treq: not specified, sq flow control disable supported 00:35:30.087 portid: 1 00:35:30.088 trsvcid: 4420 00:35:30.088 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:35:30.088 traddr: 10.0.0.1 00:35:30.088 eflags: none 00:35:30.088 sectype: none 00:35:30.088 =====Discovery Log Entry 1====== 00:35:30.088 trtype: tcp 00:35:30.088 adrfam: ipv4 00:35:30.088 subtype: nvme subsystem 00:35:30.088 treq: not specified, sq flow control disable supported 00:35:30.088 portid: 1 00:35:30.088 trsvcid: 4420 00:35:30.088 subnqn: nqn.2024-02.io.spdk:cnode0 00:35:30.088 traddr: 10.0.0.1 00:35:30.088 eflags: none 00:35:30.088 sectype: none 00:35:30.088 12:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:35:30.088 12:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:35:30.088 12:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:35:30.088 12:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:35:30.088 12:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:30.088 12:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:30.088 12:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:30.088 12:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:30.088 12:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Nzg3YWQ4MmFjMjgyNmNmMjM5MDgxZTZkOTc1YWY1N2E2ZDNlMDJkNDFjNjdmYzEzV24L6Q==: 00:35:30.088 12:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTExMGI4ZmM4NzMwODM4ZWJmNTk5YmY2ZTE2YzM3MzE5ZDhmMDkyMjRhYzdmNTI5JgzHvg==: 00:35:30.088 12:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:30.088 12:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:30.088 12:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Nzg3YWQ4MmFjMjgyNmNmMjM5MDgxZTZkOTc1YWY1N2E2ZDNlMDJkNDFjNjdmYzEzV24L6Q==: 00:35:30.088 12:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTExMGI4ZmM4NzMwODM4ZWJmNTk5YmY2ZTE2YzM3MzE5ZDhmMDkyMjRhYzdmNTI5JgzHvg==: ]] 00:35:30.088 12:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTExMGI4ZmM4NzMwODM4ZWJmNTk5YmY2ZTE2YzM3MzE5ZDhmMDkyMjRhYzdmNTI5JgzHvg==: 00:35:30.088 12:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:35:30.088 12:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:35:30.088 12:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:35:30.088 12:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:35:30.088 12:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:35:30.088 12:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:30.088 12:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:35:30.088 12:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:35:30.088 12:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:30.088 12:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:30.088 12:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:35:30.088 12:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:30.088 12:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.088 12:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:30.088 12:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:30.088 12:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:30.088 12:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:30.088 12:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:30.088 12:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:30.088 12:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:30.088 12:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:30.088 12:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:30.088 12:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:30.088 12:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:30.088 12:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:30.088 12:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:30.088 12:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:30.088 12:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.346 nvme0n1 00:35:30.346 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:30.346 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:30.346 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:30.346 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:30.346 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.346 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:30.346 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:30.346 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:30.346 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:35:30.346 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.346 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:30.346 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:35:30.346 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:30.346 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:30.346 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:35:30.346 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:30.346 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:30.346 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:30.346 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:30.346 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGE2MmIzMDI4MWUyYjk3MDcyOTYwOGYwODg4Mzg3MjZJ2cL2: 00:35:30.346 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGZlZTZmMmI4YWVlNGM0ZGYzODJkMTY3NDgwMDhjODQ5ZWViM2NjZGZkNWUzMzYxM2Y4YThiM2VjNmViNzE0NX75PNQ=: 00:35:30.346 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:30.346 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:30.346 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGE2MmIzMDI4MWUyYjk3MDcyOTYwOGYwODg4Mzg3MjZJ2cL2: 00:35:30.346 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGZlZTZmMmI4YWVlNGM0ZGYzODJkMTY3NDgwMDhjODQ5ZWViM2NjZGZkNWUzMzYxM2Y4YThiM2VjNmViNzE0NX75PNQ=: ]] 00:35:30.346 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZGZlZTZmMmI4YWVlNGM0ZGYzODJkMTY3NDgwMDhjODQ5ZWViM2NjZGZkNWUzMzYxM2Y4YThiM2VjNmViNzE0NX75PNQ=: 00:35:30.346 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:35:30.346 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:30.346 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:30.346 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:30.346 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:30.346 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:30.346 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:30.346 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:30.346 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.346 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:30.346 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:30.346 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:30.347 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:30.347 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:30.347 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:30.347 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:30.347 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
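The `DHHC-1:xx:…:` strings echoed above are NVMe in-band authentication secrets in the standard DH-HMAC-CHAP key format: a two-digit subtype (`00` means the secret is used as-is; `01`–`03` mean it was transformed with SHA-256/384/512), then the base64 encoding of the secret with a 4-byte CRC-32 appended. A minimal sketch that unpacks key1 as it appears in the first connect above — `decode_dhchap_key` is a hypothetical helper for illustration, not part of `host/auth.sh`:

```shell
# Unpack a DHHC-1 secret string and report its layout.
# decode_dhchap_key is a made-up helper, not part of the test suite.
decode_dhchap_key() {
    key=$1
    subtype=$(printf %s "$key" | cut -d: -f2)   # 00 = plain; 01-03 = SHA-transformed
    b64=$(printf %s "$key" | cut -d: -f3)       # base64(secret || CRC-32(secret))
    total=$(printf %s "$b64" | base64 -d | wc -c)
    secret_len=$((total - 4))                   # the last 4 decoded bytes are the CRC-32
    echo "subtype=$subtype secret_bytes=$secret_len"
}

# key1 exactly as echoed by nvmet_auth_set_key in the trace:
decode_dhchap_key "DHHC-1:00:Nzg3YWQ4MmFjMjgyNmNmMjM5MDgxZTZkOTc1YWY1N2E2ZDNlMDJkNDFjNjdmYzEzV24L6Q==:"
# → subtype=00 secret_bytes=48
```

The 48-byte payload here is an ASCII hex string, which is how generated DH-CHAP secrets are commonly represented before base64-wrapping.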
00:35:30.347 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:30.347 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:30.347 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:30.347 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:30.347 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:30.347 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:30.347 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.605 nvme0n1 00:35:30.605 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:30.605 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:30.605 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:30.605 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.605 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:30.605 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:30.605 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:30.605 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:30.605 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:30.605 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.605 12:03:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:30.605 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:30.605 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:35:30.605 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:30.605 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:30.605 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:30.605 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:30.605 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Nzg3YWQ4MmFjMjgyNmNmMjM5MDgxZTZkOTc1YWY1N2E2ZDNlMDJkNDFjNjdmYzEzV24L6Q==: 00:35:30.605 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTExMGI4ZmM4NzMwODM4ZWJmNTk5YmY2ZTE2YzM3MzE5ZDhmMDkyMjRhYzdmNTI5JgzHvg==: 00:35:30.605 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:30.605 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:30.605 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Nzg3YWQ4MmFjMjgyNmNmMjM5MDgxZTZkOTc1YWY1N2E2ZDNlMDJkNDFjNjdmYzEzV24L6Q==: 00:35:30.605 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTExMGI4ZmM4NzMwODM4ZWJmNTk5YmY2ZTE2YzM3MzE5ZDhmMDkyMjRhYzdmNTI5JgzHvg==: ]] 00:35:30.605 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTExMGI4ZmM4NzMwODM4ZWJmNTk5YmY2ZTE2YzM3MzE5ZDhmMDkyMjRhYzdmNTI5JgzHvg==: 00:35:30.605 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:35:30.605 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:30.605 
12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:30.605 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:30.605 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:30.605 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:30.605 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:30.605 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:30.605 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.605 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:30.605 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:30.605 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:30.605 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:30.605 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:30.605 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:30.605 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:30.605 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:30.605 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:30.605 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:30.605 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:30.605 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:30.605 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:30.605 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:30.605 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.605 nvme0n1 00:35:30.605 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:30.605 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:30.605 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:30.605 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.605 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:30.605 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:30.864 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:30.864 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:30.864 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:30.864 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.864 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:30.864 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:30.864 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:35:30.864 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:30.864 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:30.864 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:30.864 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:30.864 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzBiY2MwN2U1MTE0MzQ3MDM0NjdlZGZiNzZiNDcyNThNwJ0P: 00:35:30.864 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjU0MGE3MzAwNDdkZDdiYzU3MzU0MTUzMmYzMjU3ODmZYVRj: 00:35:30.864 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:30.864 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:30.864 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzBiY2MwN2U1MTE0MzQ3MDM0NjdlZGZiNzZiNDcyNThNwJ0P: 00:35:30.864 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjU0MGE3MzAwNDdkZDdiYzU3MzU0MTUzMmYzMjU3ODmZYVRj: ]] 00:35:30.864 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjU0MGE3MzAwNDdkZDdiYzU3MzU0MTUzMmYzMjU3ODmZYVRj: 00:35:30.864 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:35:30.864 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:30.864 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:30.864 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:30.864 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:30.864 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:30.864 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:30.864 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:30.864 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.864 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:30.864 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:30.864 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:30.864 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:30.864 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:30.864 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:30.864 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:30.864 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:30.864 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:30.864 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:30.864 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:30.864 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:30.864 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:30.864 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:30.864 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:35:30.864 nvme0n1 00:35:30.864 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:30.864 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:30.864 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:30.864 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.864 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:30.864 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:30.864 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:30.864 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:30.864 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:30.864 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.123 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:31.123 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:31.123 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:35:31.123 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:31.123 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:31.123 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:31.123 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:31.123 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:Y2U1YWI0M2MzYzBiODlmNTFlOGE0MDFhMzU2NjFkY2MzZTM4YTVjYzUwNTcxMWZkdH91EQ==: 00:35:31.123 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWE5OTdiY2U5MmVhYTc2M2MyOWViYWQ3ZGIxYzU4MTHuLznb: 00:35:31.123 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:31.123 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:31.123 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2U1YWI0M2MzYzBiODlmNTFlOGE0MDFhMzU2NjFkY2MzZTM4YTVjYzUwNTcxMWZkdH91EQ==: 00:35:31.123 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWE5OTdiY2U5MmVhYTc2M2MyOWViYWQ3ZGIxYzU4MTHuLznb: ]] 00:35:31.123 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWE5OTdiY2U5MmVhYTc2M2MyOWViYWQ3ZGIxYzU4MTHuLznb: 00:35:31.123 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:35:31.123 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:31.123 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:31.123 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:31.123 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:31.123 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:31.123 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:31.123 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:31.123 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.123 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:31.123 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:31.123 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:31.123 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:31.123 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:31.123 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:31.123 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:31.123 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:31.123 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:31.123 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:31.123 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:31.123 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:31.123 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:31.123 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:31.123 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.123 nvme0n1 00:35:31.123 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:31.123 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:31.123 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 
-- # xtrace_disable 00:35:31.123 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:31.123 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.123 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:31.123 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:31.123 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:31.123 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:31.123 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.123 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:31.123 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:31.123 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:35:31.123 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:31.123 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:31.123 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:31.123 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:31.123 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2YzMTU0ZmMyYjliNDE4YzIwYzNlOGU0M2JhZDk2NzM5NDUzODM2ODM0MjBmNWJkNWU1MTk2MDA0ZGM3MmVkMK0NDKA=: 00:35:31.123 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:31.123 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:31.124 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:31.124 12:03:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2YzMTU0ZmMyYjliNDE4YzIwYzNlOGU0M2JhZDk2NzM5NDUzODM2ODM0MjBmNWJkNWU1MTk2MDA0ZGM3MmVkMK0NDKA=: 00:35:31.124 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:31.124 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:35:31.124 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:31.124 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:31.124 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:31.124 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:31.124 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:31.124 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:31.124 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:31.124 12:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.124 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:31.124 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:31.124 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:31.382 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:31.382 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:31.382 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:31.382 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:31.382 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:31.382 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:31.382 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:31.382 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:31.382 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:31.382 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:31.382 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:31.382 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.382 nvme0n1 00:35:31.382 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:31.382 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:31.382 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:31.382 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:31.382 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.382 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:31.383 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:31.383 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:31.383 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:31.383 
12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.383 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:31.383 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:31.383 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:31.383 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:35:31.383 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:31.383 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:31.383 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:31.383 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:31.383 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGE2MmIzMDI4MWUyYjk3MDcyOTYwOGYwODg4Mzg3MjZJ2cL2: 00:35:31.383 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGZlZTZmMmI4YWVlNGM0ZGYzODJkMTY3NDgwMDhjODQ5ZWViM2NjZGZkNWUzMzYxM2Y4YThiM2VjNmViNzE0NX75PNQ=: 00:35:31.383 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:31.383 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:31.383 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGE2MmIzMDI4MWUyYjk3MDcyOTYwOGYwODg4Mzg3MjZJ2cL2: 00:35:31.383 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGZlZTZmMmI4YWVlNGM0ZGYzODJkMTY3NDgwMDhjODQ5ZWViM2NjZGZkNWUzMzYxM2Y4YThiM2VjNmViNzE0NX75PNQ=: ]] 00:35:31.383 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGZlZTZmMmI4YWVlNGM0ZGYzODJkMTY3NDgwMDhjODQ5ZWViM2NjZGZkNWUzMzYxM2Y4YThiM2VjNmViNzE0NX75PNQ=: 00:35:31.383 
12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:35:31.383 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:31.383 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:31.383 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:31.383 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:31.383 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:31.383 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:31.383 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:31.383 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.383 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:31.383 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:31.383 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:31.383 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:31.383 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:31.383 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:31.383 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:31.383 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:31.383 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:31.383 12:03:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:31.383 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:31.383 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:31.383 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:31.383 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:31.383 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.641 nvme0n1 00:35:31.641 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:31.641 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:31.641 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:31.641 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:31.641 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.641 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:31.641 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:31.641 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:31.641 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:31.641 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.641 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:31.641 12:03:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:31.641 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:35:31.641 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:31.641 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:31.641 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:31.641 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:31.641 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Nzg3YWQ4MmFjMjgyNmNmMjM5MDgxZTZkOTc1YWY1N2E2ZDNlMDJkNDFjNjdmYzEzV24L6Q==: 00:35:31.641 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTExMGI4ZmM4NzMwODM4ZWJmNTk5YmY2ZTE2YzM3MzE5ZDhmMDkyMjRhYzdmNTI5JgzHvg==: 00:35:31.641 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:31.641 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:31.641 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Nzg3YWQ4MmFjMjgyNmNmMjM5MDgxZTZkOTc1YWY1N2E2ZDNlMDJkNDFjNjdmYzEzV24L6Q==: 00:35:31.642 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTExMGI4ZmM4NzMwODM4ZWJmNTk5YmY2ZTE2YzM3MzE5ZDhmMDkyMjRhYzdmNTI5JgzHvg==: ]] 00:35:31.642 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTExMGI4ZmM4NzMwODM4ZWJmNTk5YmY2ZTE2YzM3MzE5ZDhmMDkyMjRhYzdmNTI5JgzHvg==: 00:35:31.642 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:35:31.642 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:31.642 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:31.642 12:03:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:31.642 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:31.642 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:31.642 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:31.642 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:31.642 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.642 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:31.642 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:31.642 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:31.642 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:31.642 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:31.642 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:31.642 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:31.642 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:31.642 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:31.642 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:31.642 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:31.642 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:31.642 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:31.642 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:31.642 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.900 nvme0n1 00:35:31.900 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:31.900 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:31.900 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:31.900 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.901 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:31.901 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:31.901 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:31.901 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:31.901 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:31.901 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.901 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:31.901 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:31.901 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:35:31.901 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:31.901 12:03:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:31.901 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:31.901 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:31.901 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzBiY2MwN2U1MTE0MzQ3MDM0NjdlZGZiNzZiNDcyNThNwJ0P: 00:35:31.901 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjU0MGE3MzAwNDdkZDdiYzU3MzU0MTUzMmYzMjU3ODmZYVRj: 00:35:31.901 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:31.901 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:31.901 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzBiY2MwN2U1MTE0MzQ3MDM0NjdlZGZiNzZiNDcyNThNwJ0P: 00:35:31.901 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjU0MGE3MzAwNDdkZDdiYzU3MzU0MTUzMmYzMjU3ODmZYVRj: ]] 00:35:31.901 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjU0MGE3MzAwNDdkZDdiYzU3MzU0MTUzMmYzMjU3ODmZYVRj: 00:35:31.901 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:35:31.901 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:31.901 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:31.901 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:31.901 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:31.901 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:31.901 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 
00:35:31.901 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:31.901 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.901 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:31.901 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:31.901 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:31.901 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:31.901 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:31.901 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:31.901 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:31.901 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:31.901 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:31.901 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:31.901 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:31.901 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:31.901 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:31.901 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:31.901 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.160 nvme0n1 00:35:32.160 12:03:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.160 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:32.160 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.160 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:32.160 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.160 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.160 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:32.160 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:32.160 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.160 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.160 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.160 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:32.160 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:35:32.160 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:32.160 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:32.160 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:32.160 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:32.160 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2U1YWI0M2MzYzBiODlmNTFlOGE0MDFhMzU2NjFkY2MzZTM4YTVjYzUwNTcxMWZkdH91EQ==: 00:35:32.160 12:03:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWE5OTdiY2U5MmVhYTc2M2MyOWViYWQ3ZGIxYzU4MTHuLznb: 00:35:32.160 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:32.160 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:32.160 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2U1YWI0M2MzYzBiODlmNTFlOGE0MDFhMzU2NjFkY2MzZTM4YTVjYzUwNTcxMWZkdH91EQ==: 00:35:32.160 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWE5OTdiY2U5MmVhYTc2M2MyOWViYWQ3ZGIxYzU4MTHuLznb: ]] 00:35:32.160 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWE5OTdiY2U5MmVhYTc2M2MyOWViYWQ3ZGIxYzU4MTHuLznb: 00:35:32.160 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:35:32.160 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:32.160 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:32.160 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:32.160 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:32.160 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:32.160 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:32.160 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.160 12:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.160 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.160 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
get_main_ns_ip 00:35:32.160 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:32.160 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:32.160 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:32.160 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:32.160 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:32.160 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:32.160 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:32.160 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:32.160 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:32.160 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:32.160 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:32.160 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.160 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.419 nvme0n1 00:35:32.419 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.419 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:32.419 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:32.419 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:35:32.419 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.419 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.419 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:32.419 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:32.419 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.419 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.419 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.419 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:32.419 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:35:32.419 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:32.419 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:32.419 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:32.419 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:32.419 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2YzMTU0ZmMyYjliNDE4YzIwYzNlOGU0M2JhZDk2NzM5NDUzODM2ODM0MjBmNWJkNWU1MTk2MDA0ZGM3MmVkMK0NDKA=: 00:35:32.419 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:32.419 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:32.419 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:32.419 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:M2YzMTU0ZmMyYjliNDE4YzIwYzNlOGU0M2JhZDk2NzM5NDUzODM2ODM0MjBmNWJkNWU1MTk2MDA0ZGM3MmVkMK0NDKA=: 00:35:32.419 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:32.419 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:35:32.419 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:32.419 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:32.419 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:32.419 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:32.419 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:32.419 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:32.419 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.419 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.419 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.419 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:32.419 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:32.419 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:32.419 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:32.419 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:32.419 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:32.419 12:03:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:32.419 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:32.419 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:32.419 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:32.419 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:32.419 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:32.419 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.419 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.678 nvme0n1 00:35:32.678 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.678 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:32.678 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.678 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.678 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:32.678 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.678 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:32.678 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:32.678 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.678 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:35:32.678 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.678 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:32.678 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:32.678 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:35:32.678 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:32.678 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:32.678 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:32.678 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:32.678 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGE2MmIzMDI4MWUyYjk3MDcyOTYwOGYwODg4Mzg3MjZJ2cL2: 00:35:32.678 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGZlZTZmMmI4YWVlNGM0ZGYzODJkMTY3NDgwMDhjODQ5ZWViM2NjZGZkNWUzMzYxM2Y4YThiM2VjNmViNzE0NX75PNQ=: 00:35:32.678 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:32.678 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:32.678 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGE2MmIzMDI4MWUyYjk3MDcyOTYwOGYwODg4Mzg3MjZJ2cL2: 00:35:32.678 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGZlZTZmMmI4YWVlNGM0ZGYzODJkMTY3NDgwMDhjODQ5ZWViM2NjZGZkNWUzMzYxM2Y4YThiM2VjNmViNzE0NX75PNQ=: ]] 00:35:32.678 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGZlZTZmMmI4YWVlNGM0ZGYzODJkMTY3NDgwMDhjODQ5ZWViM2NjZGZkNWUzMzYxM2Y4YThiM2VjNmViNzE0NX75PNQ=: 00:35:32.678 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:35:32.678 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:32.678 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:32.678 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:32.678 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:32.678 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:32.678 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:32.678 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.678 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.678 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.678 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:32.678 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:32.678 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:32.678 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:32.678 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:32.678 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:32.678 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:32.678 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:32.678 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:35:32.678 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:32.678 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:32.678 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:32.678 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.678 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.244 nvme0n1 00:35:33.244 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.244 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:33.244 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.244 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.244 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:33.244 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.244 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:33.244 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:33.244 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.244 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.244 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.244 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 
00:35:33.244 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:35:33.244 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:33.244 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:33.244 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:33.244 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:33.244 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Nzg3YWQ4MmFjMjgyNmNmMjM5MDgxZTZkOTc1YWY1N2E2ZDNlMDJkNDFjNjdmYzEzV24L6Q==: 00:35:33.244 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTExMGI4ZmM4NzMwODM4ZWJmNTk5YmY2ZTE2YzM3MzE5ZDhmMDkyMjRhYzdmNTI5JgzHvg==: 00:35:33.244 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:33.244 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:33.244 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Nzg3YWQ4MmFjMjgyNmNmMjM5MDgxZTZkOTc1YWY1N2E2ZDNlMDJkNDFjNjdmYzEzV24L6Q==: 00:35:33.244 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTExMGI4ZmM4NzMwODM4ZWJmNTk5YmY2ZTE2YzM3MzE5ZDhmMDkyMjRhYzdmNTI5JgzHvg==: ]] 00:35:33.244 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTExMGI4ZmM4NzMwODM4ZWJmNTk5YmY2ZTE2YzM3MzE5ZDhmMDkyMjRhYzdmNTI5JgzHvg==: 00:35:33.244 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:35:33.244 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:33.244 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:33.244 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:33.244 
12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:35:33.244 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:33.244 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:35:33.244 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:33.244 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:33.244 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:33.244 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:33.244 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:33.244 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:33.244 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:33.244 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:33.244 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:33.244 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:35:33.244 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:33.244 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:35:33.244 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:35:33.244 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:35:33.244 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:35:33.244 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:33.244 12:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:33.502 nvme0n1
00:35:33.502 12:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:33.502 12:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:33.502 12:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:33.502 12:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:33.502 12:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:33.502 12:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:33.502 12:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:33.502 12:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:33.502 12:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:33.502 12:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:33.502 12:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:33.502 12:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:33.502 12:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2
00:35:33.502 12:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:33.502 12:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:35:33.502 12:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:35:33.502 12:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:35:33.502 12:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzBiY2MwN2U1MTE0MzQ3MDM0NjdlZGZiNzZiNDcyNThNwJ0P:
00:35:33.502 12:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjU0MGE3MzAwNDdkZDdiYzU3MzU0MTUzMmYzMjU3ODmZYVRj:
00:35:33.502 12:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:35:33.502 12:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:35:33.502 12:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzBiY2MwN2U1MTE0MzQ3MDM0NjdlZGZiNzZiNDcyNThNwJ0P:
00:35:33.502 12:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjU0MGE3MzAwNDdkZDdiYzU3MzU0MTUzMmYzMjU3ODmZYVRj: ]]
00:35:33.502 12:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjU0MGE3MzAwNDdkZDdiYzU3MzU0MTUzMmYzMjU3ODmZYVRj:
00:35:33.502 12:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2
00:35:33.502 12:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:33.502 12:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:35:33.502 12:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:35:33.502 12:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:35:33.502 12:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:33.502 12:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:35:33.502 12:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:33.502 12:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:33.502 12:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:33.502 12:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:33.502 12:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:33.502 12:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:33.502 12:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:33.502 12:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:33.502 12:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:33.502 12:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:35:33.502 12:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:33.502 12:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:35:33.502 12:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:35:33.502 12:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:35:33.502 12:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:35:33.502 12:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:33.502 12:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:33.759 nvme0n1
00:35:33.759 12:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:33.759 12:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:33.759 12:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:33.759 12:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:33.759 12:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:33.759 12:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:33.759 12:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:33.759 12:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:33.759 12:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:33.759 12:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:33.759 12:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:33.759 12:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:33.759 12:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3
00:35:33.759 12:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:33.759 12:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:35:33.759 12:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:35:33.759 12:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:35:33.759 12:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2U1YWI0M2MzYzBiODlmNTFlOGE0MDFhMzU2NjFkY2MzZTM4YTVjYzUwNTcxMWZkdH91EQ==:
00:35:33.760 12:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWE5OTdiY2U5MmVhYTc2M2MyOWViYWQ3ZGIxYzU4MTHuLznb:
00:35:33.760 12:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:35:33.760 12:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:35:33.760 12:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2U1YWI0M2MzYzBiODlmNTFlOGE0MDFhMzU2NjFkY2MzZTM4YTVjYzUwNTcxMWZkdH91EQ==:
00:35:33.760 12:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWE5OTdiY2U5MmVhYTc2M2MyOWViYWQ3ZGIxYzU4MTHuLznb: ]]
00:35:33.760 12:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWE5OTdiY2U5MmVhYTc2M2MyOWViYWQ3ZGIxYzU4MTHuLznb:
00:35:33.760 12:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3
00:35:33.760 12:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:33.760 12:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:35:33.760 12:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:35:33.760 12:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:35:33.760 12:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:33.760 12:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:35:33.760 12:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:33.760 12:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:34.018 12:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:34.018 12:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:34.018 12:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:34.018 12:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:34.018 12:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:34.018 12:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:34.018 12:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:34.018 12:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:35:34.018 12:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:34.018 12:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:35:34.018 12:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:35:34.018 12:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:35:34.018 12:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:35:34.018 12:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:34.018 12:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:34.277 nvme0n1
00:35:34.277 12:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:34.277 12:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:34.277 12:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:34.277 12:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:34.277 12:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:34.277 12:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:34.277 12:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:34.277 12:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:34.277 12:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:34.277 12:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:34.277 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:34.277 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:34.277 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4
00:35:34.277 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:34.277 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:35:34.277 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:35:34.277 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:35:34.277 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2YzMTU0ZmMyYjliNDE4YzIwYzNlOGU0M2JhZDk2NzM5NDUzODM2ODM0MjBmNWJkNWU1MTk2MDA0ZGM3MmVkMK0NDKA=:
00:35:34.277 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:35:34.277 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:35:34.277 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:35:34.277 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2YzMTU0ZmMyYjliNDE4YzIwYzNlOGU0M2JhZDk2NzM5NDUzODM2ODM0MjBmNWJkNWU1MTk2MDA0ZGM3MmVkMK0NDKA=:
00:35:34.277 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:35:34.277 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4
00:35:34.277 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:34.277 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:35:34.277 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:35:34.277 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:35:34.277 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:34.277 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:35:34.277 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:34.277 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:34.277 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:34.277 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:34.277 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:34.277 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:34.277 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:34.277 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:34.277 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:34.277 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:35:34.277 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:34.277 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:35:34.277 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:35:34.277 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:35:34.277 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:35:34.277 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:34.277 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:34.536 nvme0n1
00:35:34.536 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:34.536 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:34.536 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:34.536 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:34.536 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:34.536 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:34.536 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:34.536 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:34.536 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:34.536 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:34.536 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:34.536 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:35:34.536 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:34.536 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0
00:35:34.536 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:34.536 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:35:34.536 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:35:34.536 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:35:34.536 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGE2MmIzMDI4MWUyYjk3MDcyOTYwOGYwODg4Mzg3MjZJ2cL2:
00:35:34.536 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGZlZTZmMmI4YWVlNGM0ZGYzODJkMTY3NDgwMDhjODQ5ZWViM2NjZGZkNWUzMzYxM2Y4YThiM2VjNmViNzE0NX75PNQ=:
00:35:34.536 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:35:34.536 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:35:34.536 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGE2MmIzMDI4MWUyYjk3MDcyOTYwOGYwODg4Mzg3MjZJ2cL2:
00:35:34.536 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGZlZTZmMmI4YWVlNGM0ZGYzODJkMTY3NDgwMDhjODQ5ZWViM2NjZGZkNWUzMzYxM2Y4YThiM2VjNmViNzE0NX75PNQ=: ]]
00:35:34.536 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGZlZTZmMmI4YWVlNGM0ZGYzODJkMTY3NDgwMDhjODQ5ZWViM2NjZGZkNWUzMzYxM2Y4YThiM2VjNmViNzE0NX75PNQ=:
00:35:34.536 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0
00:35:34.536 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:34.536 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:35:34.536 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:35:34.536 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:35:34.536 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:34.536 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:35:34.536 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:34.536 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:34.536 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:34.536 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:34.536 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:34.536 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:34.536 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:34.536 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:34.536 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:34.536 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:35:34.536 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:34.536 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:35:34.536 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:35:34.536 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:35:34.536 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:35:34.536 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:34.536 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:35.102 nvme0n1
00:35:35.102 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:35.102 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:35.102 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:35.102 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:35.102 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:35.102 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:35.102 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:35.102 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:35.102 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:35.102 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:35.102 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:35.102 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:35.102 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1
00:35:35.102 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:35.102 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:35:35.102 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:35:35.102 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:35:35.102 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Nzg3YWQ4MmFjMjgyNmNmMjM5MDgxZTZkOTc1YWY1N2E2ZDNlMDJkNDFjNjdmYzEzV24L6Q==:
00:35:35.102 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTExMGI4ZmM4NzMwODM4ZWJmNTk5YmY2ZTE2YzM3MzE5ZDhmMDkyMjRhYzdmNTI5JgzHvg==:
00:35:35.102 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:35:35.102 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:35:35.102 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Nzg3YWQ4MmFjMjgyNmNmMjM5MDgxZTZkOTc1YWY1N2E2ZDNlMDJkNDFjNjdmYzEzV24L6Q==:
00:35:35.102 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTExMGI4ZmM4NzMwODM4ZWJmNTk5YmY2ZTE2YzM3MzE5ZDhmMDkyMjRhYzdmNTI5JgzHvg==: ]]
00:35:35.102 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTExMGI4ZmM4NzMwODM4ZWJmNTk5YmY2ZTE2YzM3MzE5ZDhmMDkyMjRhYzdmNTI5JgzHvg==:
00:35:35.102 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1
00:35:35.102 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:35.102 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:35:35.102 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:35:35.102 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:35:35.102 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:35.360 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:35:35.360 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:35.360 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:35.360 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:35.360 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:35.360 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:35.360 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:35.360 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:35.360 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:35.360 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:35.360 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:35:35.360 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:35.360 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:35:35.360 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:35:35.360 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:35:35.360 12:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:35:35.360 12:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:35.360 12:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:35.933 nvme0n1
00:35:35.933 12:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:35.933 12:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:35.933 12:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:35.933 12:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:35.933 12:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:35.933 12:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:35.933 12:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:35.933 12:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:35.933 12:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:35.933 12:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:35.933 12:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:35.933 12:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:35.933 12:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2
00:35:35.933 12:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:35.933 12:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:35:35.933 12:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:35:35.933 12:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:35:35.933 12:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzBiY2MwN2U1MTE0MzQ3MDM0NjdlZGZiNzZiNDcyNThNwJ0P:
00:35:35.934 12:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjU0MGE3MzAwNDdkZDdiYzU3MzU0MTUzMmYzMjU3ODmZYVRj:
00:35:35.934 12:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:35:35.934 12:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:35:35.934 12:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzBiY2MwN2U1MTE0MzQ3MDM0NjdlZGZiNzZiNDcyNThNwJ0P:
00:35:35.934 12:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjU0MGE3MzAwNDdkZDdiYzU3MzU0MTUzMmYzMjU3ODmZYVRj: ]]
00:35:35.934 12:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjU0MGE3MzAwNDdkZDdiYzU3MzU0MTUzMmYzMjU3ODmZYVRj:
00:35:35.934 12:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2
00:35:35.934 12:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:35.934 12:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:35:35.934 12:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:35:35.934 12:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:35:35.934 12:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:35.934 12:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:35:35.934 12:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:35.934 12:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:35.934 12:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:35.934 12:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:35.934 12:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:35.934 12:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:35.934 12:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:35.934 12:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:35.934 12:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:35.934 12:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:35:35.934 12:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:35.934 12:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:35:35.934 12:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:35:35.934 12:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:35:35.934 12:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:35:35.934 12:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:35.934 12:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:36.499 nvme0n1
00:35:36.499 12:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:36.499 12:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:36.499 12:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:36.499 12:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:36.499 12:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:36.499 12:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:36.499 12:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:36.499 12:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:36.499 12:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:36.499 12:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:36.499 12:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:36.499 12:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:36.499 12:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3
00:35:36.499 12:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:36.499 12:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:35:36.499 12:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:35:36.499 12:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:35:36.499 12:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2U1YWI0M2MzYzBiODlmNTFlOGE0MDFhMzU2NjFkY2MzZTM4YTVjYzUwNTcxMWZkdH91EQ==:
00:35:36.499 12:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWE5OTdiY2U5MmVhYTc2M2MyOWViYWQ3ZGIxYzU4MTHuLznb:
00:35:36.499 12:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:35:36.499 12:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:35:36.499 12:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2U1YWI0M2MzYzBiODlmNTFlOGE0MDFhMzU2NjFkY2MzZTM4YTVjYzUwNTcxMWZkdH91EQ==:
00:35:36.499 12:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWE5OTdiY2U5MmVhYTc2M2MyOWViYWQ3ZGIxYzU4MTHuLznb: ]]
00:35:36.499 12:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWE5OTdiY2U5MmVhYTc2M2MyOWViYWQ3ZGIxYzU4MTHuLznb:
00:35:36.499 12:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3
00:35:36.499 12:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:36.499 12:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:35:36.499 12:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:35:36.499 12:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:35:36.499 12:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:36.499 12:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:35:36.499 12:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:36.499 12:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:36.499 12:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:36.499 12:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:36.499 12:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:36.499 12:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:36.499 12:04:02
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:36.499 12:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:36.499 12:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:36.499 12:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:36.499 12:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:36.499 12:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:36.499 12:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:36.499 12:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:36.499 12:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:36.499 12:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:36.499 12:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.064 nvme0n1 00:35:37.064 12:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.064 12:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:37.064 12:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.064 12:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:37.064 12:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.064 12:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.064 12:04:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:37.064 12:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:37.064 12:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.064 12:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.064 12:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.064 12:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:37.064 12:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:35:37.064 12:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:37.064 12:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:37.064 12:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:37.064 12:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:37.064 12:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2YzMTU0ZmMyYjliNDE4YzIwYzNlOGU0M2JhZDk2NzM5NDUzODM2ODM0MjBmNWJkNWU1MTk2MDA0ZGM3MmVkMK0NDKA=: 00:35:37.064 12:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:37.064 12:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:37.064 12:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:37.064 12:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2YzMTU0ZmMyYjliNDE4YzIwYzNlOGU0M2JhZDk2NzM5NDUzODM2ODM0MjBmNWJkNWU1MTk2MDA0ZGM3MmVkMK0NDKA=: 00:35:37.064 12:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:37.064 12:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha256 ffdhe6144 4 00:35:37.064 12:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:37.064 12:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:37.064 12:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:37.064 12:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:37.064 12:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:37.064 12:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:37.064 12:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.064 12:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.064 12:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.064 12:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:37.064 12:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:37.064 12:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:37.064 12:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:37.064 12:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:37.064 12:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:37.064 12:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:37.064 12:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:37.064 12:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:37.064 12:04:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:37.064 12:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:37.064 12:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:37.064 12:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.064 12:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.628 nvme0n1 00:35:37.628 12:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.628 12:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:37.628 12:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.628 12:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.628 12:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:37.628 12:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.628 12:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:37.628 12:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:37.628 12:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.628 12:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.628 12:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.628 12:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:37.628 12:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
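The `get_main_ns_ip` expansion repeated throughout the trace (nvmf/common.sh@769-783) maps the transport to the *name* of an environment variable via an associative array, then dereferences it. A standalone sketch; the initiator address is the one visible in the log, the RDMA address and the `${!ip}` indirection are assumptions inferred from the `[[ -z 10.0.0.1 ]]` test that follows the `ip=NVMF_INITIATOR_IP` assignment:

```shell
#!/usr/bin/env bash
NVMF_INITIATOR_IP=10.0.0.1      # value taken from the trace
NVMF_FIRST_TARGET_IP=10.0.0.2   # assumption; not shown in this excerpt

get_main_ns_ip() {
    local ip
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP
    local transport=${1:-tcp}
    [[ -z $transport ]] && return 1
    [[ -z ${ip_candidates[$transport]} ]] && return 1
    ip=${ip_candidates[$transport]}   # a variable *name*, e.g. NVMF_INITIATOR_IP
    [[ -z ${!ip} ]] && return 1       # indirect expansion to the actual address
    echo "${!ip}"
}
```

Because the test transport is tcp, every invocation in this log resolves to `NVMF_INITIATOR_IP` and echoes 10.0.0.1, which then feeds the `-a` argument of `bdev_nvme_attach_controller`.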
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:37.628 12:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:35:37.628 12:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:37.628 12:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:37.628 12:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:37.628 12:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:37.628 12:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGE2MmIzMDI4MWUyYjk3MDcyOTYwOGYwODg4Mzg3MjZJ2cL2: 00:35:37.628 12:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGZlZTZmMmI4YWVlNGM0ZGYzODJkMTY3NDgwMDhjODQ5ZWViM2NjZGZkNWUzMzYxM2Y4YThiM2VjNmViNzE0NX75PNQ=: 00:35:37.628 12:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:37.628 12:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:37.628 12:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGE2MmIzMDI4MWUyYjk3MDcyOTYwOGYwODg4Mzg3MjZJ2cL2: 00:35:37.628 12:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGZlZTZmMmI4YWVlNGM0ZGYzODJkMTY3NDgwMDhjODQ5ZWViM2NjZGZkNWUzMzYxM2Y4YThiM2VjNmViNzE0NX75PNQ=: ]] 00:35:37.628 12:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGZlZTZmMmI4YWVlNGM0ZGYzODJkMTY3NDgwMDhjODQ5ZWViM2NjZGZkNWUzMzYxM2Y4YThiM2VjNmViNzE0NX75PNQ=: 00:35:37.628 12:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:35:37.628 12:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:37.628 12:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:37.628 12:04:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:37.628 12:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:37.628 12:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:37.628 12:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:37.628 12:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.628 12:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.628 12:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.628 12:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:37.628 12:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:37.628 12:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:37.628 12:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:37.628 12:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:37.628 12:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:37.628 12:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:37.628 12:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:37.628 12:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:37.628 12:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:37.628 12:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:37.628 12:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:37.628 12:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.628 12:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.559 nvme0n1 00:35:38.559 12:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.559 12:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:38.559 12:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:38.559 12:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.559 12:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.559 12:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.559 12:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:38.559 12:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:38.559 12:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.559 12:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.559 12:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.559 12:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:38.559 12:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:35:38.559 12:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:38.559 12:04:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:38.559 12:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:38.559 12:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:38.559 12:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Nzg3YWQ4MmFjMjgyNmNmMjM5MDgxZTZkOTc1YWY1N2E2ZDNlMDJkNDFjNjdmYzEzV24L6Q==: 00:35:38.559 12:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTExMGI4ZmM4NzMwODM4ZWJmNTk5YmY2ZTE2YzM3MzE5ZDhmMDkyMjRhYzdmNTI5JgzHvg==: 00:35:38.559 12:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:38.559 12:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:38.559 12:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Nzg3YWQ4MmFjMjgyNmNmMjM5MDgxZTZkOTc1YWY1N2E2ZDNlMDJkNDFjNjdmYzEzV24L6Q==: 00:35:38.559 12:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTExMGI4ZmM4NzMwODM4ZWJmNTk5YmY2ZTE2YzM3MzE5ZDhmMDkyMjRhYzdmNTI5JgzHvg==: ]] 00:35:38.559 12:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTExMGI4ZmM4NzMwODM4ZWJmNTk5YmY2ZTE2YzM3MzE5ZDhmMDkyMjRhYzdmNTI5JgzHvg==: 00:35:38.559 12:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:35:38.559 12:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:38.559 12:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:38.559 12:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:38.559 12:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:38.559 12:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:38.559 12:04:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:38.559 12:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.559 12:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.559 12:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.559 12:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:38.559 12:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:38.559 12:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:38.559 12:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:38.559 12:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:38.559 12:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:38.559 12:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:38.559 12:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:38.559 12:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:38.559 12:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:38.559 12:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:38.559 12:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:38.559 12:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.559 12:04:04 
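The secrets exchanged above all have the shape `DHHC-1:<tt>:<base64>:`, the DH-HMAC-CHAP secret representation (version tag, a two-digit type field, a base64 payload, trailing colon). A small parser for that shape; the field semantics beyond "version : type : payload" are a reading of the strings in this log, not a specification quote:

```shell
#!/usr/bin/env bash
# Split a DHHC-1 secret into its colon-delimited fields and report the
# payload length. Rejects anything that does not start with "DHHC-1".
parse_dhchap_secret() {
    local secret=$1 version type payload _rest
    IFS=: read -r version type payload _rest <<< "$secret"
    [[ $version == DHHC-1 ]] || return 1
    echo "version=$version type=$type payload_len=${#payload}"
}
```

For the keyid=2 secret in the trace the payload is 48 base64 characters, i.e. 36 decoded bytes, consistent with a 32-byte key plus a 4-byte integrity tail.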
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.492 nvme0n1 00:35:39.492 12:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:39.492 12:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:39.492 12:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:39.492 12:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.492 12:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:39.492 12:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:39.492 12:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:39.492 12:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:39.492 12:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:39.492 12:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.750 12:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:39.750 12:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:39.750 12:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:35:39.750 12:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:39.750 12:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:39.750 12:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:39.750 12:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:39.750 12:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:NzBiY2MwN2U1MTE0MzQ3MDM0NjdlZGZiNzZiNDcyNThNwJ0P: 00:35:39.750 12:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjU0MGE3MzAwNDdkZDdiYzU3MzU0MTUzMmYzMjU3ODmZYVRj: 00:35:39.750 12:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:39.750 12:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:39.750 12:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzBiY2MwN2U1MTE0MzQ3MDM0NjdlZGZiNzZiNDcyNThNwJ0P: 00:35:39.750 12:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjU0MGE3MzAwNDdkZDdiYzU3MzU0MTUzMmYzMjU3ODmZYVRj: ]] 00:35:39.750 12:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjU0MGE3MzAwNDdkZDdiYzU3MzU0MTUzMmYzMjU3ODmZYVRj: 00:35:39.750 12:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:35:39.750 12:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:39.750 12:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:39.750 12:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:39.750 12:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:39.751 12:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:39.751 12:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:39.751 12:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:39.751 12:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.751 12:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:39.751 12:04:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:39.751 12:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:39.751 12:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:39.751 12:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:39.751 12:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:39.751 12:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:39.751 12:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:39.751 12:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:39.751 12:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:39.751 12:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:39.751 12:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:39.751 12:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:39.751 12:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:39.751 12:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.684 nvme0n1 00:35:40.684 12:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.684 12:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:40.684 12:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:40.684 12:04:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.684 12:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:40.684 12:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.684 12:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:40.684 12:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:40.684 12:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:40.684 12:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.684 12:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.684 12:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:40.684 12:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:35:40.684 12:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:40.684 12:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:40.684 12:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:40.684 12:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:40.684 12:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2U1YWI0M2MzYzBiODlmNTFlOGE0MDFhMzU2NjFkY2MzZTM4YTVjYzUwNTcxMWZkdH91EQ==: 00:35:40.684 12:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWE5OTdiY2U5MmVhYTc2M2MyOWViYWQ3ZGIxYzU4MTHuLznb: 00:35:40.684 12:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:40.684 12:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:40.684 12:04:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2U1YWI0M2MzYzBiODlmNTFlOGE0MDFhMzU2NjFkY2MzZTM4YTVjYzUwNTcxMWZkdH91EQ==: 00:35:40.684 12:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWE5OTdiY2U5MmVhYTc2M2MyOWViYWQ3ZGIxYzU4MTHuLznb: ]] 00:35:40.685 12:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWE5OTdiY2U5MmVhYTc2M2MyOWViYWQ3ZGIxYzU4MTHuLznb: 00:35:40.685 12:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:35:40.685 12:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:40.685 12:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:40.685 12:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:40.685 12:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:40.685 12:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:40.685 12:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:40.685 12:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:40.685 12:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.685 12:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.685 12:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:40.685 12:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:40.685 12:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:40.685 12:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:40.685 12:04:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:40.685 12:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:40.685 12:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:40.685 12:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:40.685 12:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:40.685 12:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:40.685 12:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:40.685 12:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:40.685 12:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:40.685 12:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.617 nvme0n1 00:35:41.617 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.617 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:41.617 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.617 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.617 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:41.617 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.617 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:41.617 12:04:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:41.617 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.617 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.617 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.617 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:41.617 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:35:41.617 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:41.617 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:41.617 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:41.617 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:41.617 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2YzMTU0ZmMyYjliNDE4YzIwYzNlOGU0M2JhZDk2NzM5NDUzODM2ODM0MjBmNWJkNWU1MTk2MDA0ZGM3MmVkMK0NDKA=: 00:35:41.617 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:41.617 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:41.617 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:41.617 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2YzMTU0ZmMyYjliNDE4YzIwYzNlOGU0M2JhZDk2NzM5NDUzODM2ODM0MjBmNWJkNWU1MTk2MDA0ZGM3MmVkMK0NDKA=: 00:35:41.617 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:41.617 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:35:41.617 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local 
digest dhgroup keyid ckey 00:35:41.617 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:41.617 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:41.617 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:41.617 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:41.617 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:41.617 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.617 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.875 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.875 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:41.875 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:41.875 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:41.875 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:41.875 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:41.875 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:41.875 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:41.875 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:41.875 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:41.875 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:41.875 12:04:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:41.875 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:41.875 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.875 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.809 nvme0n1 00:35:42.809 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:42.809 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:42.809 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:42.809 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.809 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:42.809 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:42.809 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:42.809 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:42.810 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:42.810 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.810 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:42.810 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:35:42.810 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:42.810 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:42.810 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:35:42.810 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:42.810 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:42.810 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:42.810 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:42.810 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGE2MmIzMDI4MWUyYjk3MDcyOTYwOGYwODg4Mzg3MjZJ2cL2: 00:35:42.810 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGZlZTZmMmI4YWVlNGM0ZGYzODJkMTY3NDgwMDhjODQ5ZWViM2NjZGZkNWUzMzYxM2Y4YThiM2VjNmViNzE0NX75PNQ=: 00:35:42.810 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:42.810 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:42.810 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGE2MmIzMDI4MWUyYjk3MDcyOTYwOGYwODg4Mzg3MjZJ2cL2: 00:35:42.810 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGZlZTZmMmI4YWVlNGM0ZGYzODJkMTY3NDgwMDhjODQ5ZWViM2NjZGZkNWUzMzYxM2Y4YThiM2VjNmViNzE0NX75PNQ=: ]] 00:35:42.810 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGZlZTZmMmI4YWVlNGM0ZGYzODJkMTY3NDgwMDhjODQ5ZWViM2NjZGZkNWUzMzYxM2Y4YThiM2VjNmViNzE0NX75PNQ=: 00:35:42.810 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:35:42.810 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:42.810 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:42.810 12:04:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:42.810 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:42.810 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:42.810 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:42.810 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:42.810 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.810 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:42.810 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:42.810 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:42.810 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:42.810 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:42.810 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:42.810 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:42.810 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:42.810 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:42.810 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:42.810 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:42.810 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:42.810 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:42.810 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:42.810 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.810 nvme0n1 00:35:42.810 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:42.810 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:42.810 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:42.810 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:42.810 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.810 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:42.810 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:42.810 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:42.810 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:42.810 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.810 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:42.810 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:42.810 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:35:42.810 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:42.810 12:04:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:42.810 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:42.810 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:42.810 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Nzg3YWQ4MmFjMjgyNmNmMjM5MDgxZTZkOTc1YWY1N2E2ZDNlMDJkNDFjNjdmYzEzV24L6Q==: 00:35:42.810 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTExMGI4ZmM4NzMwODM4ZWJmNTk5YmY2ZTE2YzM3MzE5ZDhmMDkyMjRhYzdmNTI5JgzHvg==: 00:35:42.810 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:42.810 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:42.810 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Nzg3YWQ4MmFjMjgyNmNmMjM5MDgxZTZkOTc1YWY1N2E2ZDNlMDJkNDFjNjdmYzEzV24L6Q==: 00:35:42.810 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTExMGI4ZmM4NzMwODM4ZWJmNTk5YmY2ZTE2YzM3MzE5ZDhmMDkyMjRhYzdmNTI5JgzHvg==: ]] 00:35:42.810 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTExMGI4ZmM4NzMwODM4ZWJmNTk5YmY2ZTE2YzM3MzE5ZDhmMDkyMjRhYzdmNTI5JgzHvg==: 00:35:42.810 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:35:42.810 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:42.810 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:42.810 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:42.810 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:42.810 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:42.810 12:04:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:42.810 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:42.810 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.810 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:42.810 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:42.810 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:42.810 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:42.810 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:42.810 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:42.810 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:42.810 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:42.810 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:42.810 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:42.810 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:42.810 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:42.810 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:42.810 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:42.810 12:04:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.069 nvme0n1 00:35:43.069 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:43.069 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:43.069 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:43.069 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:43.069 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.069 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:43.069 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:43.069 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:43.069 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:43.069 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.069 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:43.069 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:43.069 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:35:43.069 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:43.069 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:43.069 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:43.069 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:43.069 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:NzBiY2MwN2U1MTE0MzQ3MDM0NjdlZGZiNzZiNDcyNThNwJ0P: 00:35:43.069 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjU0MGE3MzAwNDdkZDdiYzU3MzU0MTUzMmYzMjU3ODmZYVRj: 00:35:43.069 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:43.069 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:43.069 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzBiY2MwN2U1MTE0MzQ3MDM0NjdlZGZiNzZiNDcyNThNwJ0P: 00:35:43.069 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjU0MGE3MzAwNDdkZDdiYzU3MzU0MTUzMmYzMjU3ODmZYVRj: ]] 00:35:43.069 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjU0MGE3MzAwNDdkZDdiYzU3MzU0MTUzMmYzMjU3ODmZYVRj: 00:35:43.069 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:35:43.069 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:43.069 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:43.069 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:43.069 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:43.069 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:43.069 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:43.069 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:43.069 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.069 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:43.069 12:04:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:43.069 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:43.069 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:43.069 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:43.069 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:43.069 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:43.069 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:43.069 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:43.069 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:43.069 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:43.069 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:43.069 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:43.069 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:43.069 12:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.327 nvme0n1 00:35:43.327 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:43.327 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:43.327 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:43.327 12:04:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:43.327 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.327 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:43.327 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:43.327 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:43.327 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:43.327 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.327 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:43.327 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:43.327 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:35:43.327 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:43.327 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:43.327 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:43.327 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:43.327 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2U1YWI0M2MzYzBiODlmNTFlOGE0MDFhMzU2NjFkY2MzZTM4YTVjYzUwNTcxMWZkdH91EQ==: 00:35:43.327 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWE5OTdiY2U5MmVhYTc2M2MyOWViYWQ3ZGIxYzU4MTHuLznb: 00:35:43.327 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:43.327 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:43.327 12:04:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2U1YWI0M2MzYzBiODlmNTFlOGE0MDFhMzU2NjFkY2MzZTM4YTVjYzUwNTcxMWZkdH91EQ==: 00:35:43.327 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWE5OTdiY2U5MmVhYTc2M2MyOWViYWQ3ZGIxYzU4MTHuLznb: ]] 00:35:43.327 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWE5OTdiY2U5MmVhYTc2M2MyOWViYWQ3ZGIxYzU4MTHuLznb: 00:35:43.327 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:35:43.327 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:43.327 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:43.327 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:43.327 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:43.327 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:43.327 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:43.327 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:43.328 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.328 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:43.328 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:43.328 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:43.328 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:43.328 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:43.328 12:04:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:43.328 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:43.328 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:43.328 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:43.328 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:43.328 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:43.328 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:43.328 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:43.328 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:43.328 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.586 nvme0n1 00:35:43.586 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:43.586 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:43.586 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:43.586 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:43.586 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.586 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:43.586 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:43.586 12:04:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:43.586 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:43.586 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.586 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:43.586 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:43.586 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:35:43.586 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:43.586 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:43.586 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:43.586 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:43.586 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2YzMTU0ZmMyYjliNDE4YzIwYzNlOGU0M2JhZDk2NzM5NDUzODM2ODM0MjBmNWJkNWU1MTk2MDA0ZGM3MmVkMK0NDKA=: 00:35:43.586 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:43.586 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:43.586 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:43.586 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2YzMTU0ZmMyYjliNDE4YzIwYzNlOGU0M2JhZDk2NzM5NDUzODM2ODM0MjBmNWJkNWU1MTk2MDA0ZGM3MmVkMK0NDKA=: 00:35:43.586 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:43.586 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:35:43.587 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local 
digest dhgroup keyid ckey 00:35:43.587 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:43.587 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:43.587 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:43.587 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:43.587 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:43.587 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:43.587 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.587 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:43.587 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:43.587 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:43.587 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:43.587 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:43.587 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:43.587 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:43.587 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:43.587 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:43.587 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:43.587 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:43.587 12:04:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:43.587 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:43.587 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:43.587 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.886 nvme0n1 00:35:43.886 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:43.886 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:43.886 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:43.886 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:43.886 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.886 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:43.886 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:43.886 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:43.886 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:43.886 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.886 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:43.886 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:43.886 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:43.886 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:35:43.886 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:43.886 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:43.886 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:43.886 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:43.887 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGE2MmIzMDI4MWUyYjk3MDcyOTYwOGYwODg4Mzg3MjZJ2cL2: 00:35:43.887 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGZlZTZmMmI4YWVlNGM0ZGYzODJkMTY3NDgwMDhjODQ5ZWViM2NjZGZkNWUzMzYxM2Y4YThiM2VjNmViNzE0NX75PNQ=: 00:35:43.887 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:43.887 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:43.887 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGE2MmIzMDI4MWUyYjk3MDcyOTYwOGYwODg4Mzg3MjZJ2cL2: 00:35:43.887 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGZlZTZmMmI4YWVlNGM0ZGYzODJkMTY3NDgwMDhjODQ5ZWViM2NjZGZkNWUzMzYxM2Y4YThiM2VjNmViNzE0NX75PNQ=: ]] 00:35:43.887 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGZlZTZmMmI4YWVlNGM0ZGYzODJkMTY3NDgwMDhjODQ5ZWViM2NjZGZkNWUzMzYxM2Y4YThiM2VjNmViNzE0NX75PNQ=: 00:35:43.887 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:35:43.887 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:43.887 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:43.887 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:43.887 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@57 -- # keyid=0 00:35:43.887 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:43.887 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:43.887 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:43.887 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.887 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:43.887 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:43.887 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:43.887 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:43.887 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:43.887 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:43.887 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:43.887 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:43.887 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:43.887 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:43.887 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:43.887 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:43.887 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:43.887 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:43.887 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.887 nvme0n1 00:35:43.887 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:43.887 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:43.887 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:43.887 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:43.887 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.173 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.174 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:44.174 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:44.174 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.174 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.174 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.174 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:44.174 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:35:44.174 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:44.174 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:44.174 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe3072 00:35:44.174 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:44.174 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Nzg3YWQ4MmFjMjgyNmNmMjM5MDgxZTZkOTc1YWY1N2E2ZDNlMDJkNDFjNjdmYzEzV24L6Q==: 00:35:44.174 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTExMGI4ZmM4NzMwODM4ZWJmNTk5YmY2ZTE2YzM3MzE5ZDhmMDkyMjRhYzdmNTI5JgzHvg==: 00:35:44.174 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:44.174 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:44.174 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Nzg3YWQ4MmFjMjgyNmNmMjM5MDgxZTZkOTc1YWY1N2E2ZDNlMDJkNDFjNjdmYzEzV24L6Q==: 00:35:44.174 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTExMGI4ZmM4NzMwODM4ZWJmNTk5YmY2ZTE2YzM3MzE5ZDhmMDkyMjRhYzdmNTI5JgzHvg==: ]] 00:35:44.174 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTExMGI4ZmM4NzMwODM4ZWJmNTk5YmY2ZTE2YzM3MzE5ZDhmMDkyMjRhYzdmNTI5JgzHvg==: 00:35:44.174 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:35:44.174 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:44.174 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:44.174 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:44.174 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:44.174 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:44.174 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:44.174 12:04:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.174 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.174 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.174 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:44.174 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:44.174 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:44.174 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:44.174 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:44.174 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:44.174 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:44.174 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:44.174 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:44.174 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:44.174 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:44.174 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:44.174 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.174 12:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.174 nvme0n1 00:35:44.174 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.174 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:44.174 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:44.174 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.174 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.174 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.174 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:44.174 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:44.174 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.174 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.432 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.432 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:44.432 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:35:44.432 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:44.432 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:44.432 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:44.432 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:44.432 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzBiY2MwN2U1MTE0MzQ3MDM0NjdlZGZiNzZiNDcyNThNwJ0P: 00:35:44.432 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:01:ZjU0MGE3MzAwNDdkZDdiYzU3MzU0MTUzMmYzMjU3ODmZYVRj: 00:35:44.432 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:44.432 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:44.432 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzBiY2MwN2U1MTE0MzQ3MDM0NjdlZGZiNzZiNDcyNThNwJ0P: 00:35:44.432 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjU0MGE3MzAwNDdkZDdiYzU3MzU0MTUzMmYzMjU3ODmZYVRj: ]] 00:35:44.432 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjU0MGE3MzAwNDdkZDdiYzU3MzU0MTUzMmYzMjU3ODmZYVRj: 00:35:44.432 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:35:44.432 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:44.432 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:44.432 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:44.432 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:44.432 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:44.432 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:44.432 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.432 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.432 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.432 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:44.432 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@769 -- # local ip 00:35:44.432 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:44.432 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:44.432 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:44.432 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:44.432 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:44.432 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:44.432 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:44.432 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:44.432 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:44.432 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:44.432 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.432 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.432 nvme0n1 00:35:44.432 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.432 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:44.432 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.432 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.432 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq 
-r '.[].name' 00:35:44.432 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.691 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:44.691 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:44.691 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.691 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.691 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.691 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:44.691 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:35:44.691 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:44.691 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:44.691 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:44.691 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:44.691 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2U1YWI0M2MzYzBiODlmNTFlOGE0MDFhMzU2NjFkY2MzZTM4YTVjYzUwNTcxMWZkdH91EQ==: 00:35:44.691 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWE5OTdiY2U5MmVhYTc2M2MyOWViYWQ3ZGIxYzU4MTHuLznb: 00:35:44.691 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:44.691 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:44.691 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2U1YWI0M2MzYzBiODlmNTFlOGE0MDFhMzU2NjFkY2MzZTM4YTVjYzUwNTcxMWZkdH91EQ==: 00:35:44.691 12:04:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWE5OTdiY2U5MmVhYTc2M2MyOWViYWQ3ZGIxYzU4MTHuLznb: ]] 00:35:44.691 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWE5OTdiY2U5MmVhYTc2M2MyOWViYWQ3ZGIxYzU4MTHuLznb: 00:35:44.691 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:35:44.691 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:44.691 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:44.691 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:44.691 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:44.691 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:44.691 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:44.691 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.691 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.691 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.691 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:44.691 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:44.691 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:44.691 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:44.691 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:44.691 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 
-- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:44.691 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:44.691 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:44.691 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:44.691 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:44.691 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:44.691 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:44.691 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.691 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.691 nvme0n1 00:35:44.691 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.691 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:44.691 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:44.691 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.691 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.692 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.950 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:44.950 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:44.950 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:35:44.950 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.950 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.950 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:44.950 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:35:44.950 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:44.950 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:44.950 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:44.950 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:44.950 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2YzMTU0ZmMyYjliNDE4YzIwYzNlOGU0M2JhZDk2NzM5NDUzODM2ODM0MjBmNWJkNWU1MTk2MDA0ZGM3MmVkMK0NDKA=: 00:35:44.950 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:44.950 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:44.950 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:44.951 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2YzMTU0ZmMyYjliNDE4YzIwYzNlOGU0M2JhZDk2NzM5NDUzODM2ODM0MjBmNWJkNWU1MTk2MDA0ZGM3MmVkMK0NDKA=: 00:35:44.951 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:44.951 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:35:44.951 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:44.951 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:44.951 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe3072 00:35:44.951 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:44.951 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:44.951 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:44.951 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.951 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.951 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.951 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:44.951 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:44.951 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:44.951 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:44.951 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:44.951 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:44.951 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:44.951 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:44.951 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:44.951 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:44.951 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:44.951 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:44.951 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.951 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.951 nvme0n1 00:35:44.951 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.951 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:44.951 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.951 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:44.951 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.951 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.951 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:44.951 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:44.951 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.951 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.211 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.211 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:45.211 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:45.211 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:35:45.211 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:45.211 12:04:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:45.211 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:45.211 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:45.211 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGE2MmIzMDI4MWUyYjk3MDcyOTYwOGYwODg4Mzg3MjZJ2cL2: 00:35:45.211 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGZlZTZmMmI4YWVlNGM0ZGYzODJkMTY3NDgwMDhjODQ5ZWViM2NjZGZkNWUzMzYxM2Y4YThiM2VjNmViNzE0NX75PNQ=: 00:35:45.211 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:45.211 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:45.211 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGE2MmIzMDI4MWUyYjk3MDcyOTYwOGYwODg4Mzg3MjZJ2cL2: 00:35:45.211 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGZlZTZmMmI4YWVlNGM0ZGYzODJkMTY3NDgwMDhjODQ5ZWViM2NjZGZkNWUzMzYxM2Y4YThiM2VjNmViNzE0NX75PNQ=: ]] 00:35:45.211 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGZlZTZmMmI4YWVlNGM0ZGYzODJkMTY3NDgwMDhjODQ5ZWViM2NjZGZkNWUzMzYxM2Y4YThiM2VjNmViNzE0NX75PNQ=: 00:35:45.211 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:35:45.211 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:45.211 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:45.211 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:45.211 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:45.211 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:45.211 12:04:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:45.211 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.211 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.211 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.211 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:45.211 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:45.211 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:45.211 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:45.211 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:45.211 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:45.211 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:45.211 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:45.211 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:45.211 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:45.211 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:45.211 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:45.211 12:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.211 12:04:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.472 nvme0n1 00:35:45.472 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.472 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:45.472 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.472 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:45.472 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.472 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.472 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:45.472 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:45.472 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.472 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.472 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.472 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:45.472 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:35:45.472 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:45.472 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:45.472 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:45.472 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:45.472 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:Nzg3YWQ4MmFjMjgyNmNmMjM5MDgxZTZkOTc1YWY1N2E2ZDNlMDJkNDFjNjdmYzEzV24L6Q==: 00:35:45.472 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTExMGI4ZmM4NzMwODM4ZWJmNTk5YmY2ZTE2YzM3MzE5ZDhmMDkyMjRhYzdmNTI5JgzHvg==: 00:35:45.472 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:45.472 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:45.472 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Nzg3YWQ4MmFjMjgyNmNmMjM5MDgxZTZkOTc1YWY1N2E2ZDNlMDJkNDFjNjdmYzEzV24L6Q==: 00:35:45.472 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTExMGI4ZmM4NzMwODM4ZWJmNTk5YmY2ZTE2YzM3MzE5ZDhmMDkyMjRhYzdmNTI5JgzHvg==: ]] 00:35:45.472 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTExMGI4ZmM4NzMwODM4ZWJmNTk5YmY2ZTE2YzM3MzE5ZDhmMDkyMjRhYzdmNTI5JgzHvg==: 00:35:45.472 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:35:45.472 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:45.472 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:45.472 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:45.472 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:45.472 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:45.472 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:45.472 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.472 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.472 
12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.472 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:45.472 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:45.472 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:45.472 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:45.472 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:45.472 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:45.472 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:45.472 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:45.472 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:45.472 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:45.472 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:45.472 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:45.472 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.472 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.731 nvme0n1 00:35:45.731 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.731 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:45.731 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:45.731 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.731 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.731 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.731 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:45.731 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:45.731 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.731 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.731 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.731 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:45.731 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:35:45.731 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:45.731 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:45.731 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:45.731 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:45.731 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzBiY2MwN2U1MTE0MzQ3MDM0NjdlZGZiNzZiNDcyNThNwJ0P: 00:35:45.731 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjU0MGE3MzAwNDdkZDdiYzU3MzU0MTUzMmYzMjU3ODmZYVRj: 00:35:45.731 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:45.731 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@49 -- # echo ffdhe4096 00:35:45.731 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzBiY2MwN2U1MTE0MzQ3MDM0NjdlZGZiNzZiNDcyNThNwJ0P: 00:35:45.731 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjU0MGE3MzAwNDdkZDdiYzU3MzU0MTUzMmYzMjU3ODmZYVRj: ]] 00:35:45.731 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjU0MGE3MzAwNDdkZDdiYzU3MzU0MTUzMmYzMjU3ODmZYVRj: 00:35:45.731 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:35:45.731 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:45.731 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:45.731 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:45.731 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:45.731 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:45.731 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:45.731 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.731 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.731 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.731 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:45.731 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:45.731 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:45.731 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 
00:35:45.731 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:45.731 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:45.731 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:45.731 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:45.731 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:45.731 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:45.732 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:45.732 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:45.732 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.732 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.991 nvme0n1 00:35:45.991 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.991 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:45.991 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:45.991 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.991 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.991 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.991 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 
00:35:45.991 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:45.991 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.991 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.991 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.991 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:45.991 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:35:45.991 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:45.991 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:45.991 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:45.991 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:45.991 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2U1YWI0M2MzYzBiODlmNTFlOGE0MDFhMzU2NjFkY2MzZTM4YTVjYzUwNTcxMWZkdH91EQ==: 00:35:45.991 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWE5OTdiY2U5MmVhYTc2M2MyOWViYWQ3ZGIxYzU4MTHuLznb: 00:35:45.991 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:45.991 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:45.992 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2U1YWI0M2MzYzBiODlmNTFlOGE0MDFhMzU2NjFkY2MzZTM4YTVjYzUwNTcxMWZkdH91EQ==: 00:35:45.992 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWE5OTdiY2U5MmVhYTc2M2MyOWViYWQ3ZGIxYzU4MTHuLznb: ]] 00:35:45.992 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:00:NWE5OTdiY2U5MmVhYTc2M2MyOWViYWQ3ZGIxYzU4MTHuLznb: 00:35:45.992 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:35:45.992 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:45.992 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:45.992 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:45.992 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:45.992 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:45.992 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:45.992 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.992 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.252 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:46.252 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:46.252 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:46.252 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:46.252 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:46.252 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:46.252 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:46.252 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:46.252 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:46.252 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:46.252 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:46.252 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:46.252 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:46.252 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.252 12:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.512 nvme0n1 00:35:46.512 12:04:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:46.512 12:04:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:46.512 12:04:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.512 12:04:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.512 12:04:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:46.512 12:04:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:46.512 12:04:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:46.512 12:04:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:46.512 12:04:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.512 12:04:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.512 12:04:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:46.512 12:04:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:46.512 12:04:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:35:46.512 12:04:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:46.512 12:04:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:46.512 12:04:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:46.512 12:04:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:46.512 12:04:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2YzMTU0ZmMyYjliNDE4YzIwYzNlOGU0M2JhZDk2NzM5NDUzODM2ODM0MjBmNWJkNWU1MTk2MDA0ZGM3MmVkMK0NDKA=: 00:35:46.512 12:04:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:46.512 12:04:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:46.512 12:04:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:46.512 12:04:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2YzMTU0ZmMyYjliNDE4YzIwYzNlOGU0M2JhZDk2NzM5NDUzODM2ODM0MjBmNWJkNWU1MTk2MDA0ZGM3MmVkMK0NDKA=: 00:35:46.512 12:04:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:46.512 12:04:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:35:46.512 12:04:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:46.512 12:04:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:46.512 12:04:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:46.512 12:04:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:46.512 12:04:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:46.512 12:04:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:46.512 12:04:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.512 12:04:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.512 12:04:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:46.512 12:04:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:46.512 12:04:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:46.512 12:04:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:46.512 12:04:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:46.512 12:04:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:46.512 12:04:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:46.512 12:04:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:46.512 12:04:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:46.512 12:04:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:46.512 12:04:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:46.512 12:04:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:46.512 12:04:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:46.512 12:04:12 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.512 12:04:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.771 nvme0n1 00:35:46.771 12:04:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:46.771 12:04:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:46.771 12:04:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.771 12:04:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.771 12:04:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:46.771 12:04:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:46.771 12:04:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:46.771 12:04:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:46.771 12:04:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.771 12:04:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.771 12:04:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:46.771 12:04:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:46.771 12:04:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:46.771 12:04:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:35:46.771 12:04:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:46.771 12:04:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:46.771 12:04:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 
00:35:46.771 12:04:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:46.771 12:04:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGE2MmIzMDI4MWUyYjk3MDcyOTYwOGYwODg4Mzg3MjZJ2cL2: 00:35:46.771 12:04:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGZlZTZmMmI4YWVlNGM0ZGYzODJkMTY3NDgwMDhjODQ5ZWViM2NjZGZkNWUzMzYxM2Y4YThiM2VjNmViNzE0NX75PNQ=: 00:35:46.771 12:04:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:46.771 12:04:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:46.771 12:04:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGE2MmIzMDI4MWUyYjk3MDcyOTYwOGYwODg4Mzg3MjZJ2cL2: 00:35:46.771 12:04:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGZlZTZmMmI4YWVlNGM0ZGYzODJkMTY3NDgwMDhjODQ5ZWViM2NjZGZkNWUzMzYxM2Y4YThiM2VjNmViNzE0NX75PNQ=: ]] 00:35:46.771 12:04:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGZlZTZmMmI4YWVlNGM0ZGYzODJkMTY3NDgwMDhjODQ5ZWViM2NjZGZkNWUzMzYxM2Y4YThiM2VjNmViNzE0NX75PNQ=: 00:35:46.771 12:04:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:35:46.771 12:04:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:46.771 12:04:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:46.771 12:04:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:46.771 12:04:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:46.771 12:04:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:46.771 12:04:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:46.771 12:04:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.771 12:04:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.771 12:04:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:46.771 12:04:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:46.771 12:04:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:46.771 12:04:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:46.771 12:04:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:46.771 12:04:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:46.771 12:04:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:46.771 12:04:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:46.771 12:04:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:46.771 12:04:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:46.771 12:04:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:46.771 12:04:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:46.771 12:04:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:46.771 12:04:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.771 12:04:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.337 nvme0n1 00:35:47.337 12:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.337 12:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:47.337 12:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:47.337 12:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.337 12:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.337 12:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.337 12:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:47.337 12:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:47.337 12:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.337 12:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.337 12:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.337 12:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:47.337 12:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:35:47.337 12:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:47.337 12:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:47.337 12:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:47.337 12:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:47.337 12:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Nzg3YWQ4MmFjMjgyNmNmMjM5MDgxZTZkOTc1YWY1N2E2ZDNlMDJkNDFjNjdmYzEzV24L6Q==: 00:35:47.337 12:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:MTExMGI4ZmM4NzMwODM4ZWJmNTk5YmY2ZTE2YzM3MzE5ZDhmMDkyMjRhYzdmNTI5JgzHvg==: 00:35:47.337 12:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:47.337 12:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:47.337 12:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Nzg3YWQ4MmFjMjgyNmNmMjM5MDgxZTZkOTc1YWY1N2E2ZDNlMDJkNDFjNjdmYzEzV24L6Q==: 00:35:47.337 12:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTExMGI4ZmM4NzMwODM4ZWJmNTk5YmY2ZTE2YzM3MzE5ZDhmMDkyMjRhYzdmNTI5JgzHvg==: ]] 00:35:47.337 12:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTExMGI4ZmM4NzMwODM4ZWJmNTk5YmY2ZTE2YzM3MzE5ZDhmMDkyMjRhYzdmNTI5JgzHvg==: 00:35:47.337 12:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:35:47.337 12:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:47.337 12:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:47.337 12:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:47.337 12:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:47.338 12:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:47.338 12:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:47.338 12:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.338 12:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.595 12:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.595 12:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:35:47.595 12:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:47.595 12:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:47.595 12:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:47.595 12:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:47.595 12:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:47.595 12:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:47.595 12:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:47.595 12:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:47.595 12:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:47.595 12:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:47.595 12:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:47.595 12:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.595 12:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.162 nvme0n1 00:35:48.162 12:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.162 12:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:48.162 12:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.162 12:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 
00:35:48.162 12:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.162 12:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.162 12:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:48.162 12:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:48.162 12:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.162 12:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.162 12:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.162 12:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:48.162 12:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:35:48.162 12:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:48.162 12:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:48.162 12:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:48.162 12:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:48.162 12:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzBiY2MwN2U1MTE0MzQ3MDM0NjdlZGZiNzZiNDcyNThNwJ0P: 00:35:48.162 12:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjU0MGE3MzAwNDdkZDdiYzU3MzU0MTUzMmYzMjU3ODmZYVRj: 00:35:48.162 12:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:48.162 12:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:48.162 12:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:NzBiY2MwN2U1MTE0MzQ3MDM0NjdlZGZiNzZiNDcyNThNwJ0P: 00:35:48.162 12:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjU0MGE3MzAwNDdkZDdiYzU3MzU0MTUzMmYzMjU3ODmZYVRj: ]] 00:35:48.162 12:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjU0MGE3MzAwNDdkZDdiYzU3MzU0MTUzMmYzMjU3ODmZYVRj: 00:35:48.162 12:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:35:48.162 12:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:48.162 12:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:48.162 12:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:48.162 12:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:48.162 12:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:48.162 12:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:48.162 12:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.162 12:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.162 12:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.162 12:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:48.162 12:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:48.162 12:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:48.162 12:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:48.162 12:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:48.162 12:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:48.162 12:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:48.162 12:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:48.162 12:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:48.162 12:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:48.162 12:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:48.162 12:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:48.162 12:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.162 12:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.732 nvme0n1 00:35:48.732 12:04:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.732 12:04:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:48.732 12:04:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.732 12:04:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.732 12:04:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:48.732 12:04:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.732 12:04:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:48.733 12:04:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:35:48.733 12:04:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.733 12:04:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.733 12:04:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.733 12:04:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:48.733 12:04:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:35:48.733 12:04:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:48.733 12:04:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:48.733 12:04:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:48.733 12:04:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:48.733 12:04:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2U1YWI0M2MzYzBiODlmNTFlOGE0MDFhMzU2NjFkY2MzZTM4YTVjYzUwNTcxMWZkdH91EQ==: 00:35:48.733 12:04:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWE5OTdiY2U5MmVhYTc2M2MyOWViYWQ3ZGIxYzU4MTHuLznb: 00:35:48.733 12:04:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:48.733 12:04:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:48.733 12:04:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2U1YWI0M2MzYzBiODlmNTFlOGE0MDFhMzU2NjFkY2MzZTM4YTVjYzUwNTcxMWZkdH91EQ==: 00:35:48.733 12:04:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWE5OTdiY2U5MmVhYTc2M2MyOWViYWQ3ZGIxYzU4MTHuLznb: ]] 00:35:48.733 12:04:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWE5OTdiY2U5MmVhYTc2M2MyOWViYWQ3ZGIxYzU4MTHuLznb: 00:35:48.733 12:04:14 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:35:48.733 12:04:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:48.733 12:04:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:48.733 12:04:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:48.733 12:04:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:48.733 12:04:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:48.733 12:04:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:48.733 12:04:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.733 12:04:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.733 12:04:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.733 12:04:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:48.733 12:04:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:48.733 12:04:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:48.733 12:04:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:48.733 12:04:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:48.733 12:04:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:48.733 12:04:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:48.733 12:04:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:48.733 12:04:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:35:48.733 12:04:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:48.733 12:04:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:48.733 12:04:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:48.733 12:04:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.733 12:04:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.301 nvme0n1 00:35:49.301 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.301 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:49.301 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.301 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:49.301 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.301 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.301 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:49.301 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:49.301 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.301 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.301 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.301 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:35:49.301 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:35:49.301 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:49.301 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:49.301 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:49.301 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:49.301 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2YzMTU0ZmMyYjliNDE4YzIwYzNlOGU0M2JhZDk2NzM5NDUzODM2ODM0MjBmNWJkNWU1MTk2MDA0ZGM3MmVkMK0NDKA=: 00:35:49.301 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:49.301 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:49.301 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:49.301 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2YzMTU0ZmMyYjliNDE4YzIwYzNlOGU0M2JhZDk2NzM5NDUzODM2ODM0MjBmNWJkNWU1MTk2MDA0ZGM3MmVkMK0NDKA=: 00:35:49.301 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:49.301 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:35:49.301 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:49.301 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:49.301 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:49.301 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:49.301 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:49.301 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:49.301 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.301 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.301 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.301 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:49.301 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:49.301 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:49.301 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:49.301 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:49.301 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:49.301 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:49.301 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:49.301 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:49.301 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:49.301 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:49.301 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:49.301 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.301 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:35:49.866 nvme0n1 00:35:49.866 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.866 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:49.866 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.866 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:49.866 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.866 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.866 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:49.866 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:49.867 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.867 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.867 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.867 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:49.867 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:49.867 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:35:49.867 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:49.867 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:49.867 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:49.867 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:49.867 12:04:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGE2MmIzMDI4MWUyYjk3MDcyOTYwOGYwODg4Mzg3MjZJ2cL2: 00:35:49.867 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGZlZTZmMmI4YWVlNGM0ZGYzODJkMTY3NDgwMDhjODQ5ZWViM2NjZGZkNWUzMzYxM2Y4YThiM2VjNmViNzE0NX75PNQ=: 00:35:49.867 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:49.867 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:49.867 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGE2MmIzMDI4MWUyYjk3MDcyOTYwOGYwODg4Mzg3MjZJ2cL2: 00:35:49.867 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGZlZTZmMmI4YWVlNGM0ZGYzODJkMTY3NDgwMDhjODQ5ZWViM2NjZGZkNWUzMzYxM2Y4YThiM2VjNmViNzE0NX75PNQ=: ]] 00:35:49.867 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGZlZTZmMmI4YWVlNGM0ZGYzODJkMTY3NDgwMDhjODQ5ZWViM2NjZGZkNWUzMzYxM2Y4YThiM2VjNmViNzE0NX75PNQ=: 00:35:49.867 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:35:49.867 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:49.867 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:49.867 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:49.867 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:49.867 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:49.867 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:49.867 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.867 12:04:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.867 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.867 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:49.867 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:49.867 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:49.867 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:49.867 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:49.867 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:49.867 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:49.867 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:49.867 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:49.867 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:49.867 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:49.867 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:49.867 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.867 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.244 nvme0n1 00:35:51.244 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:51.244 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:51.244 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:51.244 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.244 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:51.244 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:51.244 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:51.244 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:51.244 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:51.244 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.244 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:51.244 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:51.244 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:35:51.244 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:51.244 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:51.244 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:51.244 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:51.244 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Nzg3YWQ4MmFjMjgyNmNmMjM5MDgxZTZkOTc1YWY1N2E2ZDNlMDJkNDFjNjdmYzEzV24L6Q==: 00:35:51.244 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTExMGI4ZmM4NzMwODM4ZWJmNTk5YmY2ZTE2YzM3MzE5ZDhmMDkyMjRhYzdmNTI5JgzHvg==: 00:35:51.244 12:04:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:51.244 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:51.244 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Nzg3YWQ4MmFjMjgyNmNmMjM5MDgxZTZkOTc1YWY1N2E2ZDNlMDJkNDFjNjdmYzEzV24L6Q==: 00:35:51.245 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTExMGI4ZmM4NzMwODM4ZWJmNTk5YmY2ZTE2YzM3MzE5ZDhmMDkyMjRhYzdmNTI5JgzHvg==: ]] 00:35:51.245 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTExMGI4ZmM4NzMwODM4ZWJmNTk5YmY2ZTE2YzM3MzE5ZDhmMDkyMjRhYzdmNTI5JgzHvg==: 00:35:51.245 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:35:51.245 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:51.245 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:51.245 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:51.245 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:51.245 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:51.245 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:51.245 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:51.245 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.245 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:51.245 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:51.245 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 
00:35:51.245 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:51.245 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:51.245 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:51.245 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:51.245 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:51.245 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:51.245 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:51.245 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:51.245 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:51.245 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:51.245 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:51.245 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.178 nvme0n1 00:35:52.178 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.178 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:52.178 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:52.178 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.178 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.178 
12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.178 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:52.178 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:52.178 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.178 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.178 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.178 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:52.178 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:35:52.178 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:52.178 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:52.178 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:52.178 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:52.178 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzBiY2MwN2U1MTE0MzQ3MDM0NjdlZGZiNzZiNDcyNThNwJ0P: 00:35:52.178 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjU0MGE3MzAwNDdkZDdiYzU3MzU0MTUzMmYzMjU3ODmZYVRj: 00:35:52.178 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:52.178 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:52.178 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzBiY2MwN2U1MTE0MzQ3MDM0NjdlZGZiNzZiNDcyNThNwJ0P: 00:35:52.178 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:01:ZjU0MGE3MzAwNDdkZDdiYzU3MzU0MTUzMmYzMjU3ODmZYVRj: ]] 00:35:52.178 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjU0MGE3MzAwNDdkZDdiYzU3MzU0MTUzMmYzMjU3ODmZYVRj: 00:35:52.178 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:35:52.178 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:52.178 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:52.178 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:52.178 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:52.178 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:52.178 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:52.178 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.178 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.178 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.178 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:52.178 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:52.178 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:52.178 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:52.178 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:52.178 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:52.178 12:04:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:52.178 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:52.178 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:52.178 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:52.178 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:52.178 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:52.178 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.178 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.118 nvme0n1 00:35:53.118 12:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.118 12:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:53.118 12:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.118 12:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.118 12:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:53.118 12:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.118 12:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:53.118 12:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:53.118 12:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.118 12:04:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.118 12:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.118 12:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:53.118 12:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:35:53.118 12:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:53.118 12:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:53.118 12:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:53.118 12:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:53.118 12:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2U1YWI0M2MzYzBiODlmNTFlOGE0MDFhMzU2NjFkY2MzZTM4YTVjYzUwNTcxMWZkdH91EQ==: 00:35:53.118 12:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWE5OTdiY2U5MmVhYTc2M2MyOWViYWQ3ZGIxYzU4MTHuLznb: 00:35:53.118 12:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:53.118 12:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:53.118 12:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2U1YWI0M2MzYzBiODlmNTFlOGE0MDFhMzU2NjFkY2MzZTM4YTVjYzUwNTcxMWZkdH91EQ==: 00:35:53.118 12:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWE5OTdiY2U5MmVhYTc2M2MyOWViYWQ3ZGIxYzU4MTHuLznb: ]] 00:35:53.118 12:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWE5OTdiY2U5MmVhYTc2M2MyOWViYWQ3ZGIxYzU4MTHuLznb: 00:35:53.118 12:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:35:53.118 12:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest 
dhgroup keyid ckey 00:35:53.118 12:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:53.118 12:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:53.118 12:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:53.118 12:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:53.118 12:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:53.118 12:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.118 12:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.118 12:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.118 12:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:53.118 12:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:53.118 12:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:53.118 12:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:53.118 12:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:53.118 12:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:53.118 12:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:53.118 12:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:53.118 12:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:53.118 12:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:53.118 12:04:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:53.118 12:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:53.118 12:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.118 12:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.054 nvme0n1 00:35:54.054 12:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.054 12:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:54.054 12:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:54.054 12:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.054 12:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.054 12:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.054 12:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:54.054 12:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:54.054 12:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.054 12:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.054 12:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.054 12:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:54.054 12:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:35:54.054 12:04:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:54.054 12:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:54.054 12:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:54.054 12:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:54.054 12:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2YzMTU0ZmMyYjliNDE4YzIwYzNlOGU0M2JhZDk2NzM5NDUzODM2ODM0MjBmNWJkNWU1MTk2MDA0ZGM3MmVkMK0NDKA=: 00:35:54.054 12:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:54.054 12:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:54.054 12:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:54.054 12:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2YzMTU0ZmMyYjliNDE4YzIwYzNlOGU0M2JhZDk2NzM5NDUzODM2ODM0MjBmNWJkNWU1MTk2MDA0ZGM3MmVkMK0NDKA=: 00:35:54.054 12:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:54.054 12:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:35:54.054 12:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:54.054 12:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:54.055 12:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:54.055 12:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:54.055 12:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:54.055 12:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:54.055 12:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.055 12:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.055 12:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.055 12:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:54.055 12:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:54.055 12:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:54.055 12:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:54.055 12:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:54.055 12:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:54.055 12:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:54.055 12:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:54.055 12:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:54.055 12:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:54.055 12:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:54.055 12:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:54.055 12:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.055 12:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.994 nvme0n1 00:35:54.995 12:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.995 
12:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:54.995 12:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.995 12:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.995 12:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:54.995 12:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.253 12:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:55.253 12:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:55.253 12:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.253 12:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.253 12:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.253 12:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:35:55.253 12:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:55.253 12:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:55.253 12:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:35:55.253 12:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:55.253 12:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:55.253 12:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:55.253 12:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:55.253 12:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MGE2MmIzMDI4MWUyYjk3MDcyOTYwOGYwODg4Mzg3MjZJ2cL2: 00:35:55.253 12:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGZlZTZmMmI4YWVlNGM0ZGYzODJkMTY3NDgwMDhjODQ5ZWViM2NjZGZkNWUzMzYxM2Y4YThiM2VjNmViNzE0NX75PNQ=: 00:35:55.253 12:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:55.253 12:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:55.253 12:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGE2MmIzMDI4MWUyYjk3MDcyOTYwOGYwODg4Mzg3MjZJ2cL2: 00:35:55.253 12:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGZlZTZmMmI4YWVlNGM0ZGYzODJkMTY3NDgwMDhjODQ5ZWViM2NjZGZkNWUzMzYxM2Y4YThiM2VjNmViNzE0NX75PNQ=: ]] 00:35:55.253 12:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGZlZTZmMmI4YWVlNGM0ZGYzODJkMTY3NDgwMDhjODQ5ZWViM2NjZGZkNWUzMzYxM2Y4YThiM2VjNmViNzE0NX75PNQ=: 00:35:55.253 12:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:35:55.253 12:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:55.253 12:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:55.253 12:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:55.253 12:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:55.253 12:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:55.253 12:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:55.253 12:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.253 12:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:35:55.253 12:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.253 12:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:55.253 12:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:55.253 12:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:55.253 12:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:55.253 12:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:55.253 12:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:55.253 12:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:55.253 12:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:55.253 12:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:55.253 12:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:55.253 12:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:55.253 12:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:55.253 12:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.253 12:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.253 nvme0n1 00:35:55.253 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.253 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:55.253 12:04:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.253 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.253 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:55.253 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.253 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:55.253 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:55.253 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.253 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.513 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.513 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:55.513 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:35:55.513 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:55.513 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:55.513 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:55.513 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:55.513 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Nzg3YWQ4MmFjMjgyNmNmMjM5MDgxZTZkOTc1YWY1N2E2ZDNlMDJkNDFjNjdmYzEzV24L6Q==: 00:35:55.513 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTExMGI4ZmM4NzMwODM4ZWJmNTk5YmY2ZTE2YzM3MzE5ZDhmMDkyMjRhYzdmNTI5JgzHvg==: 00:35:55.513 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 
00:35:55.513 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:55.513 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Nzg3YWQ4MmFjMjgyNmNmMjM5MDgxZTZkOTc1YWY1N2E2ZDNlMDJkNDFjNjdmYzEzV24L6Q==: 00:35:55.513 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTExMGI4ZmM4NzMwODM4ZWJmNTk5YmY2ZTE2YzM3MzE5ZDhmMDkyMjRhYzdmNTI5JgzHvg==: ]] 00:35:55.513 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTExMGI4ZmM4NzMwODM4ZWJmNTk5YmY2ZTE2YzM3MzE5ZDhmMDkyMjRhYzdmNTI5JgzHvg==: 00:35:55.513 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:35:55.513 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:55.513 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:55.513 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:55.513 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:55.513 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:55.513 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:55.513 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.513 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.513 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.513 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:55.513 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:55.513 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 
-- # ip_candidates=() 00:35:55.513 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:55.513 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:55.513 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:55.513 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:55.513 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:55.513 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:55.513 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:55.513 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:55.513 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:55.513 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.513 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.513 nvme0n1 00:35:55.513 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.513 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:55.513 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.513 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.513 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:55.513 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:35:55.513 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:55.513 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:55.513 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.513 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.513 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.513 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:55.513 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:35:55.513 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:55.513 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:55.513 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:55.513 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:55.513 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzBiY2MwN2U1MTE0MzQ3MDM0NjdlZGZiNzZiNDcyNThNwJ0P: 00:35:55.513 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjU0MGE3MzAwNDdkZDdiYzU3MzU0MTUzMmYzMjU3ODmZYVRj: 00:35:55.513 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:55.513 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:55.513 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzBiY2MwN2U1MTE0MzQ3MDM0NjdlZGZiNzZiNDcyNThNwJ0P: 00:35:55.513 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjU0MGE3MzAwNDdkZDdiYzU3MzU0MTUzMmYzMjU3ODmZYVRj: ]] 00:35:55.513 12:04:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjU0MGE3MzAwNDdkZDdiYzU3MzU0MTUzMmYzMjU3ODmZYVRj: 00:35:55.513 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:35:55.514 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:55.514 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:55.514 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:55.514 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:55.514 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:55.514 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:55.514 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.514 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.514 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.514 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:55.514 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:55.514 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:55.514 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:55.514 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:55.514 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:55.514 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
00:35:55.514 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:55.774 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:55.774 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:55.774 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:55.774 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:55.774 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.774 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.774 nvme0n1 00:35:55.774 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.774 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:55.774 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.774 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.774 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:55.774 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.774 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:55.774 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:55.774 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.774 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.774 12:04:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.774 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:55.774 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:35:55.774 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:55.774 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:55.774 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:55.774 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:55.774 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2U1YWI0M2MzYzBiODlmNTFlOGE0MDFhMzU2NjFkY2MzZTM4YTVjYzUwNTcxMWZkdH91EQ==: 00:35:55.774 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWE5OTdiY2U5MmVhYTc2M2MyOWViYWQ3ZGIxYzU4MTHuLznb: 00:35:55.774 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:55.774 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:55.774 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2U1YWI0M2MzYzBiODlmNTFlOGE0MDFhMzU2NjFkY2MzZTM4YTVjYzUwNTcxMWZkdH91EQ==: 00:35:55.775 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWE5OTdiY2U5MmVhYTc2M2MyOWViYWQ3ZGIxYzU4MTHuLznb: ]] 00:35:55.775 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWE5OTdiY2U5MmVhYTc2M2MyOWViYWQ3ZGIxYzU4MTHuLznb: 00:35:55.775 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:35:55.775 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:55.775 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
digest=sha512 00:35:55.775 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:55.775 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:55.775 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:55.775 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:55.775 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.775 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.775 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.775 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:55.775 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:55.775 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:55.775 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:55.775 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:55.775 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:55.775 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:55.775 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:55.775 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:55.775 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:55.775 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:55.775 12:04:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:55.775 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.775 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.034 nvme0n1 00:35:56.034 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.034 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:56.034 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:56.034 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.034 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.034 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.034 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:56.034 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:56.034 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.034 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.034 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.034 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:56.034 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:35:56.034 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 
00:35:56.034 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:56.034 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:56.035 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:56.035 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2YzMTU0ZmMyYjliNDE4YzIwYzNlOGU0M2JhZDk2NzM5NDUzODM2ODM0MjBmNWJkNWU1MTk2MDA0ZGM3MmVkMK0NDKA=: 00:35:56.035 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:56.035 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:56.035 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:56.035 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2YzMTU0ZmMyYjliNDE4YzIwYzNlOGU0M2JhZDk2NzM5NDUzODM2ODM0MjBmNWJkNWU1MTk2MDA0ZGM3MmVkMK0NDKA=: 00:35:56.035 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:56.035 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:35:56.035 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:56.035 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:56.035 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:56.035 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:56.035 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:56.035 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:56.035 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.035 12:04:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.035 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.035 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:56.035 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:56.035 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:56.035 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:56.035 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:56.035 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:56.035 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:56.035 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:56.035 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:56.035 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:56.035 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:56.035 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:56.035 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.035 12:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.295 nvme0n1 00:35:56.295 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.295 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:35:56.295 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:56.295 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.295 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.295 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.295 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:56.295 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:56.295 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.295 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.295 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.295 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:56.295 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:56.295 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:35:56.295 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:56.295 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:56.295 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:56.295 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:56.295 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGE2MmIzMDI4MWUyYjk3MDcyOTYwOGYwODg4Mzg3MjZJ2cL2: 00:35:56.295 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZGZlZTZmMmI4YWVlNGM0ZGYzODJkMTY3NDgwMDhjODQ5ZWViM2NjZGZkNWUzMzYxM2Y4YThiM2VjNmViNzE0NX75PNQ=: 00:35:56.295 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:56.295 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:56.295 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGE2MmIzMDI4MWUyYjk3MDcyOTYwOGYwODg4Mzg3MjZJ2cL2: 00:35:56.295 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGZlZTZmMmI4YWVlNGM0ZGYzODJkMTY3NDgwMDhjODQ5ZWViM2NjZGZkNWUzMzYxM2Y4YThiM2VjNmViNzE0NX75PNQ=: ]] 00:35:56.295 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGZlZTZmMmI4YWVlNGM0ZGYzODJkMTY3NDgwMDhjODQ5ZWViM2NjZGZkNWUzMzYxM2Y4YThiM2VjNmViNzE0NX75PNQ=: 00:35:56.295 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:35:56.295 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:56.295 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:56.295 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:56.295 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:56.295 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:56.295 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:56.296 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.296 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.296 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.296 12:04:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:56.296 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:56.296 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:56.296 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:56.296 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:56.296 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:56.296 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:56.296 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:56.296 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:56.296 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:56.296 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:56.296 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:56.296 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.296 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.556 nvme0n1 00:35:56.556 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.556 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:56.556 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.556 12:04:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:56.556 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.556 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.556 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:56.556 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:56.556 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.556 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.556 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.556 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:56.556 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:35:56.557 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:56.557 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:56.557 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:56.557 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:56.557 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Nzg3YWQ4MmFjMjgyNmNmMjM5MDgxZTZkOTc1YWY1N2E2ZDNlMDJkNDFjNjdmYzEzV24L6Q==: 00:35:56.557 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTExMGI4ZmM4NzMwODM4ZWJmNTk5YmY2ZTE2YzM3MzE5ZDhmMDkyMjRhYzdmNTI5JgzHvg==: 00:35:56.557 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:56.557 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:56.557 
12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Nzg3YWQ4MmFjMjgyNmNmMjM5MDgxZTZkOTc1YWY1N2E2ZDNlMDJkNDFjNjdmYzEzV24L6Q==: 00:35:56.557 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTExMGI4ZmM4NzMwODM4ZWJmNTk5YmY2ZTE2YzM3MzE5ZDhmMDkyMjRhYzdmNTI5JgzHvg==: ]] 00:35:56.557 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTExMGI4ZmM4NzMwODM4ZWJmNTk5YmY2ZTE2YzM3MzE5ZDhmMDkyMjRhYzdmNTI5JgzHvg==: 00:35:56.557 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:35:56.557 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:56.557 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:56.557 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:56.557 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:56.557 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:56.557 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:56.557 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.557 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.557 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.557 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:56.557 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:56.557 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:56.557 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
local -A ip_candidates 00:35:56.557 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:56.557 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:56.557 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:56.557 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:56.557 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:56.557 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:56.557 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:56.557 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:56.557 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.557 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.817 nvme0n1 00:35:56.817 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.817 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:56.817 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:56.817 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.817 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.817 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.817 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:35:56.817 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:56.817 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.817 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.817 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.817 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:56.817 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:35:56.817 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:56.817 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:56.817 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:56.817 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:56.818 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzBiY2MwN2U1MTE0MzQ3MDM0NjdlZGZiNzZiNDcyNThNwJ0P: 00:35:56.818 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjU0MGE3MzAwNDdkZDdiYzU3MzU0MTUzMmYzMjU3ODmZYVRj: 00:35:56.818 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:56.818 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:56.818 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzBiY2MwN2U1MTE0MzQ3MDM0NjdlZGZiNzZiNDcyNThNwJ0P: 00:35:56.818 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjU0MGE3MzAwNDdkZDdiYzU3MzU0MTUzMmYzMjU3ODmZYVRj: ]] 00:35:56.818 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjU0MGE3MzAwNDdkZDdiYzU3MzU0MTUzMmYzMjU3ODmZYVRj: 
00:35:56.818 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:35:56.818 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:56.818 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:56.818 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:56.818 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:56.818 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:56.818 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:56.818 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.818 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.818 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.818 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:56.818 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:56.818 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:56.818 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:56.818 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:56.818 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:56.818 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:56.818 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:56.818 12:04:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:56.818 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:56.818 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:56.818 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:56.818 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.818 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.078 nvme0n1 00:35:57.078 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.078 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:57.078 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:57.078 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.078 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.078 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.078 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:57.078 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:57.078 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.078 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.078 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.078 12:04:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:57.078 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:35:57.078 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:57.078 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:57.078 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:57.078 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:57.078 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2U1YWI0M2MzYzBiODlmNTFlOGE0MDFhMzU2NjFkY2MzZTM4YTVjYzUwNTcxMWZkdH91EQ==: 00:35:57.078 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWE5OTdiY2U5MmVhYTc2M2MyOWViYWQ3ZGIxYzU4MTHuLznb: 00:35:57.078 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:57.078 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:57.078 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2U1YWI0M2MzYzBiODlmNTFlOGE0MDFhMzU2NjFkY2MzZTM4YTVjYzUwNTcxMWZkdH91EQ==: 00:35:57.078 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWE5OTdiY2U5MmVhYTc2M2MyOWViYWQ3ZGIxYzU4MTHuLznb: ]] 00:35:57.078 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWE5OTdiY2U5MmVhYTc2M2MyOWViYWQ3ZGIxYzU4MTHuLznb: 00:35:57.078 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:35:57.078 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:57.078 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:57.078 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 
00:35:57.078 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:57.079 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:57.079 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:57.079 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.079 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.079 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.079 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:57.079 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:57.079 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:57.079 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:57.079 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:57.079 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:57.079 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:57.079 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:57.079 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:57.079 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:57.079 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:57.079 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:57.079 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.079 12:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.339 nvme0n1 00:35:57.339 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.339 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:57.339 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.339 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.339 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:57.339 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.339 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:57.339 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:57.339 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.339 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.339 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.339 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:57.339 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:35:57.339 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:57.339 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:57.339 12:04:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:57.339 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:57.339 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2YzMTU0ZmMyYjliNDE4YzIwYzNlOGU0M2JhZDk2NzM5NDUzODM2ODM0MjBmNWJkNWU1MTk2MDA0ZGM3MmVkMK0NDKA=: 00:35:57.339 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:57.339 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:57.339 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:57.339 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2YzMTU0ZmMyYjliNDE4YzIwYzNlOGU0M2JhZDk2NzM5NDUzODM2ODM0MjBmNWJkNWU1MTk2MDA0ZGM3MmVkMK0NDKA=: 00:35:57.339 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:57.339 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:35:57.339 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:57.339 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:57.339 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:57.339 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:57.339 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:57.339 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:57.339 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.339 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.339 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.339 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:57.339 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:57.339 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:57.339 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:57.339 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:57.339 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:57.339 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:57.339 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:57.339 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:57.339 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:57.339 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:57.339 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:57.339 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.339 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.599 nvme0n1 00:35:57.599 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.599 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:57.599 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:35:57.599 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:57.599 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:57.599 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:57.599 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:57.599 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:57.600 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:57.600 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:57.600 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:57.600 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:35:57.600 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:57.600 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0
00:35:57.600 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:57.600 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:35:57.600 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:35:57.600 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:35:57.600 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGE2MmIzMDI4MWUyYjk3MDcyOTYwOGYwODg4Mzg3MjZJ2cL2:
00:35:57.600 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGZlZTZmMmI4YWVlNGM0ZGYzODJkMTY3NDgwMDhjODQ5ZWViM2NjZGZkNWUzMzYxM2Y4YThiM2VjNmViNzE0NX75PNQ=:
00:35:57.600 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:35:57.600 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:35:57.600 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGE2MmIzMDI4MWUyYjk3MDcyOTYwOGYwODg4Mzg3MjZJ2cL2:
00:35:57.600 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGZlZTZmMmI4YWVlNGM0ZGYzODJkMTY3NDgwMDhjODQ5ZWViM2NjZGZkNWUzMzYxM2Y4YThiM2VjNmViNzE0NX75PNQ=: ]]
00:35:57.600 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGZlZTZmMmI4YWVlNGM0ZGYzODJkMTY3NDgwMDhjODQ5ZWViM2NjZGZkNWUzMzYxM2Y4YThiM2VjNmViNzE0NX75PNQ=:
00:35:57.600 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0
00:35:57.600 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:57.600 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:35:57.600 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:35:57.600 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:35:57.600 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:57.600 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:35:57.600 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:57.600 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:57.600 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:57.600 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:57.600 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:57.600 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:57.600 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:57.600 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:57.600 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:57.600 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:35:57.600 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:57.600 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:35:57.600 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:35:57.600 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:35:57.600 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:35:57.600 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:57.600 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:57.858 nvme0n1
00:35:57.858 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:57.858 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:57.858 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:57.858 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:57.858 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:57.858 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:58.118 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:58.118 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:58.118 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:58.118 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:58.118 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:58.118 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:58.118 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1
00:35:58.118 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:58.118 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:35:58.118 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:35:58.118 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:35:58.118 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Nzg3YWQ4MmFjMjgyNmNmMjM5MDgxZTZkOTc1YWY1N2E2ZDNlMDJkNDFjNjdmYzEzV24L6Q==:
00:35:58.118 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTExMGI4ZmM4NzMwODM4ZWJmNTk5YmY2ZTE2YzM3MzE5ZDhmMDkyMjRhYzdmNTI5JgzHvg==:
00:35:58.118 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:35:58.118 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:35:58.118 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Nzg3YWQ4MmFjMjgyNmNmMjM5MDgxZTZkOTc1YWY1N2E2ZDNlMDJkNDFjNjdmYzEzV24L6Q==:
00:35:58.118 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTExMGI4ZmM4NzMwODM4ZWJmNTk5YmY2ZTE2YzM3MzE5ZDhmMDkyMjRhYzdmNTI5JgzHvg==: ]]
00:35:58.118 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTExMGI4ZmM4NzMwODM4ZWJmNTk5YmY2ZTE2YzM3MzE5ZDhmMDkyMjRhYzdmNTI5JgzHvg==:
00:35:58.118 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1
00:35:58.118 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:58.118 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:35:58.118 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:35:58.118 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:35:58.118 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:58.118 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:35:58.118 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:58.118 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:58.118 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:58.118 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:58.118 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:58.118 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:58.118 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:58.118 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:58.118 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:58.118 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:35:58.118 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:58.118 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:35:58.118 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:35:58.118 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:35:58.118 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:35:58.118 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:58.118 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:58.378 nvme0n1
00:35:58.378 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:58.378 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:58.378 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:58.378 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:58.378 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:58.378 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:58.378 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:58.378 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:58.378 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:58.378 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:58.378 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:58.378 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:58.378 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2
00:35:58.378 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:58.378 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:35:58.378 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:35:58.378 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:35:58.378 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzBiY2MwN2U1MTE0MzQ3MDM0NjdlZGZiNzZiNDcyNThNwJ0P:
00:35:58.378 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjU0MGE3MzAwNDdkZDdiYzU3MzU0MTUzMmYzMjU3ODmZYVRj:
00:35:58.378 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:35:58.378 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:35:58.378 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzBiY2MwN2U1MTE0MzQ3MDM0NjdlZGZiNzZiNDcyNThNwJ0P:
00:35:58.378 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjU0MGE3MzAwNDdkZDdiYzU3MzU0MTUzMmYzMjU3ODmZYVRj: ]]
00:35:58.378 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjU0MGE3MzAwNDdkZDdiYzU3MzU0MTUzMmYzMjU3ODmZYVRj:
00:35:58.378 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2
00:35:58.378 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:58.378 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:35:58.378 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:35:58.378 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:35:58.378 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:58.378 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:35:58.378 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:58.378 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:58.378 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:58.378 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:58.378 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:58.378 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:58.378 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:58.378 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:58.378 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:58.378 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:35:58.378 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:58.378 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:35:58.378 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:35:58.378 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:35:58.378 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:35:58.378 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:58.378 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:58.637 nvme0n1
00:35:58.637 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:58.637 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:58.637 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:58.637 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:58.637 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:58.637 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:58.637 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:58.637 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:58.637 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:58.637 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:58.637 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:58.637 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:58.637 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3
00:35:58.637 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:58.637 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:35:58.637 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:35:58.637 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:35:58.637 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2U1YWI0M2MzYzBiODlmNTFlOGE0MDFhMzU2NjFkY2MzZTM4YTVjYzUwNTcxMWZkdH91EQ==:
00:35:58.637 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWE5OTdiY2U5MmVhYTc2M2MyOWViYWQ3ZGIxYzU4MTHuLznb:
00:35:58.637 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:35:58.637 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:35:58.637 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2U1YWI0M2MzYzBiODlmNTFlOGE0MDFhMzU2NjFkY2MzZTM4YTVjYzUwNTcxMWZkdH91EQ==:
00:35:58.637 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWE5OTdiY2U5MmVhYTc2M2MyOWViYWQ3ZGIxYzU4MTHuLznb: ]]
00:35:58.637 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWE5OTdiY2U5MmVhYTc2M2MyOWViYWQ3ZGIxYzU4MTHuLznb:
00:35:58.637 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3
00:35:58.637 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:58.637 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:35:58.637 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:35:58.637 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:35:58.637 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:58.637 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:35:58.637 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:58.637 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:58.896 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:58.896 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:58.896 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:58.896 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:58.896 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:58.896 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:58.896 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:58.896 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:35:58.896 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:58.896 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:35:58.896 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:35:58.896 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:35:58.896 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:35:58.896 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:58.896 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:59.155 nvme0n1
00:35:59.155 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:59.155 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:59.155 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:59.155 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:59.155 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:59.155 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:59.155 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:59.155 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:59.155 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:59.155 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:59.155 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:59.155 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:59.155 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4
00:35:59.155 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:59.155 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:35:59.155 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:35:59.155 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:35:59.155 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2YzMTU0ZmMyYjliNDE4YzIwYzNlOGU0M2JhZDk2NzM5NDUzODM2ODM0MjBmNWJkNWU1MTk2MDA0ZGM3MmVkMK0NDKA=:
00:35:59.155 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:35:59.155 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:35:59.155 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:35:59.155 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2YzMTU0ZmMyYjliNDE4YzIwYzNlOGU0M2JhZDk2NzM5NDUzODM2ODM0MjBmNWJkNWU1MTk2MDA0ZGM3MmVkMK0NDKA=:
00:35:59.155 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:35:59.155 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4
00:35:59.155 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:59.155 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:35:59.155 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:35:59.155 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:35:59.155 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:59.155 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:35:59.155 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:59.155 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:59.155 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:59.155 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:59.155 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:59.155 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:59.155 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:59.155 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:59.155 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:59.155 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:35:59.155 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:59.155 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:35:59.156 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:35:59.156 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:35:59.156 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:35:59.156 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:59.156 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:59.416 nvme0n1
00:35:59.416 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:59.416 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:59.416 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:59.416 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:59.416 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:59.416 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:59.416 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:59.416 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:59.416 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:59.416 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:59.416 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:59.416 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:35:59.416 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:59.416 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0
00:35:59.416 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:59.416 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:35:59.416 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:35:59.416 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:35:59.416 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGE2MmIzMDI4MWUyYjk3MDcyOTYwOGYwODg4Mzg3MjZJ2cL2:
00:35:59.416 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGZlZTZmMmI4YWVlNGM0ZGYzODJkMTY3NDgwMDhjODQ5ZWViM2NjZGZkNWUzMzYxM2Y4YThiM2VjNmViNzE0NX75PNQ=:
00:35:59.416 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:35:59.416 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:35:59.416 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGE2MmIzMDI4MWUyYjk3MDcyOTYwOGYwODg4Mzg3MjZJ2cL2:
00:35:59.416 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGZlZTZmMmI4YWVlNGM0ZGYzODJkMTY3NDgwMDhjODQ5ZWViM2NjZGZkNWUzMzYxM2Y4YThiM2VjNmViNzE0NX75PNQ=: ]]
00:35:59.416 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGZlZTZmMmI4YWVlNGM0ZGYzODJkMTY3NDgwMDhjODQ5ZWViM2NjZGZkNWUzMzYxM2Y4YThiM2VjNmViNzE0NX75PNQ=:
00:35:59.416 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0
00:35:59.416 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:59.416 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:35:59.416 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:35:59.416 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:35:59.416 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:59.416 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:35:59.416 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:59.416 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:59.416 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:59.416 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:59.416 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:59.416 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:59.416 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:59.416 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:59.416 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:59.416 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:35:59.416 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:59.416 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:35:59.416 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:35:59.416 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:35:59.416 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:35:59.416 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:59.416 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:59.985 nvme0n1
00:35:59.985 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:59.985 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:59.985 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:59.985 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:59.985 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:59.985 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:59.985 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:59.985 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:59.985 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:59.985 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:59.985 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:59.985 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:59.985 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1
00:35:59.985 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:59.985 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:35:59.985 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:35:59.985 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:35:59.985 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Nzg3YWQ4MmFjMjgyNmNmMjM5MDgxZTZkOTc1YWY1N2E2ZDNlMDJkNDFjNjdmYzEzV24L6Q==:
00:35:59.985 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTExMGI4ZmM4NzMwODM4ZWJmNTk5YmY2ZTE2YzM3MzE5ZDhmMDkyMjRhYzdmNTI5JgzHvg==:
00:35:59.985 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:35:59.985 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:35:59.985 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Nzg3YWQ4MmFjMjgyNmNmMjM5MDgxZTZkOTc1YWY1N2E2ZDNlMDJkNDFjNjdmYzEzV24L6Q==:
00:35:59.985 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTExMGI4ZmM4NzMwODM4ZWJmNTk5YmY2ZTE2YzM3MzE5ZDhmMDkyMjRhYzdmNTI5JgzHvg==: ]]
00:35:59.985 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTExMGI4ZmM4NzMwODM4ZWJmNTk5YmY2ZTE2YzM3MzE5ZDhmMDkyMjRhYzdmNTI5JgzHvg==:
00:35:59.985 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1
00:35:59.985 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:59.985 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:35:59.985 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:35:59.985 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:35:59.985 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:59.985 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:35:59.985 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:59.985 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:00.245 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:00.245 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:36:00.245 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:36:00.245 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:36:00.245 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:36:00.245 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:36:00.245 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:36:00.245 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:36:00.245 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:36:00.245 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:36:00.245 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:36:00.245 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:36:00.245 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:36:00.245 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:00.245 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:00.810 nvme0n1
00:36:00.810 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:00.810 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:36:00.810 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:00.810 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:00.810 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:36:00.810 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:00.810 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:36:00.810 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:36:00.810 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:00.810 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:00.810 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:00.810 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:36:00.810 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2
00:36:00.810 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:36:00.810 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:36:00.810 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:36:00.810 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:36:00.810 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzBiY2MwN2U1MTE0MzQ3MDM0NjdlZGZiNzZiNDcyNThNwJ0P:
00:36:00.810 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjU0MGE3MzAwNDdkZDdiYzU3MzU0MTUzMmYzMjU3ODmZYVRj:
00:36:00.810 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:36:00.810 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:36:00.810 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzBiY2MwN2U1MTE0MzQ3MDM0NjdlZGZiNzZiNDcyNThNwJ0P:
00:36:00.810 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjU0MGE3MzAwNDdkZDdiYzU3MzU0MTUzMmYzMjU3ODmZYVRj: ]]
00:36:00.810 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjU0MGE3MzAwNDdkZDdiYzU3MzU0MTUzMmYzMjU3ODmZYVRj:
00:36:00.810 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2
00:36:00.810 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:36:00.810 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:36:00.810
12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:00.810 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:00.810 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:00.810 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:00.810 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.810 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.810 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.810 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:00.810 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:00.810 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:00.810 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:00.810 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:00.810 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:00.810 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:00.810 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:00.810 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:00.810 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:00.810 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:00.810 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:00.810 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.810 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.376 nvme0n1 00:36:01.376 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:01.376 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:01.376 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:01.376 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:01.376 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.376 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:01.376 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:01.376 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:01.376 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:01.376 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.376 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:01.376 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:01.376 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:36:01.376 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:01.376 12:04:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:01.376 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:01.376 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:01.376 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2U1YWI0M2MzYzBiODlmNTFlOGE0MDFhMzU2NjFkY2MzZTM4YTVjYzUwNTcxMWZkdH91EQ==: 00:36:01.376 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWE5OTdiY2U5MmVhYTc2M2MyOWViYWQ3ZGIxYzU4MTHuLznb: 00:36:01.376 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:01.376 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:01.376 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2U1YWI0M2MzYzBiODlmNTFlOGE0MDFhMzU2NjFkY2MzZTM4YTVjYzUwNTcxMWZkdH91EQ==: 00:36:01.376 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWE5OTdiY2U5MmVhYTc2M2MyOWViYWQ3ZGIxYzU4MTHuLznb: ]] 00:36:01.376 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWE5OTdiY2U5MmVhYTc2M2MyOWViYWQ3ZGIxYzU4MTHuLznb: 00:36:01.376 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:36:01.376 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:01.376 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:01.376 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:01.376 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:01.376 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:01.376 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:01.376 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:01.376 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.376 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:01.376 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:01.376 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:01.376 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:01.376 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:01.376 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:01.376 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:01.376 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:01.376 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:01.376 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:01.376 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:01.376 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:01.376 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:01.376 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:01.376 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:36:01.947 nvme0n1 00:36:01.947 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:01.947 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:01.947 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:01.947 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.947 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:01.947 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:01.947 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:01.947 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:01.947 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:01.947 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.947 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:01.947 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:01.947 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:36:01.947 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:01.947 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:01.947 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:01.947 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:01.947 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:M2YzMTU0ZmMyYjliNDE4YzIwYzNlOGU0M2JhZDk2NzM5NDUzODM2ODM0MjBmNWJkNWU1MTk2MDA0ZGM3MmVkMK0NDKA=: 00:36:01.947 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:01.947 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:01.947 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:01.947 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2YzMTU0ZmMyYjliNDE4YzIwYzNlOGU0M2JhZDk2NzM5NDUzODM2ODM0MjBmNWJkNWU1MTk2MDA0ZGM3MmVkMK0NDKA=: 00:36:01.947 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:01.947 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:36:01.947 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:01.947 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:01.947 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:01.947 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:01.947 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:01.947 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:01.947 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:01.947 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.947 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:01.947 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:01.947 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:01.947 
12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:01.947 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:01.947 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:01.947 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:01.947 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:01.947 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:01.947 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:01.947 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:01.947 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:01.947 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:01.947 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:01.947 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.516 nvme0n1 00:36:02.516 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.516 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:02.516 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.516 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.516 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:02.516 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.516 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:02.516 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:02.516 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.516 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.517 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.517 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:02.517 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:02.517 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:36:02.517 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:02.517 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:02.517 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:02.517 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:02.517 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGE2MmIzMDI4MWUyYjk3MDcyOTYwOGYwODg4Mzg3MjZJ2cL2: 00:36:02.517 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGZlZTZmMmI4YWVlNGM0ZGYzODJkMTY3NDgwMDhjODQ5ZWViM2NjZGZkNWUzMzYxM2Y4YThiM2VjNmViNzE0NX75PNQ=: 00:36:02.517 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:02.517 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:02.517 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MGE2MmIzMDI4MWUyYjk3MDcyOTYwOGYwODg4Mzg3MjZJ2cL2: 00:36:02.517 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGZlZTZmMmI4YWVlNGM0ZGYzODJkMTY3NDgwMDhjODQ5ZWViM2NjZGZkNWUzMzYxM2Y4YThiM2VjNmViNzE0NX75PNQ=: ]] 00:36:02.517 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGZlZTZmMmI4YWVlNGM0ZGYzODJkMTY3NDgwMDhjODQ5ZWViM2NjZGZkNWUzMzYxM2Y4YThiM2VjNmViNzE0NX75PNQ=: 00:36:02.517 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:36:02.517 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:02.517 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:02.517 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:02.517 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:02.517 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:02.517 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:02.517 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.517 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.517 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.517 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:02.517 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:02.517 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:02.517 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:02.517 12:04:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:02.517 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:02.517 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:02.776 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:02.776 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:02.776 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:02.776 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:02.776 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:02.777 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.777 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.714 nvme0n1 00:36:03.714 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:03.714 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:03.714 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:03.714 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.714 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.714 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:03.714 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:03.714 12:04:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:03.714 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.714 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.714 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:03.714 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:03.714 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:36:03.714 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:03.714 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:03.714 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:03.714 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:03.714 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Nzg3YWQ4MmFjMjgyNmNmMjM5MDgxZTZkOTc1YWY1N2E2ZDNlMDJkNDFjNjdmYzEzV24L6Q==: 00:36:03.714 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTExMGI4ZmM4NzMwODM4ZWJmNTk5YmY2ZTE2YzM3MzE5ZDhmMDkyMjRhYzdmNTI5JgzHvg==: 00:36:03.714 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:03.714 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:03.714 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Nzg3YWQ4MmFjMjgyNmNmMjM5MDgxZTZkOTc1YWY1N2E2ZDNlMDJkNDFjNjdmYzEzV24L6Q==: 00:36:03.714 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTExMGI4ZmM4NzMwODM4ZWJmNTk5YmY2ZTE2YzM3MzE5ZDhmMDkyMjRhYzdmNTI5JgzHvg==: ]] 00:36:03.714 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:MTExMGI4ZmM4NzMwODM4ZWJmNTk5YmY2ZTE2YzM3MzE5ZDhmMDkyMjRhYzdmNTI5JgzHvg==: 00:36:03.714 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:36:03.714 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:03.714 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:03.714 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:03.714 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:03.714 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:03.714 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:03.714 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.714 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.714 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:03.714 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:03.714 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:03.714 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:03.714 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:03.714 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:03.714 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:03.714 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:03.714 12:04:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:03.714 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:03.714 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:03.714 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:03.714 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:03.714 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.714 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.654 nvme0n1 00:36:04.654 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.654 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:04.654 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.654 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.654 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:04.654 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.654 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:04.654 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:04.654 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.654 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.654 12:04:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.654 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:04.654 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:36:04.654 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:04.654 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:04.654 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:04.654 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:04.654 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzBiY2MwN2U1MTE0MzQ3MDM0NjdlZGZiNzZiNDcyNThNwJ0P: 00:36:04.654 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjU0MGE3MzAwNDdkZDdiYzU3MzU0MTUzMmYzMjU3ODmZYVRj: 00:36:04.654 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:04.654 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:04.654 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzBiY2MwN2U1MTE0MzQ3MDM0NjdlZGZiNzZiNDcyNThNwJ0P: 00:36:04.654 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjU0MGE3MzAwNDdkZDdiYzU3MzU0MTUzMmYzMjU3ODmZYVRj: ]] 00:36:04.654 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjU0MGE3MzAwNDdkZDdiYzU3MzU0MTUzMmYzMjU3ODmZYVRj: 00:36:04.654 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:36:04.654 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:04.654 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:04.654 12:04:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:04.654 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:04.654 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:04.654 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:04.654 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.654 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.654 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.654 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:04.654 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:04.654 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:04.654 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:04.654 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:04.654 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:04.654 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:04.654 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:04.654 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:04.654 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:04.654 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:04.654 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:04.654 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.654 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.589 nvme0n1 00:36:05.589 12:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.589 12:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:05.589 12:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:05.589 12:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.589 12:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.589 12:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.589 12:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:05.589 12:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:05.589 12:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.589 12:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.589 12:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.589 12:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:05.589 12:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:36:05.589 12:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:05.589 12:04:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:05.589 12:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:05.589 12:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:05.589 12:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2U1YWI0M2MzYzBiODlmNTFlOGE0MDFhMzU2NjFkY2MzZTM4YTVjYzUwNTcxMWZkdH91EQ==: 00:36:05.589 12:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWE5OTdiY2U5MmVhYTc2M2MyOWViYWQ3ZGIxYzU4MTHuLznb: 00:36:05.589 12:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:05.589 12:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:05.589 12:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2U1YWI0M2MzYzBiODlmNTFlOGE0MDFhMzU2NjFkY2MzZTM4YTVjYzUwNTcxMWZkdH91EQ==: 00:36:05.589 12:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWE5OTdiY2U5MmVhYTc2M2MyOWViYWQ3ZGIxYzU4MTHuLznb: ]] 00:36:05.589 12:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWE5OTdiY2U5MmVhYTc2M2MyOWViYWQ3ZGIxYzU4MTHuLznb: 00:36:05.589 12:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:36:05.589 12:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:05.589 12:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:05.589 12:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:05.589 12:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:05.589 12:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:05.589 12:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:05.589 12:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.589 12:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.589 12:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.589 12:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:05.589 12:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:05.589 12:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:05.589 12:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:05.589 12:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:05.589 12:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:05.589 12:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:05.589 12:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:05.589 12:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:05.589 12:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:05.589 12:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:05.589 12:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:05.589 12:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.589 12:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:36:06.967 nvme0n1 00:36:06.967 12:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.967 12:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:06.967 12:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:06.967 12:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.967 12:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.967 12:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.967 12:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:06.967 12:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:06.967 12:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.967 12:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.967 12:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.967 12:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:06.967 12:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:36:06.967 12:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:06.967 12:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:06.967 12:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:06.967 12:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:06.967 12:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:M2YzMTU0ZmMyYjliNDE4YzIwYzNlOGU0M2JhZDk2NzM5NDUzODM2ODM0MjBmNWJkNWU1MTk2MDA0ZGM3MmVkMK0NDKA=: 00:36:06.967 12:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:06.967 12:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:06.967 12:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:06.967 12:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2YzMTU0ZmMyYjliNDE4YzIwYzNlOGU0M2JhZDk2NzM5NDUzODM2ODM0MjBmNWJkNWU1MTk2MDA0ZGM3MmVkMK0NDKA=: 00:36:06.967 12:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:06.967 12:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:36:06.967 12:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:06.967 12:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:06.967 12:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:06.967 12:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:06.967 12:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:06.967 12:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:06.967 12:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.967 12:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.967 12:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.968 12:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:06.968 12:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:06.968 
12:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:06.968 12:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:06.968 12:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:06.968 12:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:06.968 12:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:06.968 12:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:06.968 12:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:06.968 12:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:06.968 12:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:06.968 12:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:06.968 12:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.968 12:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.906 nvme0n1 00:36:07.906 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.906 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:07.906 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:07.906 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.906 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.906 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.906 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:07.906 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:07.906 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.906 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.906 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.906 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:36:07.906 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:07.906 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:07.906 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:07.906 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:07.906 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Nzg3YWQ4MmFjMjgyNmNmMjM5MDgxZTZkOTc1YWY1N2E2ZDNlMDJkNDFjNjdmYzEzV24L6Q==: 00:36:07.906 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTExMGI4ZmM4NzMwODM4ZWJmNTk5YmY2ZTE2YzM3MzE5ZDhmMDkyMjRhYzdmNTI5JgzHvg==: 00:36:07.906 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:07.906 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:07.906 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Nzg3YWQ4MmFjMjgyNmNmMjM5MDgxZTZkOTc1YWY1N2E2ZDNlMDJkNDFjNjdmYzEzV24L6Q==: 00:36:07.906 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTExMGI4ZmM4NzMwODM4ZWJmNTk5YmY2ZTE2YzM3MzE5ZDhmMDkyMjRhYzdmNTI5JgzHvg==: ]] 00:36:07.906 
12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTExMGI4ZmM4NzMwODM4ZWJmNTk5YmY2ZTE2YzM3MzE5ZDhmMDkyMjRhYzdmNTI5JgzHvg==: 00:36:07.906 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:07.906 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.906 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.906 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.906 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:36:07.906 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:07.906 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:07.906 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:07.906 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:07.906 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:07.906 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:07.906 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:07.906 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:07.906 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:07.906 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:07.906 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 00:36:07.906 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:36:07.906 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:36:07.906 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:36:07.906 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:07.906 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:36:07.906 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:07.906 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:36:07.906 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.906 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.906 request: 00:36:07.906 { 00:36:07.906 "name": "nvme0", 00:36:07.906 "trtype": "tcp", 00:36:07.906 "traddr": "10.0.0.1", 00:36:07.906 "adrfam": "ipv4", 00:36:07.906 "trsvcid": "4420", 00:36:07.906 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:36:07.906 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:36:07.906 "prchk_reftag": false, 00:36:07.906 "prchk_guard": false, 00:36:07.906 "hdgst": false, 00:36:07.906 "ddgst": false, 00:36:07.906 "allow_unrecognized_csi": false, 00:36:07.906 "method": "bdev_nvme_attach_controller", 00:36:07.906 "req_id": 1 00:36:07.906 } 00:36:07.906 Got JSON-RPC error response 00:36:07.906 response: 00:36:07.906 { 00:36:07.906 "code": -5, 00:36:07.906 "message": "Input/output 
error" 00:36:07.906 } 00:36:07.906 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:36:07.906 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:36:07.906 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:07.906 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:07.906 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:07.906 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:36:07.906 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.906 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:36:07.906 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.906 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.906 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:36:07.906 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:36:07.906 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:07.906 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:07.906 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:07.906 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:07.906 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:07.906 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:07.906 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z 
NVMF_INITIATOR_IP ]] 00:36:07.906 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:07.906 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:07.906 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:07.906 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:36:07.906 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:36:07.906 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:36:07.906 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:36:07.907 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:07.907 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:36:07.907 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:07.907 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:36:07.907 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.907 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.907 request: 00:36:07.907 { 00:36:07.907 "name": "nvme0", 00:36:07.907 "trtype": "tcp", 00:36:07.907 "traddr": "10.0.0.1", 
00:36:07.907 "adrfam": "ipv4", 00:36:07.907 "trsvcid": "4420", 00:36:07.907 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:36:07.907 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:36:07.907 "prchk_reftag": false, 00:36:07.907 "prchk_guard": false, 00:36:07.907 "hdgst": false, 00:36:07.907 "ddgst": false, 00:36:07.907 "dhchap_key": "key2", 00:36:07.907 "allow_unrecognized_csi": false, 00:36:07.907 "method": "bdev_nvme_attach_controller", 00:36:07.907 "req_id": 1 00:36:07.907 } 00:36:07.907 Got JSON-RPC error response 00:36:07.907 response: 00:36:07.907 { 00:36:07.907 "code": -5, 00:36:07.907 "message": "Input/output error" 00:36:07.907 } 00:36:07.907 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:36:07.907 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:36:07.907 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:07.907 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:07.907 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:07.907 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:36:07.907 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:36:07.907 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.907 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.907 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.907 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:36:07.907 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:36:07.907 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:07.907 12:04:33 
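The expected-failure attach above can be seen as a single JSON-RPC exchange: the host offers only `--dhchap-key key2` while the target side had last been keyed via `nvmet_auth_set_key sha256 ffdhe2048 1`, so authentication fails and the RPC returns `-5` (Input/output error), which the `NOT` wrapper then asserts. The sketch below is not SPDK code; it merely re-assembles the logged request and response dictionaries (all field names and values are copied verbatim from the log) to make the payload shape easier to read.

```python
# Minimal sketch: the JSON-RPC request/response pair logged above for the
# key-mismatch attach attempt. Values are taken verbatim from the log;
# this is illustrative only, not how SPDK's rpc.py builds its envelope.
import json

req = {
    "name": "nvme0",
    "trtype": "tcp",
    "traddr": "10.0.0.1",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2024-02.io.spdk:cnode0",
    "hostnqn": "nqn.2024-02.io.spdk:host0",
    "prchk_reftag": False,
    "prchk_guard": False,
    "hdgst": False,
    "ddgst": False,
    "dhchap_key": "key2",          # host key does not match target's keyid 1
    "allow_unrecognized_csi": False,
    "method": "bdev_nvme_attach_controller",
    "req_id": 1,
}
resp = {"code": -5, "message": "Input/output error"}  # auth failure surfaces as -5

print(json.dumps({"request": req, "response": resp}, indent=2))
```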
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:07.907 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:07.907 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:07.907 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:07.907 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:07.907 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:07.907 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:07.907 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:07.907 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:07.907 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:07.907 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:36:07.907 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:07.907 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:36:07.907 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:07.907 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:36:07.907 12:04:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:07.907 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:07.907 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.907 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.167 request: 00:36:08.167 { 00:36:08.167 "name": "nvme0", 00:36:08.167 "trtype": "tcp", 00:36:08.167 "traddr": "10.0.0.1", 00:36:08.167 "adrfam": "ipv4", 00:36:08.167 "trsvcid": "4420", 00:36:08.167 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:36:08.167 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:36:08.167 "prchk_reftag": false, 00:36:08.167 "prchk_guard": false, 00:36:08.167 "hdgst": false, 00:36:08.167 "ddgst": false, 00:36:08.167 "dhchap_key": "key1", 00:36:08.167 "dhchap_ctrlr_key": "ckey2", 00:36:08.167 "allow_unrecognized_csi": false, 00:36:08.167 "method": "bdev_nvme_attach_controller", 00:36:08.167 "req_id": 1 00:36:08.167 } 00:36:08.167 Got JSON-RPC error response 00:36:08.167 response: 00:36:08.167 { 00:36:08.167 "code": -5, 00:36:08.167 "message": "Input/output error" 00:36:08.167 } 00:36:08.167 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:36:08.167 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:36:08.167 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:08.167 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:08.167 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:08.167 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@128 -- # get_main_ns_ip 00:36:08.167 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:08.167 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:08.167 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:08.167 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:08.167 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:08.167 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:08.167 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:08.167 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:08.167 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:08.167 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:08.167 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:36:08.167 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.167 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.167 nvme0n1 00:36:08.167 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.167 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:36:08.167 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:08.167 12:04:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:08.167 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:08.167 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:08.167 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzBiY2MwN2U1MTE0MzQ3MDM0NjdlZGZiNzZiNDcyNThNwJ0P: 00:36:08.167 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjU0MGE3MzAwNDdkZDdiYzU3MzU0MTUzMmYzMjU3ODmZYVRj: 00:36:08.167 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:08.167 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:08.167 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzBiY2MwN2U1MTE0MzQ3MDM0NjdlZGZiNzZiNDcyNThNwJ0P: 00:36:08.167 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjU0MGE3MzAwNDdkZDdiYzU3MzU0MTUzMmYzMjU3ODmZYVRj: ]] 00:36:08.167 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjU0MGE3MzAwNDdkZDdiYzU3MzU0MTUzMmYzMjU3ODmZYVRj: 00:36:08.167 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:08.167 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.167 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.426 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.426 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:36:08.426 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:36:08.426 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.426 12:04:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.426 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.426 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:08.426 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:08.426 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:36:08.426 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:08.426 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:36:08.426 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:08.426 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:36:08.426 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:08.426 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:08.426 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.426 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.426 request: 00:36:08.426 { 00:36:08.426 "name": "nvme0", 00:36:08.426 "dhchap_key": "key1", 00:36:08.426 "dhchap_ctrlr_key": "ckey2", 00:36:08.426 "method": "bdev_nvme_set_keys", 00:36:08.426 "req_id": 1 00:36:08.426 } 00:36:08.426 Got JSON-RPC error response 00:36:08.426 response: 00:36:08.426 { 00:36:08.426 "code": -13, 00:36:08.426 "message": "Permission denied" 00:36:08.426 } 00:36:08.426 
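The `bdev_nvme_set_keys` case above exercises live re-keying of an attached controller: passing `--dhchap-key key1 --dhchap-ctrlr-key ckey2` (keys that do not match the established session) is rejected with `-13` (Permission denied) rather than `-5`, since the controller refuses the key change instead of failing I/O. As a reading aid only, the logged request and response can be reconstructed like this (values copied verbatim from the log; not SPDK code):

```python
# Minimal sketch of the rejected re-key RPC logged above. The mismatched
# key pair is refused with -13 (Permission denied); values are verbatim
# from the log, and this dict is illustrative only.
import json

req = {
    "name": "nvme0",
    "dhchap_key": "key1",        # does not pair with ckey2 on this session
    "dhchap_ctrlr_key": "ckey2",
    "method": "bdev_nvme_set_keys",
    "req_id": 1,
}
resp = {"code": -13, "message": "Permission denied"}

print(json.dumps({"request": req, "response": resp}, indent=2))
```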
12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:36:08.426 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:36:08.426 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:08.426 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:08.426 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:08.426 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:36:08.426 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:36:08.426 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.426 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.426 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.426 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:36:08.426 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:36:09.363 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:36:09.364 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.364 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.364 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:36:09.364 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.622 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:36:09.622 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:36:10.558 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:36:10.558 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:36:10.558 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:10.558 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.558 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:10.558 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:36:10.558 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:36:10.558 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:10.558 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:10.558 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:10.558 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:10.558 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Nzg3YWQ4MmFjMjgyNmNmMjM5MDgxZTZkOTc1YWY1N2E2ZDNlMDJkNDFjNjdmYzEzV24L6Q==: 00:36:10.558 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTExMGI4ZmM4NzMwODM4ZWJmNTk5YmY2ZTE2YzM3MzE5ZDhmMDkyMjRhYzdmNTI5JgzHvg==: 00:36:10.558 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:10.558 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:10.558 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Nzg3YWQ4MmFjMjgyNmNmMjM5MDgxZTZkOTc1YWY1N2E2ZDNlMDJkNDFjNjdmYzEzV24L6Q==: 00:36:10.558 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTExMGI4ZmM4NzMwODM4ZWJmNTk5YmY2ZTE2YzM3MzE5ZDhmMDkyMjRhYzdmNTI5JgzHvg==: ]] 00:36:10.558 12:04:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTExMGI4ZmM4NzMwODM4ZWJmNTk5YmY2ZTE2YzM3MzE5ZDhmMDkyMjRhYzdmNTI5JgzHvg==: 00:36:10.558 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:36:10.558 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:10.558 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:10.558 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:10.558 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:10.558 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:10.558 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:10.558 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:10.558 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:10.558 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:10.558 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:10.558 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:36:10.558 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:10.559 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.819 nvme0n1 00:36:10.819 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:10.819 12:04:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:36:10.819 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:10.819 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:10.819 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:10.819 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:10.819 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzBiY2MwN2U1MTE0MzQ3MDM0NjdlZGZiNzZiNDcyNThNwJ0P: 00:36:10.819 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjU0MGE3MzAwNDdkZDdiYzU3MzU0MTUzMmYzMjU3ODmZYVRj: 00:36:10.819 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:10.819 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:10.819 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzBiY2MwN2U1MTE0MzQ3MDM0NjdlZGZiNzZiNDcyNThNwJ0P: 00:36:10.819 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjU0MGE3MzAwNDdkZDdiYzU3MzU0MTUzMmYzMjU3ODmZYVRj: ]] 00:36:10.819 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjU0MGE3MzAwNDdkZDdiYzU3MzU0MTUzMmYzMjU3ODmZYVRj: 00:36:10.819 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:36:10.819 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:36:10.819 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:36:10.819 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:36:10.819 
12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:10.819 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:36:10.819 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:10.819 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:36:10.819 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:10.819 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.819 request: 00:36:10.819 { 00:36:10.819 "name": "nvme0", 00:36:10.819 "dhchap_key": "key2", 00:36:10.819 "dhchap_ctrlr_key": "ckey1", 00:36:10.819 "method": "bdev_nvme_set_keys", 00:36:10.819 "req_id": 1 00:36:10.819 } 00:36:10.819 Got JSON-RPC error response 00:36:10.819 response: 00:36:10.819 { 00:36:10.819 "code": -13, 00:36:10.819 "message": "Permission denied" 00:36:10.819 } 00:36:10.819 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:36:10.819 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:36:10.819 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:10.819 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:10.819 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:10.819 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:36:10.819 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:10.819 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.819 12:04:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:36:10.819 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:10.819 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:36:10.819 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:36:11.756 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:36:11.756 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.756 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.756 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:36:11.756 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:11.756 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:36:11.756 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:36:13.137 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:36:13.137 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:36:13.137 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.137 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.137 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.137 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:36:13.137 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:36:13.137 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:36:13.137 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:36:13.137 12:04:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:13.137 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:36:13.137 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:13.137 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:36:13.137 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:13.137 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:13.137 rmmod nvme_tcp 00:36:13.137 rmmod nvme_fabrics 00:36:13.137 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:13.137 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:36:13.137 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:36:13.138 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 3119147 ']' 00:36:13.138 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 3119147 00:36:13.138 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 3119147 ']' 00:36:13.138 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 3119147 00:36:13.138 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:36:13.138 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:13.138 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3119147 00:36:13.138 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:13.138 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:13.138 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 3119147' 00:36:13.138 killing process with pid 3119147 00:36:13.138 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 3119147 00:36:13.138 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 3119147 00:36:14.078 12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:14.078 12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:14.078 12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:14.078 12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:36:14.078 12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:36:14.078 12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:14.078 12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:36:14.078 12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:14.078 12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:14.078 12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:14.078 12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:14.078 12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:16.022 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:16.022 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:36:16.022 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # 
rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:36:16.022 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:36:16.022 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:36:16.022 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:36:16.022 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:16.022 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:36:16.022 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:36:16.022 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:16.022 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:36:16.022 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:36:16.022 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:16.959 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:17.217 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:17.217 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:17.217 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:17.217 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:17.217 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:17.217 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:17.217 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:17.217 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:17.217 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:17.217 
0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:17.217 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:17.217 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:17.217 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:17.217 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:17.217 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:18.154 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:36:18.154 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.lhX /tmp/spdk.key-null.BLA /tmp/spdk.key-sha256.NP4 /tmp/spdk.key-sha384.qfy /tmp/spdk.key-sha512.Cwn /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:36:18.154 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:19.532 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:36:19.532 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:36:19.532 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:36:19.532 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:36:19.532 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:36:19.532 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:36:19.532 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:36:19.532 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:36:19.532 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:36:19.532 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:36:19.532 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:36:19.532 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:36:19.532 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:36:19.532 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:36:19.532 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:36:19.532 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 
00:36:19.532 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:36:19.532 00:36:19.532 real 0m56.603s 00:36:19.532 user 0m53.716s 00:36:19.532 sys 0m6.411s 00:36:19.532 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:19.532 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.532 ************************************ 00:36:19.532 END TEST nvmf_auth_host 00:36:19.532 ************************************ 00:36:19.532 12:04:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:36:19.532 12:04:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:36:19.532 12:04:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:36:19.532 12:04:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:19.532 12:04:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.532 ************************************ 00:36:19.532 START TEST nvmf_digest 00:36:19.532 ************************************ 00:36:19.532 12:04:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:36:19.790 * Looking for test storage... 
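The `run_test nvmf_digest ...` invocation above brackets each test script with `START TEST`/`END TEST` banners and a real/user/sys timing summary, as seen at the end of `nvmf_auth_host`. A minimal sketch of that wrapping pattern (`run_test_sketch` is a hypothetical name, not SPDK's actual `run_test` from autotest_common.sh):

```shell
# Hypothetical sketch of a run_test-style wrapper: run a command,
# print START/END banners around it, and report its exit status.
run_test_sketch() {
    local name=$1; shift
    echo "START TEST $name"
    local start=$SECONDS rc=0
    "$@" || rc=$?        # capture failure without aborting under set -e
    echo "END TEST $name (rc=$rc, $((SECONDS - start))s)"
    return $rc
}

run_test_sketch demo_test true
```

The `|| rc=$?` form is the detail that matters: it records the wrapped command's status so the END banner always prints, while still propagating the result to the caller.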
00:36:19.791 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:19.791 12:04:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:19.791 12:04:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:36:19.791 12:04:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:19.791 12:04:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:19.791 12:04:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:19.791 12:04:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:19.791 12:04:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:19.791 12:04:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:36:19.791 12:04:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:36:19.791 12:04:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:36:19.791 12:04:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:36:19.791 12:04:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:36:19.791 12:04:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:36:19.791 12:04:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:36:19.791 12:04:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:19.791 12:04:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:36:19.791 12:04:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:36:19.791 12:04:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:19.791 12:04:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:19.791 12:04:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:36:19.791 12:04:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:36:19.791 12:04:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:19.791 12:04:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:36:19.791 12:04:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:36:19.791 12:04:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:36:19.791 12:04:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:36:19.791 12:04:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:19.791 12:04:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:36:19.791 12:04:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:36:19.791 12:04:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:19.791 12:04:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:19.791 12:04:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:36:19.791 12:04:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:19.791 12:04:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:19.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:19.791 --rc genhtml_branch_coverage=1 00:36:19.791 --rc genhtml_function_coverage=1 00:36:19.791 --rc genhtml_legend=1 00:36:19.791 --rc geninfo_all_blocks=1 00:36:19.791 --rc geninfo_unexecuted_blocks=1 00:36:19.791 00:36:19.791 ' 00:36:19.791 12:04:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:19.791 --rc lcov_branch_coverage=1 
--rc lcov_function_coverage=1 00:36:19.791 --rc genhtml_branch_coverage=1 00:36:19.791 --rc genhtml_function_coverage=1 00:36:19.791 --rc genhtml_legend=1 00:36:19.791 --rc geninfo_all_blocks=1 00:36:19.791 --rc geninfo_unexecuted_blocks=1 00:36:19.791 00:36:19.791 ' 00:36:19.791 12:04:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:19.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:19.791 --rc genhtml_branch_coverage=1 00:36:19.791 --rc genhtml_function_coverage=1 00:36:19.791 --rc genhtml_legend=1 00:36:19.791 --rc geninfo_all_blocks=1 00:36:19.791 --rc geninfo_unexecuted_blocks=1 00:36:19.791 00:36:19.791 ' 00:36:19.791 12:04:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:19.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:19.791 --rc genhtml_branch_coverage=1 00:36:19.791 --rc genhtml_function_coverage=1 00:36:19.791 --rc genhtml_legend=1 00:36:19.791 --rc geninfo_all_blocks=1 00:36:19.791 --rc geninfo_unexecuted_blocks=1 00:36:19.791 00:36:19.791 ' 00:36:19.791 12:04:45 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:19.791 12:04:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:36:19.791 12:04:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:19.791 12:04:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:19.791 12:04:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:19.791 12:04:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:19.791 12:04:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:19.791 12:04:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:19.791 12:04:45 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:19.791 12:04:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:19.791 12:04:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:19.791 12:04:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:19.791 12:04:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:19.791 12:04:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:19.791 12:04:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:19.791 12:04:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:19.791 12:04:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:19.791 12:04:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:19.791 12:04:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:19.791 12:04:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:36:19.791 12:04:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:19.791 12:04:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:19.791 12:04:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:19.791 12:04:45 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:19.791 12:04:45 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:19.791 12:04:45 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:19.791 12:04:45 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # 
export PATH 00:36:19.791 12:04:45 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:19.791 12:04:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:36:19.791 12:04:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:19.791 12:04:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:19.791 12:04:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:19.791 12:04:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:19.791 12:04:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:19.791 12:04:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:19.791 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:19.791 12:04:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:19.791 12:04:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:19.791 12:04:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:19.791 12:04:45 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:36:19.791 12:04:45 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- 
# bperfsock=/var/tmp/bperf.sock 00:36:19.791 12:04:45 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:36:19.791 12:04:45 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:36:19.791 12:04:45 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:36:19.791 12:04:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:19.792 12:04:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:19.792 12:04:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:19.792 12:04:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:19.792 12:04:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:19.792 12:04:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:19.792 12:04:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:19.792 12:04:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:19.792 12:04:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:19.792 12:04:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:19.792 12:04:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:36:19.792 12:04:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:36:21.708 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:21.708 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:36:21.708 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:21.708 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:21.708 
12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:21.708 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:21.708 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:21.708 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:36:21.708 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:21.708 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:36:21.709 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:36:21.709 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:36:21.709 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:36:21.709 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:36:21.709 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:36:21.709 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:21.709 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:21.709 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:21.709 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:21.709 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:21.709 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:21.709 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:21.709 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:21.709 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:21.709 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:21.709 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:21.709 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:21.709 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:21.709 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:21.709 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:21.709 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:21.709 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:21.709 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:21.709 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:21.709 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:36:21.709 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:36:21.709 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:21.709 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:21.709 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:21.709 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:21.709 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:21.709 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:21.709 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:36:21.709 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:36:21.709 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:21.709 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:21.709 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:21.709 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:21.709 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:21.709 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:21.709 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:21.709 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:21.709 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:21.709 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:21.709 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:21.709 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:21.709 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:21.709 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:21.709 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:21.709 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:36:21.709 Found net devices under 0000:0a:00.0: cvl_0_0 00:36:21.709 
12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:21.709 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:21.709 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:21.709 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:21.709 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:21.709 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:21.709 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:21.709 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:21.709 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:36:21.709 Found net devices under 0000:0a:00.1: cvl_0_1 00:36:21.709 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:21.709 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:21.709 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:36:21.709 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:21.709 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:21.709 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:21.709 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:21.709 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:21.709 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:21.709 12:04:47 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:21.709 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:21.709 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:21.709 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:21.709 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:21.709 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:21.709 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:21.709 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:21.709 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:21.709 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:21.709 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:21.709 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:21.970 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:21.970 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:21.970 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:21.970 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:21.970 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:21.970 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest 
-- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:21.970 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:21.970 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:21.970 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:21.970 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:36:21.970 00:36:21.970 --- 10.0.0.2 ping statistics --- 00:36:21.970 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:21.970 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:36:21.970 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:21.970 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:21.970 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:36:21.970 00:36:21.970 --- 10.0.0.1 ping statistics --- 00:36:21.970 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:21.970 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:36:21.970 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:21.970 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:36:21.970 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:21.970 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:21.970 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:21.970 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:21.970 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:21.970 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:21.970 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:21.970 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:36:21.970 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:36:21.970 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:36:21.970 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:21.970 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:21.970 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:36:21.970 ************************************ 00:36:21.970 START TEST nvmf_digest_clean 00:36:21.970 ************************************ 00:36:21.970 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:36:21.970 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:36:21.970 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:36:21.970 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:36:21.970 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:36:21.970 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:36:21.970 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:21.970 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:21.970 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@10 -- # set +x 00:36:21.970 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=3129412 00:36:21.970 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:36:21.970 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 3129412 00:36:21.970 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3129412 ']' 00:36:21.970 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:21.970 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:21.970 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:21.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:21.970 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:21.970 12:04:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:21.970 [2024-11-18 12:04:47.827191] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:36:21.970 [2024-11-18 12:04:47.827330] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:22.230 [2024-11-18 12:04:47.968404] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:22.230 [2024-11-18 12:04:48.086691] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:22.230 [2024-11-18 12:04:48.086788] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:22.230 [2024-11-18 12:04:48.086820] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:22.230 [2024-11-18 12:04:48.086856] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:22.230 [2024-11-18 12:04:48.086874] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:36:22.230 [2024-11-18 12:04:48.088338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:23.169 12:04:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:23.169 12:04:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:36:23.169 12:04:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:23.169 12:04:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:23.169 12:04:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:23.169 12:04:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:23.169 12:04:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:36:23.169 12:04:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:36:23.169 12:04:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:36:23.169 12:04:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.169 12:04:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:23.427 null0 00:36:23.427 [2024-11-18 12:04:49.232391] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:23.427 [2024-11-18 12:04:49.256707] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:23.427 12:04:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.427 12:04:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:36:23.427 12:04:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:36:23.427 12:04:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:36:23.427 12:04:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:36:23.427 12:04:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:36:23.427 12:04:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:36:23.427 12:04:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:36:23.427 12:04:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3129567 00:36:23.427 12:04:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:36:23.427 12:04:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3129567 /var/tmp/bperf.sock 00:36:23.427 12:04:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3129567 ']' 00:36:23.427 12:04:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:23.427 12:04:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:23.427 12:04:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:23.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:36:23.427 12:04:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:23.427 12:04:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:23.685 [2024-11-18 12:04:49.343228] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:36:23.685 [2024-11-18 12:04:49.343369] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3129567 ] 00:36:23.685 [2024-11-18 12:04:49.485896] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:23.942 [2024-11-18 12:04:49.624476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:24.509 12:04:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:24.509 12:04:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:36:24.509 12:04:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:36:24.509 12:04:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:36:24.509 12:04:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:25.076 12:04:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:25.076 12:04:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 
-s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:25.649 nvme0n1 00:36:25.649 12:04:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:36:25.649 12:04:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:25.649 Running I/O for 2 seconds... 00:36:27.971 14420.00 IOPS, 56.33 MiB/s [2024-11-18T11:04:53.856Z] 14092.00 IOPS, 55.05 MiB/s 00:36:27.971 Latency(us) 00:36:27.971 [2024-11-18T11:04:53.856Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:27.971 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:36:27.971 nvme0n1 : 2.04 13845.58 54.08 0.00 0.00 9055.53 4684.61 47768.46 00:36:27.971 [2024-11-18T11:04:53.856Z] =================================================================================================================== 00:36:27.971 [2024-11-18T11:04:53.856Z] Total : 13845.58 54.08 0.00 0.00 9055.53 4684.61 47768.46 00:36:27.971 { 00:36:27.971 "results": [ 00:36:27.971 { 00:36:27.971 "job": "nvme0n1", 00:36:27.971 "core_mask": "0x2", 00:36:27.971 "workload": "randread", 00:36:27.971 "status": "finished", 00:36:27.971 "queue_depth": 128, 00:36:27.971 "io_size": 4096, 00:36:27.971 "runtime": 2.044841, 00:36:27.971 "iops": 13845.575279447155, 00:36:27.971 "mibps": 54.08427843534045, 00:36:27.971 "io_failed": 0, 00:36:27.971 "io_timeout": 0, 00:36:27.971 "avg_latency_us": 9055.531002898915, 00:36:27.971 "min_latency_us": 4684.61037037037, 00:36:27.971 "max_latency_us": 47768.462222222224 00:36:27.971 } 00:36:27.971 ], 00:36:27.971 "core_count": 1 00:36:27.971 } 00:36:27.971 12:04:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:36:27.971 12:04:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # 
get_accel_stats 00:36:27.971 12:04:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:36:27.971 12:04:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:36:27.971 12:04:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:36:27.971 | select(.opcode=="crc32c") 00:36:27.971 | "\(.module_name) \(.executed)"' 00:36:27.971 12:04:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:36:27.971 12:04:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:36:27.971 12:04:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:36:27.971 12:04:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:36:27.971 12:04:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3129567 00:36:27.971 12:04:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3129567 ']' 00:36:27.971 12:04:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3129567 00:36:27.971 12:04:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:36:27.971 12:04:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:27.971 12:04:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3129567 00:36:28.230 12:04:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:28.230 12:04:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:28.230 12:04:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3129567' 00:36:28.230 killing process with pid 3129567 00:36:28.230 12:04:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3129567 00:36:28.230 Received shutdown signal, test time was about 2.000000 seconds 00:36:28.230 00:36:28.230 Latency(us) 00:36:28.230 [2024-11-18T11:04:54.115Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:28.230 [2024-11-18T11:04:54.115Z] =================================================================================================================== 00:36:28.230 [2024-11-18T11:04:54.115Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:28.230 12:04:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3129567 00:36:29.164 12:04:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:36:29.165 12:04:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:36:29.165 12:04:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:36:29.165 12:04:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:36:29.165 12:04:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:36:29.165 12:04:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:36:29.165 12:04:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:36:29.165 12:04:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3130228 00:36:29.165 12:04:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:36:29.165 12:04:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3130228 /var/tmp/bperf.sock 00:36:29.165 12:04:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3130228 ']' 00:36:29.165 12:04:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:29.165 12:04:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:29.165 12:04:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:29.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:29.165 12:04:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:29.165 12:04:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:29.165 [2024-11-18 12:04:54.820010] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:36:29.165 [2024-11-18 12:04:54.820135] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3130228 ] 00:36:29.165 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:29.165 Zero copy mechanism will not be used. 
00:36:29.165 [2024-11-18 12:04:54.964243] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:29.423 [2024-11-18 12:04:55.100037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:29.989 12:04:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:29.989 12:04:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:36:29.989 12:04:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:36:29.989 12:04:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:36:29.989 12:04:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:30.556 12:04:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:30.556 12:04:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:31.123 nvme0n1 00:36:31.123 12:04:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:36:31.123 12:04:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:31.382 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:31.382 Zero copy mechanism will not be used. 00:36:31.382 Running I/O for 2 seconds... 
00:36:33.259 4836.00 IOPS, 604.50 MiB/s [2024-11-18T11:04:59.144Z] 4893.50 IOPS, 611.69 MiB/s 00:36:33.259 Latency(us) 00:36:33.259 [2024-11-18T11:04:59.144Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:33.259 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:36:33.259 nvme0n1 : 2.00 4893.14 611.64 0.00 0.00 3263.68 1116.54 13107.20 00:36:33.259 [2024-11-18T11:04:59.144Z] =================================================================================================================== 00:36:33.259 [2024-11-18T11:04:59.144Z] Total : 4893.14 611.64 0.00 0.00 3263.68 1116.54 13107.20 00:36:33.259 { 00:36:33.259 "results": [ 00:36:33.259 { 00:36:33.259 "job": "nvme0n1", 00:36:33.259 "core_mask": "0x2", 00:36:33.259 "workload": "randread", 00:36:33.259 "status": "finished", 00:36:33.259 "queue_depth": 16, 00:36:33.259 "io_size": 131072, 00:36:33.259 "runtime": 2.003418, 00:36:33.259 "iops": 4893.1376277941, 00:36:33.259 "mibps": 611.6422034742625, 00:36:33.259 "io_failed": 0, 00:36:33.259 "io_timeout": 0, 00:36:33.259 "avg_latency_us": 3263.681335947801, 00:36:33.259 "min_latency_us": 1116.5392592592593, 00:36:33.259 "max_latency_us": 13107.2 00:36:33.259 } 00:36:33.259 ], 00:36:33.259 "core_count": 1 00:36:33.259 } 00:36:33.259 12:04:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:36:33.259 12:04:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:36:33.259 12:04:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:36:33.259 12:04:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:36:33.259 12:04:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:36:33.259 | 
select(.opcode=="crc32c") 00:36:33.259 | "\(.module_name) \(.executed)"' 00:36:33.517 12:04:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:36:33.517 12:04:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:36:33.517 12:04:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:36:33.517 12:04:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:36:33.517 12:04:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3130228 00:36:33.517 12:04:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3130228 ']' 00:36:33.517 12:04:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3130228 00:36:33.517 12:04:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:36:33.517 12:04:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:33.517 12:04:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3130228 00:36:33.517 12:04:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:33.517 12:04:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:33.517 12:04:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3130228' 00:36:33.517 killing process with pid 3130228 00:36:33.517 12:04:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3130228 00:36:33.517 Received shutdown signal, test time was about 2.000000 seconds 00:36:33.517 00:36:33.517 Latency(us) 
00:36:33.517 [2024-11-18T11:04:59.402Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:33.517 [2024-11-18T11:04:59.402Z] =================================================================================================================== 00:36:33.517 [2024-11-18T11:04:59.402Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:33.517 12:04:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3130228 00:36:34.452 12:05:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:36:34.452 12:05:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:36:34.452 12:05:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:36:34.452 12:05:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:36:34.452 12:05:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:36:34.452 12:05:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:36:34.452 12:05:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:36:34.452 12:05:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3130893 00:36:34.452 12:05:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:36:34.452 12:05:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3130893 /var/tmp/bperf.sock 00:36:34.452 12:05:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3130893 ']' 00:36:34.452 12:05:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:34.452 12:05:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:34.452 12:05:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:34.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:34.452 12:05:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:34.452 12:05:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:34.452 [2024-11-18 12:05:00.324106] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:36:34.452 [2024-11-18 12:05:00.324248] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3130893 ] 00:36:34.712 [2024-11-18 12:05:00.464010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:34.712 [2024-11-18 12:05:00.598163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:35.650 12:05:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:35.650 12:05:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:36:35.650 12:05:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:36:35.650 12:05:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:36:35.650 12:05:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:36.215 12:05:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:36.215 12:05:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:36.783 nvme0n1 00:36:36.783 12:05:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:36:36.783 12:05:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:36.783 Running I/O for 2 seconds... 
00:36:38.661 15783.00 IOPS, 61.65 MiB/s [2024-11-18T11:05:04.546Z] 15685.50 IOPS, 61.27 MiB/s 00:36:38.661 Latency(us) 00:36:38.661 [2024-11-18T11:05:04.546Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:38.661 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:38.661 nvme0n1 : 2.01 15701.37 61.33 0.00 0.00 8142.02 4320.52 14660.65 00:36:38.661 [2024-11-18T11:05:04.546Z] =================================================================================================================== 00:36:38.661 [2024-11-18T11:05:04.546Z] Total : 15701.37 61.33 0.00 0.00 8142.02 4320.52 14660.65 00:36:38.661 { 00:36:38.661 "results": [ 00:36:38.661 { 00:36:38.661 "job": "nvme0n1", 00:36:38.661 "core_mask": "0x2", 00:36:38.661 "workload": "randwrite", 00:36:38.661 "status": "finished", 00:36:38.661 "queue_depth": 128, 00:36:38.661 "io_size": 4096, 00:36:38.661 "runtime": 2.006131, 00:36:38.661 "iops": 15701.367458057326, 00:36:38.661 "mibps": 61.33346663303643, 00:36:38.661 "io_failed": 0, 00:36:38.661 "io_timeout": 0, 00:36:38.661 "avg_latency_us": 8142.020374638583, 00:36:38.661 "min_latency_us": 4320.521481481482, 00:36:38.661 "max_latency_us": 14660.645925925926 00:36:38.661 } 00:36:38.661 ], 00:36:38.661 "core_count": 1 00:36:38.661 } 00:36:38.920 12:05:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:36:38.920 12:05:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:36:38.920 12:05:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:36:38.920 12:05:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:36:38.920 12:05:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 
00:36:38.920 | select(.opcode=="crc32c") 00:36:38.920 | "\(.module_name) \(.executed)"' 00:36:39.178 12:05:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:36:39.178 12:05:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:36:39.178 12:05:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:36:39.178 12:05:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:36:39.178 12:05:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3130893 00:36:39.178 12:05:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3130893 ']' 00:36:39.178 12:05:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3130893 00:36:39.178 12:05:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:36:39.178 12:05:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:39.178 12:05:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3130893 00:36:39.178 12:05:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:39.178 12:05:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:39.178 12:05:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3130893' 00:36:39.178 killing process with pid 3130893 00:36:39.178 12:05:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3130893 00:36:39.178 Received shutdown signal, test time was about 2.000000 seconds 00:36:39.178 
00:36:39.178 Latency(us) 00:36:39.178 [2024-11-18T11:05:05.063Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:39.178 [2024-11-18T11:05:05.063Z] =================================================================================================================== 00:36:39.178 [2024-11-18T11:05:05.063Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:39.178 12:05:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3130893 00:36:40.113 12:05:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:36:40.113 12:05:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:36:40.113 12:05:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:36:40.113 12:05:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:36:40.113 12:05:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:36:40.113 12:05:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:36:40.113 12:05:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:36:40.113 12:05:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3131552 00:36:40.113 12:05:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3131552 /var/tmp/bperf.sock 00:36:40.113 12:05:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:36:40.113 12:05:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3131552 ']' 00:36:40.113 12:05:05 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:40.113 12:05:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:40.113 12:05:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:40.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:40.113 12:05:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:40.113 12:05:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:40.113 [2024-11-18 12:05:05.858427] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:36:40.113 [2024-11-18 12:05:05.858591] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3131552 ] 00:36:40.113 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:40.113 Zero copy mechanism will not be used. 
00:36:40.371 [2024-11-18 12:05:06.012293] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:40.371 [2024-11-18 12:05:06.147282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:40.936 12:05:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:40.936 12:05:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:36:40.936 12:05:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:36:40.936 12:05:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:36:40.936 12:05:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:41.870 12:05:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:41.870 12:05:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:42.128 nvme0n1 00:36:42.128 12:05:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:36:42.128 12:05:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:42.386 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:42.386 Zero copy mechanism will not be used. 00:36:42.386 Running I/O for 2 seconds... 
00:36:44.254 4361.00 IOPS, 545.12 MiB/s [2024-11-18T11:05:10.139Z] 4325.50 IOPS, 540.69 MiB/s 00:36:44.254 Latency(us) 00:36:44.254 [2024-11-18T11:05:10.139Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:44.254 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:36:44.254 nvme0n1 : 2.01 4322.20 540.28 0.00 0.00 3690.50 2985.53 12427.57 00:36:44.254 [2024-11-18T11:05:10.139Z] =================================================================================================================== 00:36:44.254 [2024-11-18T11:05:10.139Z] Total : 4322.20 540.28 0.00 0.00 3690.50 2985.53 12427.57 00:36:44.254 { 00:36:44.254 "results": [ 00:36:44.254 { 00:36:44.254 "job": "nvme0n1", 00:36:44.254 "core_mask": "0x2", 00:36:44.254 "workload": "randwrite", 00:36:44.254 "status": "finished", 00:36:44.254 "queue_depth": 16, 00:36:44.254 "io_size": 131072, 00:36:44.254 "runtime": 2.006152, 00:36:44.254 "iops": 4322.2048977345685, 00:36:44.254 "mibps": 540.2756122168211, 00:36:44.254 "io_failed": 0, 00:36:44.254 "io_timeout": 0, 00:36:44.254 "avg_latency_us": 3690.502000965329, 00:36:44.254 "min_latency_us": 2985.528888888889, 00:36:44.254 "max_latency_us": 12427.567407407407 00:36:44.254 } 00:36:44.254 ], 00:36:44.254 "core_count": 1 00:36:44.254 } 00:36:44.254 12:05:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:36:44.254 12:05:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:36:44.254 12:05:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:36:44.254 12:05:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:36:44.254 12:05:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 
00:36:44.254 | select(.opcode=="crc32c") 00:36:44.254 | "\(.module_name) \(.executed)"' 00:36:44.512 12:05:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:36:44.512 12:05:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:36:44.512 12:05:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:36:44.512 12:05:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:36:44.512 12:05:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3131552 00:36:44.512 12:05:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3131552 ']' 00:36:44.512 12:05:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3131552 00:36:44.512 12:05:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:36:44.512 12:05:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:44.512 12:05:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3131552 00:36:44.512 12:05:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:44.512 12:05:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:44.512 12:05:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3131552' 00:36:44.512 killing process with pid 3131552 00:36:44.512 12:05:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3131552 00:36:44.512 Received shutdown signal, test time was about 2.000000 seconds 00:36:44.512 
00:36:44.512 Latency(us) 00:36:44.512 [2024-11-18T11:05:10.397Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:44.512 [2024-11-18T11:05:10.397Z] =================================================================================================================== 00:36:44.512 [2024-11-18T11:05:10.397Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:44.512 12:05:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3131552 00:36:45.448 12:05:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 3129412 00:36:45.448 12:05:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3129412 ']' 00:36:45.448 12:05:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3129412 00:36:45.448 12:05:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:36:45.448 12:05:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:45.448 12:05:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3129412 00:36:45.448 12:05:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:45.448 12:05:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:45.448 12:05:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3129412' 00:36:45.448 killing process with pid 3129412 00:36:45.448 12:05:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3129412 00:36:45.448 12:05:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3129412 00:36:46.823 00:36:46.823 real 
0m24.742s 00:36:46.823 user 0m48.453s 00:36:46.823 sys 0m4.593s 00:36:46.823 12:05:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:46.823 12:05:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:46.823 ************************************ 00:36:46.823 END TEST nvmf_digest_clean 00:36:46.823 ************************************ 00:36:46.823 12:05:12 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:36:46.823 12:05:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:46.823 12:05:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:46.823 12:05:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:36:46.823 ************************************ 00:36:46.823 START TEST nvmf_digest_error 00:36:46.823 ************************************ 00:36:46.823 12:05:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:36:46.823 12:05:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:36:46.823 12:05:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:46.823 12:05:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:46.823 12:05:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:46.823 12:05:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=3132321 00:36:46.823 12:05:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:36:46.823 
12:05:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 3132321 00:36:46.823 12:05:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3132321 ']' 00:36:46.823 12:05:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:46.823 12:05:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:46.823 12:05:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:46.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:46.823 12:05:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:46.823 12:05:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:46.823 [2024-11-18 12:05:12.622890] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:36:46.823 [2024-11-18 12:05:12.623028] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:47.081 [2024-11-18 12:05:12.776878] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:47.081 [2024-11-18 12:05:12.917531] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:47.082 [2024-11-18 12:05:12.917620] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:36:47.082 [2024-11-18 12:05:12.917646] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:47.082 [2024-11-18 12:05:12.917671] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:47.082 [2024-11-18 12:05:12.917690] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:47.082 [2024-11-18 12:05:12.919406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:48.121 12:05:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:48.121 12:05:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:36:48.121 12:05:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:48.121 12:05:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:48.121 12:05:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:48.121 12:05:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:48.121 12:05:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:36:48.121 12:05:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.121 12:05:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:48.121 [2024-11-18 12:05:13.650164] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:36:48.121 12:05:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.121 12:05:13 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:36:48.121 12:05:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:36:48.121 12:05:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.121 12:05:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:48.121 null0 00:36:48.121 [2024-11-18 12:05:13.995785] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:48.379 [2024-11-18 12:05:14.020058] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:48.379 12:05:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.379 12:05:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:36:48.379 12:05:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:36:48.379 12:05:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:36:48.379 12:05:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:36:48.379 12:05:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:36:48.379 12:05:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3132536 00:36:48.379 12:05:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3132536 /var/tmp/bperf.sock 00:36:48.379 12:05:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:36:48.379 12:05:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3132536 ']' 
00:36:48.379 12:05:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:48.379 12:05:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:48.379 12:05:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:48.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:48.379 12:05:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:48.379 12:05:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:48.379 [2024-11-18 12:05:14.107359] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:36:48.379 [2024-11-18 12:05:14.107535] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3132536 ] 00:36:48.379 [2024-11-18 12:05:14.241867] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:48.637 [2024-11-18 12:05:14.374335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:49.569 12:05:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:49.569 12:05:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:36:49.569 12:05:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:49.569 12:05:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:49.569 12:05:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:36:49.569 12:05:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:49.569 12:05:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:49.569 12:05:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:49.569 12:05:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:49.569 12:05:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:50.134 nvme0n1 00:36:50.134 12:05:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:36:50.134 12:05:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:50.134 12:05:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:50.134 12:05:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:50.134 12:05:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:36:50.134 12:05:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:50.134 Running I/O for 2 seconds... 00:36:50.134 [2024-11-18 12:05:15.897235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.134 [2024-11-18 12:05:15.897311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24365 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.134 [2024-11-18 12:05:15.897344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.134 [2024-11-18 12:05:15.921129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.134 [2024-11-18 12:05:15.921182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:2124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.134 [2024-11-18 12:05:15.921223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.134 [2024-11-18 12:05:15.939519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.134 [2024-11-18 12:05:15.939579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:20525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.134 [2024-11-18 12:05:15.939603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.134 [2024-11-18 12:05:15.955574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.134 [2024-11-18 12:05:15.955615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 
nsid:1 lba:12190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.134 [2024-11-18 12:05:15.955639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.134 [2024-11-18 12:05:15.976983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.134 [2024-11-18 12:05:15.977033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:17519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.134 [2024-11-18 12:05:15.977061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.134 [2024-11-18 12:05:15.996075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.134 [2024-11-18 12:05:15.996123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.134 [2024-11-18 12:05:15.996153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.134 [2024-11-18 12:05:16.016033] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.134 [2024-11-18 12:05:16.016078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24886 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.134 [2024-11-18 12:05:16.016121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.392 [2024-11-18 12:05:16.031976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.392 [2024-11-18 
12:05:16.032024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:1426 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.392 [2024-11-18 12:05:16.032053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.392 [2024-11-18 12:05:16.050762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.392 [2024-11-18 12:05:16.050831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:20609 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.392 [2024-11-18 12:05:16.050861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.392 [2024-11-18 12:05:16.070690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.392 [2024-11-18 12:05:16.070734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:22170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.392 [2024-11-18 12:05:16.070791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.392 [2024-11-18 12:05:16.089568] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.392 [2024-11-18 12:05:16.089626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:25010 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.392 [2024-11-18 12:05:16.089652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.392 [2024-11-18 12:05:16.107420] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.392 [2024-11-18 12:05:16.107467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:25024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.392 [2024-11-18 12:05:16.107505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.392 [2024-11-18 12:05:16.123086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.392 [2024-11-18 12:05:16.123134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:24899 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.392 [2024-11-18 12:05:16.123163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.392 [2024-11-18 12:05:16.141867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.392 [2024-11-18 12:05:16.141915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.392 [2024-11-18 12:05:16.141945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.392 [2024-11-18 12:05:16.160143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.392 [2024-11-18 12:05:16.160190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.392 [2024-11-18 12:05:16.160219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.392 [2024-11-18 
12:05:16.180935] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.392 [2024-11-18 12:05:16.180983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:16647 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.392 [2024-11-18 12:05:16.181013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.392 [2024-11-18 12:05:16.196600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.392 [2024-11-18 12:05:16.196639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23041 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.392 [2024-11-18 12:05:16.196663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.392 [2024-11-18 12:05:16.217895] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.392 [2024-11-18 12:05:16.217943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13829 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.392 [2024-11-18 12:05:16.217973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.392 [2024-11-18 12:05:16.239267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.392 [2024-11-18 12:05:16.239314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:12485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.392 [2024-11-18 12:05:16.239352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.392 [2024-11-18 12:05:16.259650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.392 [2024-11-18 12:05:16.259692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:1082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.392 [2024-11-18 12:05:16.259719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.392 [2024-11-18 12:05:16.275363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.392 [2024-11-18 12:05:16.275410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:376 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.392 [2024-11-18 12:05:16.275439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.650 [2024-11-18 12:05:16.294182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.650 [2024-11-18 12:05:16.294231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:16380 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.650 [2024-11-18 12:05:16.294260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.650 [2024-11-18 12:05:16.314382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.650 [2024-11-18 12:05:16.314430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:25175 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.650 [2024-11-18 12:05:16.314459] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.650 [2024-11-18 12:05:16.332861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.650 [2024-11-18 12:05:16.332909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:23067 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.650 [2024-11-18 12:05:16.332938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.650 [2024-11-18 12:05:16.353504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.650 [2024-11-18 12:05:16.353572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12247 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.650 [2024-11-18 12:05:16.353598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.650 [2024-11-18 12:05:16.369796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.650 [2024-11-18 12:05:16.369857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:23323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.650 [2024-11-18 12:05:16.369886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.650 [2024-11-18 12:05:16.386769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.650 [2024-11-18 12:05:16.386808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:3410 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.650 [2024-11-18 12:05:16.386853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.650 [2024-11-18 12:05:16.404629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.650 [2024-11-18 12:05:16.404669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.650 [2024-11-18 12:05:16.404693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.650 [2024-11-18 12:05:16.425891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.650 [2024-11-18 12:05:16.425938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:5307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.650 [2024-11-18 12:05:16.425967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.650 [2024-11-18 12:05:16.443934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.650 [2024-11-18 12:05:16.443982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:24139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.650 [2024-11-18 12:05:16.444011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.650 [2024-11-18 12:05:16.459857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.650 [2024-11-18 12:05:16.459904] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:15690 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.650 [2024-11-18 12:05:16.459933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.650 [2024-11-18 12:05:16.478239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.650 [2024-11-18 12:05:16.478286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15003 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.650 [2024-11-18 12:05:16.478315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.650 [2024-11-18 12:05:16.495861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.650 [2024-11-18 12:05:16.495909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:22832 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.650 [2024-11-18 12:05:16.495938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.650 [2024-11-18 12:05:16.515144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.650 [2024-11-18 12:05:16.515192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:3431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.650 [2024-11-18 12:05:16.515220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.651 [2024-11-18 12:05:16.533905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x6150001f2a00) 00:36:50.651 [2024-11-18 12:05:16.533953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:7749 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.651 [2024-11-18 12:05:16.533983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.909 [2024-11-18 12:05:16.554122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.909 [2024-11-18 12:05:16.554170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:1645 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.909 [2024-11-18 12:05:16.554209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.909 [2024-11-18 12:05:16.575503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.909 [2024-11-18 12:05:16.575565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:14774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.909 [2024-11-18 12:05:16.575591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.909 [2024-11-18 12:05:16.591206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.909 [2024-11-18 12:05:16.591255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10040 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.909 [2024-11-18 12:05:16.591284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.909 [2024-11-18 12:05:16.610867] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.909 [2024-11-18 12:05:16.610915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:19128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.909 [2024-11-18 12:05:16.610945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.909 [2024-11-18 12:05:16.631095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.909 [2024-11-18 12:05:16.631142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.909 [2024-11-18 12:05:16.631171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.909 [2024-11-18 12:05:16.648383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.909 [2024-11-18 12:05:16.648430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:18488 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.909 [2024-11-18 12:05:16.648460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.909 [2024-11-18 12:05:16.663262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.909 [2024-11-18 12:05:16.663309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:17270 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.909 [2024-11-18 12:05:16.663339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.909 [2024-11-18 12:05:16.682034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.909 [2024-11-18 12:05:16.682081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:23945 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.909 [2024-11-18 12:05:16.682110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.909 [2024-11-18 12:05:16.701419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.909 [2024-11-18 12:05:16.701467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:3004 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.909 [2024-11-18 12:05:16.701520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.909 [2024-11-18 12:05:16.721720] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.909 [2024-11-18 12:05:16.721775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:13845 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.909 [2024-11-18 12:05:16.721801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.909 [2024-11-18 12:05:16.738740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.909 [2024-11-18 12:05:16.738795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:9420 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.909 [2024-11-18 12:05:16.738835] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.909 [2024-11-18 12:05:16.760625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.909 [2024-11-18 12:05:16.760669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:18446 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.909 [2024-11-18 12:05:16.760695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.909 [2024-11-18 12:05:16.776518] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.909 [2024-11-18 12:05:16.776572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:17421 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.909 [2024-11-18 12:05:16.776596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:51.167 [2024-11-18 12:05:16.796183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.167 [2024-11-18 12:05:16.796223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:11803 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.167 [2024-11-18 12:05:16.796247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:51.167 [2024-11-18 12:05:16.814247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.167 [2024-11-18 12:05:16.814294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:15059 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:36:51.167 [2024-11-18 12:05:16.814324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:51.167
[2024-11-18 12:05:16.832355] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.167
[2024-11-18 12:05:16.832401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:17420 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.167
[2024-11-18 12:05:16.832430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:51.167
13444.00 IOPS, 52.52 MiB/s [2024-11-18T11:05:17.053Z]
[... repeated injected-digest-error triples elided: data digest error on tqpair=(0x6150001f2a00) (nvme_tcp.c:1365) -> READ command print -> COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, for further READs at timestamps 12:05:16.849 through 12:05:17.845 ...]
[2024-11-18 12:05:17.863625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:52.203
[2024-11-18 12:05:17.863684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:15318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.203
[2024-11-18 12:05:17.863710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1
cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:52.203 13696.00 IOPS, 53.50 MiB/s 00:36:52.203 Latency(us) 00:36:52.203 [2024-11-18T11:05:18.088Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:52.203 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:36:52.203 nvme0n1 : 2.00 13726.06 53.62 0.00 0.00 9313.29 4611.79 25437.68 00:36:52.203 [2024-11-18T11:05:18.088Z] =================================================================================================================== 00:36:52.203 [2024-11-18T11:05:18.088Z] Total : 13726.06 53.62 0.00 0.00 9313.29 4611.79 25437.68 00:36:52.203 { 00:36:52.203 "results": [ 00:36:52.203 { 00:36:52.203 "job": "nvme0n1", 00:36:52.203 "core_mask": "0x2", 00:36:52.203 "workload": "randread", 00:36:52.203 "status": "finished", 00:36:52.203 "queue_depth": 128, 00:36:52.203 "io_size": 4096, 00:36:52.203 "runtime": 2.004945, 00:36:52.203 "iops": 13726.06231093621, 00:36:52.203 "mibps": 53.61743090209457, 00:36:52.203 "io_failed": 0, 00:36:52.203 "io_timeout": 0, 00:36:52.203 "avg_latency_us": 9313.292347975883, 00:36:52.203 "min_latency_us": 4611.792592592593, 00:36:52.203 "max_latency_us": 25437.677037037036 00:36:52.203 } 00:36:52.203 ], 00:36:52.203 "core_count": 1 00:36:52.203 } 00:36:52.203 12:05:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:36:52.203 12:05:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:36:52.203 12:05:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:36:52.203 12:05:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:36:52.204 | .driver_specific 00:36:52.204 | .nvme_error 00:36:52.204 | .status_code 00:36:52.204 | 
.command_transient_transport_error' 00:36:52.461 12:05:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 107 > 0 )) 00:36:52.461 12:05:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3132536 00:36:52.461 12:05:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3132536 ']' 00:36:52.461 12:05:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3132536 00:36:52.461 12:05:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:36:52.461 12:05:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:52.461 12:05:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3132536 00:36:52.461 12:05:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:52.461 12:05:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:52.461 12:05:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3132536' 00:36:52.461 killing process with pid 3132536 00:36:52.461 12:05:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3132536 00:36:52.461 Received shutdown signal, test time was about 2.000000 seconds 00:36:52.461 00:36:52.461 Latency(us) 00:36:52.461 [2024-11-18T11:05:18.346Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:52.461 [2024-11-18T11:05:18.346Z] =================================================================================================================== 00:36:52.461 [2024-11-18T11:05:18.346Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:52.461 12:05:18 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3132536 00:36:53.395 12:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:36:53.395 12:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:36:53.395 12:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:36:53.395 12:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:36:53.395 12:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:36:53.395 12:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3133079 00:36:53.395 12:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:36:53.395 12:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3133079 /var/tmp/bperf.sock 00:36:53.395 12:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3133079 ']' 00:36:53.395 12:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:53.395 12:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:53.395 12:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:53.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
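The `get_transient_errcount` helper traced above reads `bdev_get_iostat` over the bperf RPC socket and filters the reply with jq (`.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error`). The same lookup can be sketched in Python; the sample payload below is illustrative, reduced to only the fields that jq path touches, with the count 107 taken from the `(( 107 > 0 ))` check in this log:

```python
import json

# Illustrative bdev_get_iostat reply, trimmed to the fields the jq filter reads.
iostat = json.loads("""
{"bdevs": [{"name": "nvme0n1",
            "driver_specific": {"nvme_error": {"status_code": {
                "command_transient_transport_error": 107}}}}]}
""")

def get_transient_errcount(stats: dict) -> int:
    # Mirrors: jq -r '.bdevs[0] | .driver_specific | .nvme_error
    #                 | .status_code | .command_transient_transport_error'
    return stats["bdevs"][0]["driver_specific"]["nvme_error"][
        "status_code"]["command_transient_transport_error"]

print(get_transient_errcount(iostat))  # 107
```

The test then passes as long as at least one completion carried the transient transport error status, which is exactly what the `(( 107 > 0 ))` assertion above checks.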
00:36:53.395 12:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:36:53.395 12:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:36:53.395 [2024-11-18 12:05:19.170006] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization...
00:36:53.395 [2024-11-18 12:05:19.170138] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3133079 ]
00:36:53.395 I/O size of 131072 is greater than zero copy threshold (65536).
00:36:53.395 Zero copy mechanism will not be used.
00:36:53.653 [2024-11-18 12:05:19.313066] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:36:53.653 [2024-11-18 12:05:19.449366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:36:54.586 12:05:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:36:54.586 12:05:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:36:54.586 12:05:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:36:54.586 12:05:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:36:54.586 12:05:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:36:54.586 12:05:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:54.586 12:05:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:36:54.586 12:05:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:54.586 12:05:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:36:54.586 12:05:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:36:55.152 nvme0n1
00:36:55.153 12:05:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:36:55.153 12:05:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:55.153 12:05:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:36:55.153 12:05:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:55.153 12:05:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:36:55.153 12:05:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:36:55.153 I/O size of 131072 is greater than zero copy threshold (65536).
00:36:55.153 Zero copy mechanism will not be used.
00:36:55.153 Running I/O for 2 seconds...
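The flood of "data digest error" records that follows is this test's intended outcome: `accel_error_inject_error -o crc32c -t corrupt -i 32` corrupts CRC32C results, so the host-side check of each received data PDU's digest fails and the read completes with a transient transport error. NVMe/TCP data digests are CRC-32C (Castagnoli); the bitwise helper below is a minimal illustration of that mismatch check, not SPDK's (table/instruction-accelerated) implementation:

```python
def crc32c(data: bytes, crc: int = 0) -> int:
    """Bitwise CRC-32C (Castagnoli), reflected polynomial 0x82F63B78."""
    crc ^= 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF

payload = b"example C2HData PDU payload"
sent_digest = crc32c(payload)

# Corrupting either the payload or the computed digest breaks the equality
# the receiver checks -- the condition nvme_tcp logs as "data digest error".
corrupted = bytes([payload[0] ^ 0x01]) + payload[1:]
assert crc32c(corrupted) != sent_digest
```

With the injection active, every completed read trips this check, which is why each `*ERROR*: data digest error` line below is paired with a `COMMAND TRANSIENT TRANSPORT ERROR (00/22)` completion.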
00:36:55.153 [2024-11-18 12:05:20.941843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.153 [2024-11-18 12:05:20.941934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.153 [2024-11-18 12:05:20.941963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:55.153 [2024-11-18 12:05:20.948631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.153 [2024-11-18 12:05:20.948677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.153 [2024-11-18 12:05:20.948705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:55.153 [2024-11-18 12:05:20.955307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.153 [2024-11-18 12:05:20.955352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.153 [2024-11-18 12:05:20.955379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:55.153 [2024-11-18 12:05:20.962010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.153 [2024-11-18 12:05:20.962055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.153 [2024-11-18 12:05:20.962081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:55.153 [2024-11-18 12:05:20.968454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.153 [2024-11-18 12:05:20.968506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.153 [2024-11-18 12:05:20.968540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:55.153 [2024-11-18 12:05:20.975261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.153 [2024-11-18 12:05:20.975319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.153 [2024-11-18 12:05:20.975346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:55.153 [2024-11-18 12:05:20.981834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.153 [2024-11-18 12:05:20.981878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.153 [2024-11-18 12:05:20.981904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:55.153 [2024-11-18 12:05:20.988443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.153 [2024-11-18 12:05:20.988487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.153 [2024-11-18 
12:05:20.988524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:55.153 [2024-11-18 12:05:20.992213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.153 [2024-11-18 12:05:20.992254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.153 [2024-11-18 12:05:20.992281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:55.153 [2024-11-18 12:05:20.998051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.153 [2024-11-18 12:05:20.998093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.153 [2024-11-18 12:05:20.998154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:55.153 [2024-11-18 12:05:21.002063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.153 [2024-11-18 12:05:21.002108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.153 [2024-11-18 12:05:21.002153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:55.153 [2024-11-18 12:05:21.007026] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.153 [2024-11-18 12:05:21.007083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 
lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.153 [2024-11-18 12:05:21.007110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:55.153 [2024-11-18 12:05:21.012282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.153 [2024-11-18 12:05:21.012322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.153 [2024-11-18 12:05:21.012357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:55.153 [2024-11-18 12:05:21.016001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.153 [2024-11-18 12:05:21.016042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.153 [2024-11-18 12:05:21.016068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:55.153 [2024-11-18 12:05:21.020283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.153 [2024-11-18 12:05:21.020324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.153 [2024-11-18 12:05:21.020350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:55.153 [2024-11-18 12:05:21.025426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.153 [2024-11-18 12:05:21.025484] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.153 [2024-11-18 12:05:21.025520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:55.153 [2024-11-18 12:05:21.029578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.153 [2024-11-18 12:05:21.029623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.153 [2024-11-18 12:05:21.029667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:55.153 [2024-11-18 12:05:21.035232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.153 [2024-11-18 12:05:21.035290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.153 [2024-11-18 12:05:21.035316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:55.412 [2024-11-18 12:05:21.041066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.412 [2024-11-18 12:05:21.041108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.412 [2024-11-18 12:05:21.041135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:55.413 [2024-11-18 12:05:21.047063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x6150001f2a00) 00:36:55.413 [2024-11-18 12:05:21.047104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.413 [2024-11-18 12:05:21.047145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:55.413 [2024-11-18 12:05:21.053127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.413 [2024-11-18 12:05:21.053186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.413 [2024-11-18 12:05:21.053211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:55.413 [2024-11-18 12:05:21.059455] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.413 [2024-11-18 12:05:21.059503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.413 [2024-11-18 12:05:21.059532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:55.413 [2024-11-18 12:05:21.065604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.413 [2024-11-18 12:05:21.065644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.413 [2024-11-18 12:05:21.065686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:55.413 [2024-11-18 
12:05:21.071985] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.413 [2024-11-18 12:05:21.072026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.413 [2024-11-18 12:05:21.072067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:55.413 [2024-11-18 12:05:21.078169] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.413 [2024-11-18 12:05:21.078211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.413 [2024-11-18 12:05:21.078237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:55.413 [2024-11-18 12:05:21.084478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.413 [2024-11-18 12:05:21.084545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.413 [2024-11-18 12:05:21.084572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:55.413 [2024-11-18 12:05:21.090684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.413 [2024-11-18 12:05:21.090726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.413 [2024-11-18 12:05:21.090753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:55.413 [2024-11-18 12:05:21.097004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.413 [2024-11-18 12:05:21.097046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.413 [2024-11-18 12:05:21.097086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:55.413 [2024-11-18 12:05:21.103280] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.413 [2024-11-18 12:05:21.103338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.413 [2024-11-18 12:05:21.103363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:55.413 [2024-11-18 12:05:21.109332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.413 [2024-11-18 12:05:21.109389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.413 [2024-11-18 12:05:21.109438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:55.413 [2024-11-18 12:05:21.115553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.413 [2024-11-18 12:05:21.115611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.413 [2024-11-18 12:05:21.115636] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:55.413 [2024-11-18 12:05:21.121978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.413 [2024-11-18 12:05:21.122020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.413 [2024-11-18 12:05:21.122046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:55.413 [2024-11-18 12:05:21.128258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.413 [2024-11-18 12:05:21.128318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.413 [2024-11-18 12:05:21.128346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:55.413 [2024-11-18 12:05:21.134934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.413 [2024-11-18 12:05:21.134994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.413 [2024-11-18 12:05:21.135020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:55.413 [2024-11-18 12:05:21.141837] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.413 [2024-11-18 12:05:21.141884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10624 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.413 [2024-11-18 12:05:21.141912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:55.413 [2024-11-18 12:05:21.149114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.413 [2024-11-18 12:05:21.149160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.413 [2024-11-18 12:05:21.149187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:55.413 [2024-11-18 12:05:21.154572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.413 [2024-11-18 12:05:21.154616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.413 [2024-11-18 12:05:21.154643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:55.413 [2024-11-18 12:05:21.160229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.413 [2024-11-18 12:05:21.160288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.413 [2024-11-18 12:05:21.160315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:55.413 [2024-11-18 12:05:21.167643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.413 [2024-11-18 12:05:21.167708] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.413 [2024-11-18 12:05:21.167736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:55.413 [2024-11-18 12:05:21.175190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.413 [2024-11-18 12:05:21.175239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.413 [2024-11-18 12:05:21.175269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:55.413 [2024-11-18 12:05:21.183912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.413 [2024-11-18 12:05:21.183961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.413 [2024-11-18 12:05:21.183992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:55.413 [2024-11-18 12:05:21.192265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.413 [2024-11-18 12:05:21.192325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.413 [2024-11-18 12:05:21.192357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:55.413 [2024-11-18 12:05:21.201676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x6150001f2a00) 00:36:55.413 [2024-11-18 12:05:21.201738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.413 [2024-11-18 12:05:21.201765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:55.413 [2024-11-18 12:05:21.210709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.414 [2024-11-18 12:05:21.210753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.414 [2024-11-18 12:05:21.210781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:55.414 [2024-11-18 12:05:21.220061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.414 [2024-11-18 12:05:21.220121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.414 [2024-11-18 12:05:21.220146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:55.414 [2024-11-18 12:05:21.226085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.414 [2024-11-18 12:05:21.226129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.414 [2024-11-18 12:05:21.226157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:55.414 [2024-11-18 12:05:21.233968] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.414 [2024-11-18 12:05:21.234023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.414 [2024-11-18 12:05:21.234058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:55.414 [2024-11-18 12:05:21.242756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.414 [2024-11-18 12:05:21.242801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.414 [2024-11-18 12:05:21.242828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:55.414 [2024-11-18 12:05:21.251598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.414 [2024-11-18 12:05:21.251655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.414 [2024-11-18 12:05:21.251697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:55.414 [2024-11-18 12:05:21.260769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.414 [2024-11-18 12:05:21.260826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.414 [2024-11-18 12:05:21.260867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:55.414 [2024-11-18 12:05:21.269528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.414 [2024-11-18 12:05:21.269585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.414 [2024-11-18 12:05:21.269611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:55.414 [2024-11-18 12:05:21.278367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.414 [2024-11-18 12:05:21.278422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.414 [2024-11-18 12:05:21.278463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:55.414 [2024-11-18 12:05:21.287928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.414 [2024-11-18 12:05:21.288001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.414 [2024-11-18 12:05:21.288043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:55.414 [2024-11-18 12:05:21.296395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.414 [2024-11-18 12:05:21.296440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.414 [2024-11-18 12:05:21.296467] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:55.673 [2024-11-18 12:05:21.303407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.673 [2024-11-18 12:05:21.303465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.673 [2024-11-18 12:05:21.303498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:55.673 [2024-11-18 12:05:21.310110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.673 [2024-11-18 12:05:21.310176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.673 [2024-11-18 12:05:21.310202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:55.673 [2024-11-18 12:05:21.317167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.673 [2024-11-18 12:05:21.317225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.673 [2024-11-18 12:05:21.317251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:55.673 [2024-11-18 12:05:21.324011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.673 [2024-11-18 12:05:21.324068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5152 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:36:55.673 [2024-11-18 12:05:21.324112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:55.673 [2024-11-18 12:05:21.331059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.673 [2024-11-18 12:05:21.331115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.673 [2024-11-18 12:05:21.331142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:55.673 [2024-11-18 12:05:21.338530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.673 [2024-11-18 12:05:21.338592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.673 [2024-11-18 12:05:21.338619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:55.673 [2024-11-18 12:05:21.345786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.673 [2024-11-18 12:05:21.345842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.673 [2024-11-18 12:05:21.345870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:55.673 [2024-11-18 12:05:21.353663] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.673 [2024-11-18 12:05:21.353720] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.673 [2024-11-18 12:05:21.353748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:55.673 [2024-11-18 12:05:21.360617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.673 [2024-11-18 12:05:21.360672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.673 [2024-11-18 12:05:21.360700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:55.673 [2024-11-18 12:05:21.367437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.673 [2024-11-18 12:05:21.367501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.673 [2024-11-18 12:05:21.367554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:55.673 [2024-11-18 12:05:21.373588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.673 [2024-11-18 12:05:21.373644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.674 [2024-11-18 12:05:21.373673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:55.674 [2024-11-18 12:05:21.377703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x6150001f2a00) 00:36:55.674 [2024-11-18 12:05:21.377756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.674 [2024-11-18 12:05:21.377782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:55.674 [2024-11-18 12:05:21.383353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.674 [2024-11-18 12:05:21.383399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.674 [2024-11-18 12:05:21.383429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:55.674 [2024-11-18 12:05:21.389457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.674 [2024-11-18 12:05:21.389519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.674 [2024-11-18 12:05:21.389547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:55.674 [2024-11-18 12:05:21.395565] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.674 [2024-11-18 12:05:21.395617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.674 [2024-11-18 12:05:21.395642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:55.674 [2024-11-18 12:05:21.401327] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.674 [2024-11-18 12:05:21.401381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.674 [2024-11-18 12:05:21.401407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:55.674 [2024-11-18 12:05:21.407310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.674 [2024-11-18 12:05:21.407364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.674 [2024-11-18 12:05:21.407391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:55.674 [2024-11-18 12:05:21.413265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.674 [2024-11-18 12:05:21.413319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.674 [2024-11-18 12:05:21.413346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:55.674 [2024-11-18 12:05:21.419230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.674 [2024-11-18 12:05:21.419292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.674 [2024-11-18 12:05:21.419320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:55.674 [2024-11-18 12:05:21.425906] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.674 [2024-11-18 12:05:21.425962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.674 [2024-11-18 12:05:21.425989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:55.674 [2024-11-18 12:05:21.432749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.674 [2024-11-18 12:05:21.432802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.674 [2024-11-18 12:05:21.432828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:55.674 [2024-11-18 12:05:21.438850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.674 [2024-11-18 12:05:21.438903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.674 [2024-11-18 12:05:21.438929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:55.674 [2024-11-18 12:05:21.444862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.674 [2024-11-18 12:05:21.444901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.674 [2024-11-18 12:05:21.444942] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:55.674 [2024-11-18 12:05:21.451104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.674 [2024-11-18 12:05:21.451161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.674 [2024-11-18 12:05:21.451189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:55.674 [2024-11-18 12:05:21.457132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.674 [2024-11-18 12:05:21.457185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.674 [2024-11-18 12:05:21.457211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:55.674 [2024-11-18 12:05:21.463251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.674 [2024-11-18 12:05:21.463306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.674 [2024-11-18 12:05:21.463332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:55.674 [2024-11-18 12:05:21.469351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.674 [2024-11-18 12:05:21.469404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:36:55.674 [2024-11-18 12:05:21.469430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:55.674 [2024-11-18 12:05:21.476145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.674 [2024-11-18 12:05:21.476199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.674 [2024-11-18 12:05:21.476225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:55.674 [2024-11-18 12:05:21.482626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.674 [2024-11-18 12:05:21.482681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.674 [2024-11-18 12:05:21.482707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:55.674 [2024-11-18 12:05:21.489808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.674 [2024-11-18 12:05:21.489861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.674 [2024-11-18 12:05:21.489888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:55.674 [2024-11-18 12:05:21.496425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.674 [2024-11-18 12:05:21.496482] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.674 [2024-11-18 12:05:21.496519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:55.674 [2024-11-18 12:05:21.503334] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.674 [2024-11-18 12:05:21.503389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.674 [2024-11-18 12:05:21.503415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:55.674 [2024-11-18 12:05:21.510215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.674 [2024-11-18 12:05:21.510270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.674 [2024-11-18 12:05:21.510295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:55.674 [2024-11-18 12:05:21.516783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.674 [2024-11-18 12:05:21.516824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.674 [2024-11-18 12:05:21.516849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:55.674 [2024-11-18 12:05:21.522914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x6150001f2a00) 00:36:55.674 [2024-11-18 12:05:21.522961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.674 [2024-11-18 12:05:21.522992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:55.674 [2024-11-18 12:05:21.528984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.674 [2024-11-18 12:05:21.529048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.674 [2024-11-18 12:05:21.529075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:55.674 [2024-11-18 12:05:21.535254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.674 [2024-11-18 12:05:21.535311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.675 [2024-11-18 12:05:21.535339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:55.675 [2024-11-18 12:05:21.541194] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.675 [2024-11-18 12:05:21.541249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.675 [2024-11-18 12:05:21.541276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:55.675 [2024-11-18 12:05:21.547345] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.675 [2024-11-18 12:05:21.547401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.675 [2024-11-18 12:05:21.547429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:55.675 [2024-11-18 12:05:21.553452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.675 [2024-11-18 12:05:21.553514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.675 [2024-11-18 12:05:21.553542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:55.934 [2024-11-18 12:05:21.559747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.934 [2024-11-18 12:05:21.559805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.934 [2024-11-18 12:05:21.559832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:55.934 [2024-11-18 12:05:21.566039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.934 [2024-11-18 12:05:21.566094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.934 [2024-11-18 12:05:21.566119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:55.934 [2024-11-18 12:05:21.572163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.934 [2024-11-18 12:05:21.572203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.934 [2024-11-18 12:05:21.572244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:55.934 [2024-11-18 12:05:21.578398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.934 [2024-11-18 12:05:21.578442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.934 [2024-11-18 12:05:21.578470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:55.934 [2024-11-18 12:05:21.584815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.934 [2024-11-18 12:05:21.584871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.934 [2024-11-18 12:05:21.584898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:55.934 [2024-11-18 12:05:21.590960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.934 [2024-11-18 12:05:21.591016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.934 [2024-11-18 12:05:21.591057] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:55.934 [2024-11-18 12:05:21.597274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.934 [2024-11-18 12:05:21.597330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.934 [2024-11-18 12:05:21.597357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:55.934 [2024-11-18 12:05:21.603519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.934 [2024-11-18 12:05:21.603572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.934 [2024-11-18 12:05:21.603598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:55.934 [2024-11-18 12:05:21.609736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.934 [2024-11-18 12:05:21.609792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.934 [2024-11-18 12:05:21.609819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:55.934 [2024-11-18 12:05:21.615983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.934 [2024-11-18 12:05:21.616039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:36:55.934 [2024-11-18 12:05:21.616064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:55.934 [2024-11-18 12:05:21.622358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.934 [2024-11-18 12:05:21.622416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.934 [2024-11-18 12:05:21.622444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:55.934 [2024-11-18 12:05:21.628219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.934 [2024-11-18 12:05:21.628279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.934 [2024-11-18 12:05:21.628326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:55.934 [2024-11-18 12:05:21.634441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.934 [2024-11-18 12:05:21.634517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.934 [2024-11-18 12:05:21.634554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:55.934 [2024-11-18 12:05:21.640639] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.934 [2024-11-18 12:05:21.640696] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.934 [2024-11-18 12:05:21.640720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:55.934 [2024-11-18 12:05:21.646693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.934 [2024-11-18 12:05:21.646736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.934 [2024-11-18 12:05:21.646762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:55.935 [2024-11-18 12:05:21.652977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.935 [2024-11-18 12:05:21.653020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.935 [2024-11-18 12:05:21.653061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:55.935 [2024-11-18 12:05:21.659148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.935 [2024-11-18 12:05:21.659203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.935 [2024-11-18 12:05:21.659229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:55.935 [2024-11-18 12:05:21.665936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x6150001f2a00) 00:36:55.935 [2024-11-18 12:05:21.665991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.935 [2024-11-18 12:05:21.666029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:55.935 [2024-11-18 12:05:21.673725] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.935 [2024-11-18 12:05:21.673767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.935 [2024-11-18 12:05:21.673808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:55.935 [2024-11-18 12:05:21.682071] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.935 [2024-11-18 12:05:21.682129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.935 [2024-11-18 12:05:21.682155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:55.935 [2024-11-18 12:05:21.689337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.935 [2024-11-18 12:05:21.689394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.935 [2024-11-18 12:05:21.689422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:55.935 [2024-11-18 12:05:21.696591] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.935 [2024-11-18 12:05:21.696648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.935 [2024-11-18 12:05:21.696675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:55.935 [2024-11-18 12:05:21.704791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.935 [2024-11-18 12:05:21.704835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.935 [2024-11-18 12:05:21.704862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:55.935 [2024-11-18 12:05:21.711769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.935 [2024-11-18 12:05:21.711839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.935 [2024-11-18 12:05:21.711882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:55.935 [2024-11-18 12:05:21.718733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.935 [2024-11-18 12:05:21.718777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.935 [2024-11-18 12:05:21.718804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:55.935 [2024-11-18 12:05:21.726225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.935 [2024-11-18 12:05:21.726269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.935 [2024-11-18 12:05:21.726296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:55.935 [2024-11-18 12:05:21.733508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.935 [2024-11-18 12:05:21.733565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.935 [2024-11-18 12:05:21.733607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:55.935 [2024-11-18 12:05:21.740896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.935 [2024-11-18 12:05:21.740955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.935 [2024-11-18 12:05:21.740982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:55.935 [2024-11-18 12:05:21.748278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.935 [2024-11-18 12:05:21.748337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.935 [2024-11-18 12:05:21.748378] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:55.935 [2024-11-18 12:05:21.755146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:55.935 [2024-11-18 12:05:21.755205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:55.935 [2024-11-18 12:05:21.755242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:55.935 [2024-11-18 12:05:21.762124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:55.935 [2024-11-18 12:05:21.762181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:55.935 [2024-11-18 12:05:21.762209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:55.935 [2024-11-18 12:05:21.766028] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:55.935 [2024-11-18 12:05:21.766071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:55.935 [2024-11-18 12:05:21.766098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:55.935 [2024-11-18 12:05:21.771879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:55.935 [2024-11-18 12:05:21.771934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:55.935 [2024-11-18 12:05:21.771960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:55.935 [2024-11-18 12:05:21.778373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:55.935 [2024-11-18 12:05:21.778413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:55.935 [2024-11-18 12:05:21.778454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:55.935 [2024-11-18 12:05:21.785351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:55.935 [2024-11-18 12:05:21.785405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:55.935 [2024-11-18 12:05:21.785432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:55.935 [2024-11-18 12:05:21.792269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:55.935 [2024-11-18 12:05:21.792324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:55.935 [2024-11-18 12:05:21.792363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:55.935 [2024-11-18 12:05:21.799166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:55.935 [2024-11-18 12:05:21.799221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:55.935 [2024-11-18 12:05:21.799247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:55.935 [2024-11-18 12:05:21.805333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:55.935 [2024-11-18 12:05:21.805391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:55.935 [2024-11-18 12:05:21.805433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:55.935 [2024-11-18 12:05:21.811608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:55.935 [2024-11-18 12:05:21.811662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:55.935 [2024-11-18 12:05:21.811688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:55.935 [2024-11-18 12:05:21.816940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:55.935 [2024-11-18 12:05:21.816983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:55.935 [2024-11-18 12:05:21.817010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:56.195 [2024-11-18 12:05:21.821633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:56.195 [2024-11-18 12:05:21.821690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:56.195 [2024-11-18 12:05:21.821717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:56.195 [2024-11-18 12:05:21.827011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:56.195 [2024-11-18 12:05:21.827052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:56.195 [2024-11-18 12:05:21.827094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:56.195 [2024-11-18 12:05:21.833152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:56.195 [2024-11-18 12:05:21.833208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:56.195 [2024-11-18 12:05:21.833237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:56.195 [2024-11-18 12:05:21.841755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:56.195 [2024-11-18 12:05:21.841813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:56.195 [2024-11-18 12:05:21.841873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:56.195 [2024-11-18 12:05:21.850284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:56.195 [2024-11-18 12:05:21.850338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:56.195 [2024-11-18 12:05:21.850364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:56.195 [2024-11-18 12:05:21.858953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:56.195 [2024-11-18 12:05:21.859011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:56.195 [2024-11-18 12:05:21.859040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:56.195 [2024-11-18 12:05:21.867574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:56.195 [2024-11-18 12:05:21.867633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:56.195 [2024-11-18 12:05:21.867669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:56.195 [2024-11-18 12:05:21.873109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:56.195 [2024-11-18 12:05:21.873152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:56.195 [2024-11-18 12:05:21.873180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:56.195 [2024-11-18 12:05:21.878843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:56.195 [2024-11-18 12:05:21.878897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:56.195 [2024-11-18 12:05:21.878922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:56.195 [2024-11-18 12:05:21.886316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:56.195 [2024-11-18 12:05:21.886368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:56.195 [2024-11-18 12:05:21.886395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:56.195 [2024-11-18 12:05:21.893438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:56.195 [2024-11-18 12:05:21.893501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:56.195 [2024-11-18 12:05:21.893530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:56.195 [2024-11-18 12:05:21.900072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:56.195 [2024-11-18 12:05:21.900126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:56.195 [2024-11-18 12:05:21.900152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:56.195 [2024-11-18 12:05:21.907409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:56.195 [2024-11-18 12:05:21.907464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:56.195 [2024-11-18 12:05:21.907497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:56.195 [2024-11-18 12:05:21.914022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:56.195 [2024-11-18 12:05:21.914066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:56.195 [2024-11-18 12:05:21.914093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:56.195 [2024-11-18 12:05:21.920724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:56.195 [2024-11-18 12:05:21.920780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:56.195 [2024-11-18 12:05:21.920808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:56.195 [2024-11-18 12:05:21.927021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:56.195 [2024-11-18 12:05:21.927065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:56.195 [2024-11-18 12:05:21.927093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:56.195 [2024-11-18 12:05:21.931258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:56.195 [2024-11-18 12:05:21.931301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:56.195 [2024-11-18 12:05:21.931327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:56.195 4652.00 IOPS, 581.50 MiB/s [2024-11-18T11:05:22.080Z] [2024-11-18 12:05:21.939629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:56.195 [2024-11-18 12:05:21.939671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:56.195 [2024-11-18 12:05:21.939714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:56.195 [2024-11-18 12:05:21.945753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:56.195 [2024-11-18 12:05:21.945821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:56.195 [2024-11-18 12:05:21.945848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:56.195 [2024-11-18 12:05:21.949864] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:56.195 [2024-11-18 12:05:21.949907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:56.196 [2024-11-18 12:05:21.949934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:56.196 [2024-11-18 12:05:21.956801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:56.196 [2024-11-18 12:05:21.956874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:56.196 [2024-11-18 12:05:21.956904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:56.196 [2024-11-18 12:05:21.964323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:56.196 [2024-11-18 12:05:21.964377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:56.196 [2024-11-18 12:05:21.964403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:56.196 [2024-11-18 12:05:21.971756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:56.196 [2024-11-18 12:05:21.971820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:56.196 [2024-11-18 12:05:21.971855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:56.196 [2024-11-18 12:05:21.978902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:56.196 [2024-11-18 12:05:21.978956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:56.196 [2024-11-18 12:05:21.979004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:56.196 [2024-11-18 12:05:21.985607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:56.196 [2024-11-18 12:05:21.985650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:56.196 [2024-11-18 12:05:21.985677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:56.196 [2024-11-18 12:05:21.992257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:56.196 [2024-11-18 12:05:21.992315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:56.196 [2024-11-18 12:05:21.992341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:56.196 [2024-11-18 12:05:21.998440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:56.196 [2024-11-18 12:05:21.998508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:56.196 [2024-11-18 12:05:21.998538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:56.196 [2024-11-18 12:05:22.004607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:56.196 [2024-11-18 12:05:22.004662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:56.196 [2024-11-18 12:05:22.004690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:56.196 [2024-11-18 12:05:22.010806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:56.196 [2024-11-18 12:05:22.010861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:56.196 [2024-11-18 12:05:22.010888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:56.196 [2024-11-18 12:05:22.017146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:56.196 [2024-11-18 12:05:22.017187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:56.196 [2024-11-18 12:05:22.017215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:56.196 [2024-11-18 12:05:22.023235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:56.196 [2024-11-18 12:05:22.023290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:56.196 [2024-11-18 12:05:22.023314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:56.196 [2024-11-18 12:05:22.029483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:56.196 [2024-11-18 12:05:22.029549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:56.196 [2024-11-18 12:05:22.029576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:56.196 [2024-11-18 12:05:22.035795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:56.196 [2024-11-18 12:05:22.035851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:56.196 [2024-11-18 12:05:22.035892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:56.196 [2024-11-18 12:05:22.041868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:56.196 [2024-11-18 12:05:22.041911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:56.196 [2024-11-18 12:05:22.041937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:56.196 [2024-11-18 12:05:22.048016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:56.196 [2024-11-18 12:05:22.048075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:56.196 [2024-11-18 12:05:22.048102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:56.196 [2024-11-18 12:05:22.054331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:56.196 [2024-11-18 12:05:22.054388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:56.196 [2024-11-18 12:05:22.054414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:56.196 [2024-11-18 12:05:22.060624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:56.196 [2024-11-18 12:05:22.060679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:56.196 [2024-11-18 12:05:22.060721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:56.196 [2024-11-18 12:05:22.066823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:56.196 [2024-11-18 12:05:22.066864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:56.196 [2024-11-18 12:05:22.066891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:56.196 [2024-11-18 12:05:22.073132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:56.196 [2024-11-18 12:05:22.073187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:56.196 [2024-11-18 12:05:22.073212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:56.196 [2024-11-18 12:05:22.079445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:56.196 [2024-11-18 12:05:22.079522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:56.196 [2024-11-18 12:05:22.079549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:56.456 [2024-11-18 12:05:22.085813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:56.456 [2024-11-18 12:05:22.085853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:56.456 [2024-11-18 12:05:22.085901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:56.456 [2024-11-18 12:05:22.092023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:56.456 [2024-11-18 12:05:22.092065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:56.456 [2024-11-18 12:05:22.092092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:56.456 [2024-11-18 12:05:22.098352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:56.456 [2024-11-18 12:05:22.098415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:56.456 [2024-11-18 12:05:22.098442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:56.456 [2024-11-18 12:05:22.104622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:56.456 [2024-11-18 12:05:22.104678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:56.456 [2024-11-18 12:05:22.104706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:56.456 [2024-11-18 12:05:22.111076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:56.456 [2024-11-18 12:05:22.111131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:56.456 [2024-11-18 12:05:22.111159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:56.456 [2024-11-18 12:05:22.117235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:56.456 [2024-11-18 12:05:22.117278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:56.456 [2024-11-18 12:05:22.117305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:56.456 [2024-11-18 12:05:22.123533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:56.456 [2024-11-18 12:05:22.123604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:56.456 [2024-11-18 12:05:22.123631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:56.456 [2024-11-18 12:05:22.129785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:56.456 [2024-11-18 12:05:22.129824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:56.456 [2024-11-18 12:05:22.129865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:56.456 [2024-11-18 12:05:22.136015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:56.456 [2024-11-18 12:05:22.136071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:56.456 [2024-11-18 12:05:22.136097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:56.456 [2024-11-18 12:05:22.142171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:56.456 [2024-11-18 12:05:22.142227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:56.456 [2024-11-18 12:05:22.142253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:56.456 [2024-11-18 12:05:22.148549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:56.456 [2024-11-18 12:05:22.148598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:56.456 [2024-11-18 12:05:22.148624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:56.456 [2024-11-18 12:05:22.155061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:56.456 [2024-11-18 12:05:22.155105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:56.456 [2024-11-18 12:05:22.155132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:56.456 [2024-11-18 12:05:22.161422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:56.456 [2024-11-18 12:05:22.161464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:56.456 [2024-11-18 12:05:22.161509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:56.456 [2024-11-18 12:05:22.167960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:56.456 [2024-11-18 12:05:22.168017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:56.456 [2024-11-18 12:05:22.168043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:56.456 [2024-11-18 12:05:22.174393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:56.456 [2024-11-18 12:05:22.174450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:56.456 [2024-11-18 12:05:22.174485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:56.456 [2024-11-18 12:05:22.180712] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:56.456 [2024-11-18 12:05:22.180767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:56.456 [2024-11-18 12:05:22.180799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:56.456 [2024-11-18 12:05:22.186894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:56.456 [2024-11-18 12:05:22.186949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:56.456 [2024-11-18 12:05:22.186989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:56.456 [2024-11-18 12:05:22.192965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:56.456 [2024-11-18 12:05:22.193022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:56.456 [2024-11-18 12:05:22.193058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:56.456 [2024-11-18 12:05:22.199079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:56.456 [2024-11-18 12:05:22.199121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:56.456 [2024-11-18 12:05:22.199147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:56.456 [2024-11-18 12:05:22.205193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:56.456 [2024-11-18 12:05:22.205249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:56.456 [2024-11-18 12:05:22.205290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:56.456 [2024-11-18 12:05:22.211700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:56.456 [2024-11-18 12:05:22.211741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:56.457 [2024-11-18 12:05:22.211768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:56.457 [2024-11-18 12:05:22.218155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:56.457 [2024-11-18 12:05:22.218198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:56.457 [2024-11-18 12:05:22.218225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:56.457 [2024-11-18 12:05:22.224419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:56.457 [2024-11-18 12:05:22.224474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:56.457 [2024-11-18 12:05:22.224510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:56.457 [2024-11-18 12:05:22.230730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:56.457 [2024-11-18 12:05:22.230789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:56.457 [2024-11-18 12:05:22.230815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:56.457 [2024-11-18 12:05:22.235383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:56.457 [2024-11-18 12:05:22.235424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:56.457 [2024-11-18 12:05:22.235466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:56.457 [2024-11-18 12:05:22.240261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:56.457 [2024-11-18 12:05:22.240314] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.457 [2024-11-18 12:05:22.240340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:56.457 [2024-11-18 12:05:22.246046] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.457 [2024-11-18 12:05:22.246108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.457 [2024-11-18 12:05:22.246134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:56.457 [2024-11-18 12:05:22.252320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.457 [2024-11-18 12:05:22.252374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.457 [2024-11-18 12:05:22.252401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:56.457 [2024-11-18 12:05:22.258620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.457 [2024-11-18 12:05:22.258674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.457 [2024-11-18 12:05:22.258698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:56.457 [2024-11-18 12:05:22.265020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x6150001f2a00) 00:36:56.457 [2024-11-18 12:05:22.265074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.457 [2024-11-18 12:05:22.265113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:56.457 [2024-11-18 12:05:22.271251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.457 [2024-11-18 12:05:22.271305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.457 [2024-11-18 12:05:22.271330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:56.457 [2024-11-18 12:05:22.277423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.457 [2024-11-18 12:05:22.277462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.457 [2024-11-18 12:05:22.277487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:56.457 [2024-11-18 12:05:22.283450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.457 [2024-11-18 12:05:22.283522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.457 [2024-11-18 12:05:22.283549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:56.457 [2024-11-18 12:05:22.289618] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.457 [2024-11-18 12:05:22.289671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.457 [2024-11-18 12:05:22.289698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:56.457 [2024-11-18 12:05:22.295875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.457 [2024-11-18 12:05:22.295928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.457 [2024-11-18 12:05:22.295964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:56.457 [2024-11-18 12:05:22.302704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.457 [2024-11-18 12:05:22.302778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.457 [2024-11-18 12:05:22.302807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:56.457 [2024-11-18 12:05:22.309343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.457 [2024-11-18 12:05:22.309398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.457 [2024-11-18 12:05:22.309424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:56.457 [2024-11-18 12:05:22.315603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.457 [2024-11-18 12:05:22.315660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.457 [2024-11-18 12:05:22.315685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:56.457 [2024-11-18 12:05:22.322005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.457 [2024-11-18 12:05:22.322060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.457 [2024-11-18 12:05:22.322085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:56.457 [2024-11-18 12:05:22.328287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.457 [2024-11-18 12:05:22.328340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.457 [2024-11-18 12:05:22.328366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:56.457 [2024-11-18 12:05:22.334414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.457 [2024-11-18 12:05:22.334467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.457 [2024-11-18 12:05:22.334501] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:56.457 [2024-11-18 12:05:22.340744] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.457 [2024-11-18 12:05:22.340786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.457 [2024-11-18 12:05:22.340813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:56.717 [2024-11-18 12:05:22.346844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.717 [2024-11-18 12:05:22.346897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.717 [2024-11-18 12:05:22.346923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:56.717 [2024-11-18 12:05:22.353104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.717 [2024-11-18 12:05:22.353166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.717 [2024-11-18 12:05:22.353193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:56.717 [2024-11-18 12:05:22.359215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.717 [2024-11-18 12:05:22.359269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:36:56.717 [2024-11-18 12:05:22.359293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:56.717 [2024-11-18 12:05:22.365295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.717 [2024-11-18 12:05:22.365351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.717 [2024-11-18 12:05:22.365378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:56.717 [2024-11-18 12:05:22.371560] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.717 [2024-11-18 12:05:22.371614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.717 [2024-11-18 12:05:22.371639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:56.717 [2024-11-18 12:05:22.377821] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.717 [2024-11-18 12:05:22.377875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.717 [2024-11-18 12:05:22.377900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:56.717 [2024-11-18 12:05:22.383803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.717 [2024-11-18 12:05:22.383858] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.717 [2024-11-18 12:05:22.383884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:56.717 [2024-11-18 12:05:22.390006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.717 [2024-11-18 12:05:22.390063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.717 [2024-11-18 12:05:22.390090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:56.717 [2024-11-18 12:05:22.396306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.717 [2024-11-18 12:05:22.396358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.717 [2024-11-18 12:05:22.396384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:56.717 [2024-11-18 12:05:22.402334] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.717 [2024-11-18 12:05:22.402388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.717 [2024-11-18 12:05:22.402435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:56.717 [2024-11-18 12:05:22.408327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x6150001f2a00) 00:36:56.717 [2024-11-18 12:05:22.408383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.717 [2024-11-18 12:05:22.408421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:56.717 [2024-11-18 12:05:22.414613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.717 [2024-11-18 12:05:22.414651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.717 [2024-11-18 12:05:22.414692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:56.717 [2024-11-18 12:05:22.420738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.717 [2024-11-18 12:05:22.420794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.717 [2024-11-18 12:05:22.420819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:56.717 [2024-11-18 12:05:22.426821] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.717 [2024-11-18 12:05:22.426875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.717 [2024-11-18 12:05:22.426901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:56.717 [2024-11-18 12:05:22.433126] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.717 [2024-11-18 12:05:22.433173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.717 [2024-11-18 12:05:22.433203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:56.717 [2024-11-18 12:05:22.439304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.717 [2024-11-18 12:05:22.439347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.717 [2024-11-18 12:05:22.439389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:56.717 [2024-11-18 12:05:22.445362] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.717 [2024-11-18 12:05:22.445403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.717 [2024-11-18 12:05:22.445429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:56.717 [2024-11-18 12:05:22.451532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.717 [2024-11-18 12:05:22.451585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.717 [2024-11-18 12:05:22.451611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 
cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:56.717 [2024-11-18 12:05:22.457758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.717 [2024-11-18 12:05:22.457824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.717 [2024-11-18 12:05:22.457850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:56.718 [2024-11-18 12:05:22.464225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.718 [2024-11-18 12:05:22.464283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.718 [2024-11-18 12:05:22.464310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:56.718 [2024-11-18 12:05:22.470608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.718 [2024-11-18 12:05:22.470663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.718 [2024-11-18 12:05:22.470704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:56.718 [2024-11-18 12:05:22.476831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.718 [2024-11-18 12:05:22.476878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.718 [2024-11-18 12:05:22.476906] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:56.718 [2024-11-18 12:05:22.483236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.718 [2024-11-18 12:05:22.483290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.718 [2024-11-18 12:05:22.483315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:56.718 [2024-11-18 12:05:22.489285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.718 [2024-11-18 12:05:22.489324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.718 [2024-11-18 12:05:22.489349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:56.718 [2024-11-18 12:05:22.495418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.718 [2024-11-18 12:05:22.495473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.718 [2024-11-18 12:05:22.495519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:56.718 [2024-11-18 12:05:22.501606] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.718 [2024-11-18 12:05:22.501660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12288 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:36:56.718 [2024-11-18 12:05:22.501684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:56.718 [2024-11-18 12:05:22.507747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.718 [2024-11-18 12:05:22.507803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.718 [2024-11-18 12:05:22.507829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:56.718 [2024-11-18 12:05:22.513905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.718 [2024-11-18 12:05:22.513959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.718 [2024-11-18 12:05:22.513985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:56.718 [2024-11-18 12:05:22.520223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.718 [2024-11-18 12:05:22.520276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.718 [2024-11-18 12:05:22.520311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:56.718 [2024-11-18 12:05:22.526954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.718 [2024-11-18 12:05:22.527009] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.718 [2024-11-18 12:05:22.527034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:56.718 [2024-11-18 12:05:22.533912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.718 [2024-11-18 12:05:22.533966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.718 [2024-11-18 12:05:22.533992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:56.718 [2024-11-18 12:05:22.540787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.718 [2024-11-18 12:05:22.540842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.718 [2024-11-18 12:05:22.540867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:56.718 [2024-11-18 12:05:22.547907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.718 [2024-11-18 12:05:22.547962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.718 [2024-11-18 12:05:22.547989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:56.718 [2024-11-18 12:05:22.554676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x6150001f2a00) 00:36:56.718 [2024-11-18 12:05:22.554733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.718 [2024-11-18 12:05:22.554775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:56.718 [2024-11-18 12:05:22.560113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.718 [2024-11-18 12:05:22.560169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.718 [2024-11-18 12:05:22.560197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:56.718 [2024-11-18 12:05:22.564112] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.718 [2024-11-18 12:05:22.564178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.718 [2024-11-18 12:05:22.564206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:56.718 [2024-11-18 12:05:22.569908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.718 [2024-11-18 12:05:22.569961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.718 [2024-11-18 12:05:22.569987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:56.718 [2024-11-18 12:05:22.576064] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.718 [2024-11-18 12:05:22.576119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.718 [2024-11-18 12:05:22.576146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:56.718 [2024-11-18 12:05:22.582709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.718 [2024-11-18 12:05:22.582752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.718 [2024-11-18 12:05:22.582795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:56.718 [2024-11-18 12:05:22.588603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.718 [2024-11-18 12:05:22.588645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.718 [2024-11-18 12:05:22.588672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:56.718 [2024-11-18 12:05:22.594146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.718 [2024-11-18 12:05:22.594201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.718 [2024-11-18 12:05:22.594227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:56.718 [2024-11-18 12:05:22.598328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.718 [2024-11-18 12:05:22.598384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.718 [2024-11-18 12:05:22.598411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:56.978 [2024-11-18 12:05:22.604199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.978 [2024-11-18 12:05:22.604252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.978 [2024-11-18 12:05:22.604277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:56.978 [2024-11-18 12:05:22.610453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.978 [2024-11-18 12:05:22.610519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.978 [2024-11-18 12:05:22.610562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:56.978 [2024-11-18 12:05:22.616593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.978 [2024-11-18 12:05:22.616649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.978 [2024-11-18 12:05:22.616676] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:56.978 [2024-11-18 12:05:22.622929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.978 [2024-11-18 12:05:22.622982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.978 [2024-11-18 12:05:22.623007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:56.978 [2024-11-18 12:05:22.629463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.978 [2024-11-18 12:05:22.629520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.978 [2024-11-18 12:05:22.629552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:56.978 [2024-11-18 12:05:22.636271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.978 [2024-11-18 12:05:22.636326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.978 [2024-11-18 12:05:22.636352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:56.978 [2024-11-18 12:05:22.642659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.978 [2024-11-18 12:05:22.642716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14656 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:36:56.978 [2024-11-18 12:05:22.642741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:56.978 [2024-11-18 12:05:22.649576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.978 [2024-11-18 12:05:22.649630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.978 [2024-11-18 12:05:22.649672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:56.978 [2024-11-18 12:05:22.656345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.978 [2024-11-18 12:05:22.656402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.978 [2024-11-18 12:05:22.656429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:56.978 [2024-11-18 12:05:22.663055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.978 [2024-11-18 12:05:22.663109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.978 [2024-11-18 12:05:22.663149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:56.978 [2024-11-18 12:05:22.669699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.978 [2024-11-18 12:05:22.669754] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.978 [2024-11-18 12:05:22.669789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:56.978 [2024-11-18 12:05:22.676538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.978 [2024-11-18 12:05:22.676594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.978 [2024-11-18 12:05:22.676621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:56.978 [2024-11-18 12:05:22.683497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.978 [2024-11-18 12:05:22.683551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.978 [2024-11-18 12:05:22.683576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:56.978 [2024-11-18 12:05:22.690191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.978 [2024-11-18 12:05:22.690246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.978 [2024-11-18 12:05:22.690273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:56.979 [2024-11-18 12:05:22.696797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x6150001f2a00) 00:36:56.979 [2024-11-18 12:05:22.696846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.979 [2024-11-18 12:05:22.696876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:56.979 [2024-11-18 12:05:22.703534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.979 [2024-11-18 12:05:22.703589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.979 [2024-11-18 12:05:22.703615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:56.979 [2024-11-18 12:05:22.710480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.979 [2024-11-18 12:05:22.710554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.979 [2024-11-18 12:05:22.710581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:56.979 [2024-11-18 12:05:22.717310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.979 [2024-11-18 12:05:22.717365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.979 [2024-11-18 12:05:22.717392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:56.979 [2024-11-18 12:05:22.723977] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.979 [2024-11-18 12:05:22.724032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.979 [2024-11-18 12:05:22.724059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:56.979 [2024-11-18 12:05:22.730600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.979 [2024-11-18 12:05:22.730655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.979 [2024-11-18 12:05:22.730682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:56.979 [2024-11-18 12:05:22.737536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.979 [2024-11-18 12:05:22.737579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.979 [2024-11-18 12:05:22.737604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:56.979 [2024-11-18 12:05:22.744348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.979 [2024-11-18 12:05:22.744402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.979 [2024-11-18 12:05:22.744428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:56.979 [2024-11-18 12:05:22.751265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.979 [2024-11-18 12:05:22.751314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.979 [2024-11-18 12:05:22.751344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:56.979 [2024-11-18 12:05:22.758009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.979 [2024-11-18 12:05:22.758065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.979 [2024-11-18 12:05:22.758091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:56.979 [2024-11-18 12:05:22.764808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.979 [2024-11-18 12:05:22.764863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.979 [2024-11-18 12:05:22.764890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:56.979 [2024-11-18 12:05:22.771346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.979 [2024-11-18 12:05:22.771401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.979 [2024-11-18 12:05:22.771426] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:56.979 [2024-11-18 12:05:22.778031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.979 [2024-11-18 12:05:22.778086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.979 [2024-11-18 12:05:22.778113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:56.979 [2024-11-18 12:05:22.784936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.979 [2024-11-18 12:05:22.784992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.979 [2024-11-18 12:05:22.785040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:56.979 [2024-11-18 12:05:22.792330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.979 [2024-11-18 12:05:22.792380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.979 [2024-11-18 12:05:22.792411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:56.979 [2024-11-18 12:05:22.798900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.979 [2024-11-18 12:05:22.798958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1600 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:36:56.979 [2024-11-18 12:05:22.798984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:56.979 [2024-11-18 12:05:22.805908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.979 [2024-11-18 12:05:22.805974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.979 [2024-11-18 12:05:22.806002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:56.979 [2024-11-18 12:05:22.812605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.979 [2024-11-18 12:05:22.812662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.979 [2024-11-18 12:05:22.812689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:56.979 [2024-11-18 12:05:22.819596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.979 [2024-11-18 12:05:22.819640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.979 [2024-11-18 12:05:22.819667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:56.979 [2024-11-18 12:05:22.826319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.979 [2024-11-18 12:05:22.826377] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.979 [2024-11-18 12:05:22.826405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:56.979 [2024-11-18 12:05:22.830683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.979 [2024-11-18 12:05:22.830739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.979 [2024-11-18 12:05:22.830767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:56.979 [2024-11-18 12:05:22.837300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.979 [2024-11-18 12:05:22.837354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.979 [2024-11-18 12:05:22.837382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:56.979 [2024-11-18 12:05:22.844299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.979 [2024-11-18 12:05:22.844355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.979 [2024-11-18 12:05:22.844388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:56.979 [2024-11-18 12:05:22.850869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x6150001f2a00) 00:36:56.979 [2024-11-18 12:05:22.850923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.979 [2024-11-18 12:05:22.850950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:56.979 [2024-11-18 12:05:22.857773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.979 [2024-11-18 12:05:22.857839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.979 [2024-11-18 12:05:22.857875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:57.238 [2024-11-18 12:05:22.865009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:57.238 [2024-11-18 12:05:22.865065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.238 [2024-11-18 12:05:22.865091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:57.238 [2024-11-18 12:05:22.871967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:57.238 [2024-11-18 12:05:22.872033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.238 [2024-11-18 12:05:22.872061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:57.238 [2024-11-18 12:05:22.877662] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:57.238 [2024-11-18 12:05:22.877718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.238 [2024-11-18 12:05:22.877747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:57.238 [2024-11-18 12:05:22.884703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:57.238 [2024-11-18 12:05:22.884759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.238 [2024-11-18 12:05:22.884787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:57.238 [2024-11-18 12:05:22.891803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:57.238 [2024-11-18 12:05:22.891866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.238 [2024-11-18 12:05:22.891893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:57.238 [2024-11-18 12:05:22.898545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:57.238 [2024-11-18 12:05:22.898587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.238 [2024-11-18 12:05:22.898621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:57.238 [2024-11-18 12:05:22.904597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:57.238 [2024-11-18 12:05:22.904652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.238 [2024-11-18 12:05:22.904679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:57.238 [2024-11-18 12:05:22.908562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:57.238 [2024-11-18 12:05:22.908608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.238 [2024-11-18 12:05:22.908636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:57.238 [2024-11-18 12:05:22.914387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:57.238 [2024-11-18 12:05:22.914444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.239 [2024-11-18 12:05:22.914471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:57.239 [2024-11-18 12:05:22.920751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:57.239 [2024-11-18 12:05:22.920810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.239 [2024-11-18 12:05:22.920836] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:57.239 [2024-11-18 12:05:22.927029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:57.239 [2024-11-18 12:05:22.927069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.239 [2024-11-18 12:05:22.927109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:57.239 [2024-11-18 12:05:22.933019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:57.239 [2024-11-18 12:05:22.933121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.239 [2024-11-18 12:05:22.933161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:57.239 4779.00 IOPS, 597.38 MiB/s 00:36:57.239 Latency(us) 00:36:57.239 [2024-11-18T11:05:23.124Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:57.239 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:36:57.239 nvme0n1 : 2.00 4779.48 597.44 0.00 0.00 3341.51 1128.68 15631.55 00:36:57.239 [2024-11-18T11:05:23.124Z] =================================================================================================================== 00:36:57.239 [2024-11-18T11:05:23.124Z] Total : 4779.48 597.44 0.00 0.00 3341.51 1128.68 15631.55 00:36:57.239 { 00:36:57.239 "results": [ 00:36:57.239 { 00:36:57.239 "job": "nvme0n1", 00:36:57.239 "core_mask": "0x2", 00:36:57.239 "workload": "randread", 00:36:57.239 "status": "finished", 00:36:57.239 "queue_depth": 16, 00:36:57.239 
"io_size": 131072, 00:36:57.239 "runtime": 2.003146, 00:36:57.239 "iops": 4779.481875010609, 00:36:57.239 "mibps": 597.4352343763261, 00:36:57.239 "io_failed": 0, 00:36:57.239 "io_timeout": 0, 00:36:57.239 "avg_latency_us": 3341.5071596685466, 00:36:57.239 "min_latency_us": 1128.6755555555555, 00:36:57.239 "max_latency_us": 15631.54962962963 00:36:57.239 } 00:36:57.239 ], 00:36:57.239 "core_count": 1 00:36:57.239 } 00:36:57.239 12:05:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:36:57.239 12:05:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:36:57.239 12:05:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:36:57.239 12:05:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:36:57.239 | .driver_specific 00:36:57.239 | .nvme_error 00:36:57.239 | .status_code 00:36:57.239 | .command_transient_transport_error' 00:36:57.497 12:05:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 309 > 0 )) 00:36:57.497 12:05:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3133079 00:36:57.497 12:05:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3133079 ']' 00:36:57.497 12:05:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3133079 00:36:57.497 12:05:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:36:57.497 12:05:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:57.497 12:05:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o 
comm= 3133079 00:36:57.497 12:05:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:57.497 12:05:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:57.497 12:05:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3133079' 00:36:57.497 killing process with pid 3133079 00:36:57.497 12:05:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3133079 00:36:57.497 Received shutdown signal, test time was about 2.000000 seconds 00:36:57.497 00:36:57.497 Latency(us) 00:36:57.497 [2024-11-18T11:05:23.382Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:57.498 [2024-11-18T11:05:23.383Z] =================================================================================================================== 00:36:57.498 [2024-11-18T11:05:23.383Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:57.498 12:05:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3133079 00:36:58.432 12:05:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:36:58.432 12:05:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:36:58.432 12:05:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:36:58.432 12:05:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:36:58.432 12:05:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:36:58.432 12:05:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3133740 00:36:58.432 12:05:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3133740 /var/tmp/bperf.sock 
00:36:58.432 12:05:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:36:58.432 12:05:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3133740 ']' 00:36:58.432 12:05:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:58.432 12:05:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:58.432 12:05:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:58.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:58.432 12:05:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:58.432 12:05:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:58.432 [2024-11-18 12:05:24.258972] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:36:58.432 [2024-11-18 12:05:24.259112] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3133740 ] 00:36:58.690 [2024-11-18 12:05:24.398350] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:58.690 [2024-11-18 12:05:24.532370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:59.624 12:05:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:59.625 12:05:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:36:59.625 12:05:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:59.625 12:05:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:59.883 12:05:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:36:59.883 12:05:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:59.883 12:05:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:59.883 12:05:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:59.883 12:05:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:59.883 12:05:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:00.141 nvme0n1 00:37:00.141 12:05:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:37:00.141 12:05:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:00.141 12:05:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:00.141 12:05:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:00.141 12:05:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:37:00.141 12:05:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:00.400 Running I/O for 2 seconds... 
00:37:00.400 [2024-11-18 12:05:26.133221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bef6a8 00:37:00.400 [2024-11-18 12:05:26.135107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:23785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.400 [2024-11-18 12:05:26.135168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:37:00.400 [2024-11-18 12:05:26.148914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bde8a8 00:37:00.400 [2024-11-18 12:05:26.150688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.400 [2024-11-18 12:05:26.150731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:37:00.400 [2024-11-18 12:05:26.165771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be6fa8 00:37:00.400 [2024-11-18 12:05:26.167210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:24393 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.400 [2024-11-18 12:05:26.167255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:37:00.400 [2024-11-18 12:05:26.186247] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfe2e8 00:37:00.400 [2024-11-18 12:05:26.188615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.400 [2024-11-18 12:05:26.188672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:00.400 [2024-11-18 12:05:26.203089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf0788 00:37:00.400 [2024-11-18 12:05:26.205679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:14470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.400 [2024-11-18 12:05:26.205721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:37:00.400 [2024-11-18 12:05:26.214790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be6fa8 00:37:00.400 [2024-11-18 12:05:26.215906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:11699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.400 [2024-11-18 12:05:26.215950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:37:00.400 [2024-11-18 12:05:26.231792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be5a90 00:37:00.400 [2024-11-18 12:05:26.232523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:17950 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.400 [2024-11-18 12:05:26.232580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:37:00.400 [2024-11-18 12:05:26.250596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf1868 00:37:00.400 [2024-11-18 12:05:26.252654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:7047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.400 [2024-11-18 12:05:26.252710] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:37:00.400 [2024-11-18 12:05:26.263612] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be6738 00:37:00.400 [2024-11-18 12:05:26.264769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.400 [2024-11-18 12:05:26.264823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:37:00.400 [2024-11-18 12:05:26.280598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfda78 00:37:00.400 [2024-11-18 12:05:26.282024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:10083 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.400 [2024-11-18 12:05:26.282070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:37:00.658 [2024-11-18 12:05:26.297102] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf7538 00:37:00.659 [2024-11-18 12:05:26.298653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:19103 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.659 [2024-11-18 12:05:26.298698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:37:00.659 [2024-11-18 12:05:26.313197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf5be8 00:37:00.659 [2024-11-18 12:05:26.314904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:9697 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:37:00.659 [2024-11-18 12:05:26.314945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:37:00.659 [2024-11-18 12:05:26.329502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016beaab8 00:37:00.659 [2024-11-18 12:05:26.331194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:16735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.659 [2024-11-18 12:05:26.331239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:37:00.659 [2024-11-18 12:05:26.344358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be5ec8 00:37:00.659 [2024-11-18 12:05:26.345354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8521 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.659 [2024-11-18 12:05:26.345399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:37:00.659 [2024-11-18 12:05:26.360619] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016beb328 00:37:00.659 [2024-11-18 12:05:26.361408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:10118 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.659 [2024-11-18 12:05:26.361470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:37:00.659 [2024-11-18 12:05:26.379109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bed4e8 00:37:00.659 [2024-11-18 12:05:26.381160] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:18001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.659 [2024-11-18 12:05:26.381205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:37:00.659 [2024-11-18 12:05:26.394214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf6890 00:37:00.659 [2024-11-18 12:05:26.396009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:2446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.659 [2024-11-18 12:05:26.396050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:37:00.659 [2024-11-18 12:05:26.410695] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfd640 00:37:00.659 [2024-11-18 12:05:26.412252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.659 [2024-11-18 12:05:26.412296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:00.659 [2024-11-18 12:05:26.425710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf35f0 00:37:00.659 [2024-11-18 12:05:26.427272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:4733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.659 [2024-11-18 12:05:26.427325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:37:00.659 [2024-11-18 12:05:26.443119] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfe720 00:37:00.659 [2024-11-18 
12:05:26.444906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:23852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.659 [2024-11-18 12:05:26.444950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:00.659 [2024-11-18 12:05:26.459564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfc560 00:37:00.659 [2024-11-18 12:05:26.461579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:14363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.659 [2024-11-18 12:05:26.461632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:00.659 [2024-11-18 12:05:26.474516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf57b0 00:37:00.659 [2024-11-18 12:05:26.476514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:24730 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.659 [2024-11-18 12:05:26.476571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:37:00.659 [2024-11-18 12:05:26.490866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfa7d8 00:37:00.659 [2024-11-18 12:05:26.492808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:2588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.659 [2024-11-18 12:05:26.492847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:37:00.659 [2024-11-18 12:05:26.506498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000004480) with pdu=0x200016be5ec8 00:37:00.659 [2024-11-18 12:05:26.507782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:4920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.659 [2024-11-18 12:05:26.507838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:37:00.659 [2024-11-18 12:05:26.521642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016becc78 00:37:00.659 [2024-11-18 12:05:26.524084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:8280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.659 [2024-11-18 12:05:26.524123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:00.659 [2024-11-18 12:05:26.537921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf8618 00:37:00.659 [2024-11-18 12:05:26.539770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:4368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.659 [2024-11-18 12:05:26.539810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:00.918 [2024-11-18 12:05:26.553752] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf46d0 00:37:00.918 [2024-11-18 12:05:26.555326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:19439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.918 [2024-11-18 12:05:26.555370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:00.918 [2024-11-18 12:05:26.569659] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf4b08 00:37:00.918 [2024-11-18 12:05:26.571447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:8107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.918 [2024-11-18 12:05:26.571502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:37:00.918 [2024-11-18 12:05:26.585956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf6458 00:37:00.918 [2024-11-18 12:05:26.587513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:15238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.918 [2024-11-18 12:05:26.587575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:37:00.918 [2024-11-18 12:05:26.603027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfda78 00:37:00.918 [2024-11-18 12:05:26.604925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:6097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.918 [2024-11-18 12:05:26.604971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:37:00.918 [2024-11-18 12:05:26.618469] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be84c0 00:37:00.918 [2024-11-18 12:05:26.621148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:8814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.918 [2024-11-18 12:05:26.621189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 
sqhd:0062 p:0 m:0 dnr:0 00:37:00.918 [2024-11-18 12:05:26.634117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfb048 00:37:00.918 [2024-11-18 12:05:26.635316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:10794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.918 [2024-11-18 12:05:26.635360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:37:00.918 [2024-11-18 12:05:26.651239] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be0a68 00:37:00.918 [2024-11-18 12:05:26.652938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:3800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.918 [2024-11-18 12:05:26.652983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:37:00.918 [2024-11-18 12:05:26.668190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf4f40 00:37:00.918 [2024-11-18 12:05:26.669349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:19195 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.918 [2024-11-18 12:05:26.669394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:37:00.918 [2024-11-18 12:05:26.685305] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf0bc0 00:37:00.918 [2024-11-18 12:05:26.686897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:4866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.918 [2024-11-18 12:05:26.686943] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:37:00.918 [2024-11-18 12:05:26.702044] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be5658 00:37:00.918 [2024-11-18 12:05:26.703796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:19659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.918 [2024-11-18 12:05:26.703850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:37:00.918 [2024-11-18 12:05:26.717179] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be1710 00:37:00.918 [2024-11-18 12:05:26.718919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:12653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.918 [2024-11-18 12:05:26.718962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:00.918 [2024-11-18 12:05:26.733549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf96f8 00:37:00.918 [2024-11-18 12:05:26.734608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:7708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.918 [2024-11-18 12:05:26.734649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:37:00.918 [2024-11-18 12:05:26.752781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be7c50 00:37:00.918 [2024-11-18 12:05:26.755393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:10246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.918 
[2024-11-18 12:05:26.755437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:37:00.918 [2024-11-18 12:05:26.764068] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfa3a0 00:37:00.918 [2024-11-18 12:05:26.765045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.918 [2024-11-18 12:05:26.765099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:00.918 [2024-11-18 12:05:26.778931] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf6458 00:37:00.918 [2024-11-18 12:05:26.780064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:14936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.919 [2024-11-18 12:05:26.780108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:37:00.919 [2024-11-18 12:05:26.794965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016beea00 00:37:00.919 [2024-11-18 12:05:26.796108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:5168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.919 [2024-11-18 12:05:26.796152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:37:01.177 [2024-11-18 12:05:26.813780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf3a28 00:37:01.177 [2024-11-18 12:05:26.815577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 
lba:11970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.177 [2024-11-18 12:05:26.815632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:37:01.177 [2024-11-18 12:05:26.832012] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfb480 00:37:01.177 [2024-11-18 12:05:26.834590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:13047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.177 [2024-11-18 12:05:26.834628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:01.177 [2024-11-18 12:05:26.844622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be6300 00:37:01.177 [2024-11-18 12:05:26.846340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:8859 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.177 [2024-11-18 12:05:26.846384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:01.177 [2024-11-18 12:05:26.859535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf7970 00:37:01.177 [2024-11-18 12:05:26.860645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:21924 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.177 [2024-11-18 12:05:26.860699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:37:01.177 [2024-11-18 12:05:26.875797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfac10 00:37:01.177 [2024-11-18 12:05:26.876654] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:6512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.177 [2024-11-18 12:05:26.876695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:01.177 [2024-11-18 12:05:26.892187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be4140 00:37:01.177 [2024-11-18 12:05:26.893526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:9436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.177 [2024-11-18 12:05:26.893582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:01.177 [2024-11-18 12:05:26.908641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be5a90 00:37:01.177 [2024-11-18 12:05:26.910174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:14119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.177 [2024-11-18 12:05:26.910218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:37:01.177 [2024-11-18 12:05:26.924468] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf81e0 00:37:01.177 [2024-11-18 12:05:26.926205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:1624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.177 [2024-11-18 12:05:26.926249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:37:01.177 [2024-11-18 12:05:26.940577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with 
pdu=0x200016be84c0 00:37:01.177 [2024-11-18 12:05:26.941659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:8505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.177 [2024-11-18 12:05:26.941714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:37:01.177 [2024-11-18 12:05:26.959599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfbcf0 00:37:01.177 [2024-11-18 12:05:26.962125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.177 [2024-11-18 12:05:26.962170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:01.177 [2024-11-18 12:05:26.970776] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be8d30 00:37:01.177 [2024-11-18 12:05:26.971936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:25309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.177 [2024-11-18 12:05:26.971980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:37:01.177 [2024-11-18 12:05:26.987185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be6fa8 00:37:01.177 [2024-11-18 12:05:26.988044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.177 [2024-11-18 12:05:26.988090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:37:01.177 [2024-11-18 12:05:27.006206] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfa7d8 00:37:01.177 [2024-11-18 12:05:27.008539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:11320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.177 [2024-11-18 12:05:27.008579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:37:01.177 [2024-11-18 12:05:27.022925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfe720 00:37:01.177 [2024-11-18 12:05:27.025473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:7029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.177 [2024-11-18 12:05:27.025540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:37:01.177 [2024-11-18 12:05:27.035678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be7c50 00:37:01.177 [2024-11-18 12:05:27.037357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:10383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.177 [2024-11-18 12:05:27.037401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:37:01.177 [2024-11-18 12:05:27.050901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be2c28 00:37:01.177 [2024-11-18 12:05:27.053126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:9763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.177 [2024-11-18 12:05:27.053165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 
sqhd:005d p:0 m:0 dnr:0 00:37:01.436 [2024-11-18 12:05:27.065583] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be5220 00:37:01.436 [2024-11-18 12:05:27.066656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:8054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.436 [2024-11-18 12:05:27.066709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:37:01.436 [2024-11-18 12:05:27.082540] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bea680 00:37:01.436 [2024-11-18 12:05:27.084010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:1826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.436 [2024-11-18 12:05:27.084054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:37:01.436 [2024-11-18 12:05:27.098678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfb480 00:37:01.436 [2024-11-18 12:05:27.100144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:13567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.436 [2024-11-18 12:05:27.100187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:37:01.436 15695.00 IOPS, 61.31 MiB/s [2024-11-18T11:05:27.321Z] [2024-11-18 12:05:27.115552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf2948 00:37:01.436 [2024-11-18 12:05:27.117010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:6180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.436 [2024-11-18 
12:05:27.117063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:37:01.436 [2024-11-18 12:05:27.130560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf0350 00:37:01.436 [2024-11-18 12:05:27.132032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:2283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.436 [2024-11-18 12:05:27.132073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:01.436 [2024-11-18 12:05:27.146447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be8088 00:37:01.436 [2024-11-18 12:05:27.147763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:3527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.436 [2024-11-18 12:05:27.147817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:37:01.436 [2024-11-18 12:05:27.162686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be5ec8 00:37:01.436 [2024-11-18 12:05:27.164462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:16605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.436 [2024-11-18 12:05:27.164514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:37:01.436 [2024-11-18 12:05:27.182516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bed0b0 00:37:01.436 [2024-11-18 12:05:27.185152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:11066 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.436 [2024-11-18 12:05:27.185196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:37:01.436 [2024-11-18 12:05:27.194173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf2948 00:37:01.436 [2024-11-18 12:05:27.195570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:21890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.436 [2024-11-18 12:05:27.195626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:37:01.436 [2024-11-18 12:05:27.215039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf3a28 00:37:01.436 [2024-11-18 12:05:27.217472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:10489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.436 [2024-11-18 12:05:27.217521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:37:01.436 [2024-11-18 12:05:27.226294] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf9b30 00:37:01.436 [2024-11-18 12:05:27.227548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:24267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.436 [2024-11-18 12:05:27.227588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:37:01.436 [2024-11-18 12:05:27.246691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf2d80 00:37:01.436 [2024-11-18 12:05:27.249138] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:25202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.436 [2024-11-18 12:05:27.249182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:37:01.436 [2024-11-18 12:05:27.257600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf2d80 00:37:01.436 [2024-11-18 12:05:27.258754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.436 [2024-11-18 12:05:27.258808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:37:01.436 [2024-11-18 12:05:27.277664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf6cc8 00:37:01.437 [2024-11-18 12:05:27.279684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:2750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.437 [2024-11-18 12:05:27.279738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:37:01.437 [2024-11-18 12:05:27.292462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf1ca0 00:37:01.437 [2024-11-18 12:05:27.294804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:2395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.437 [2024-11-18 12:05:27.294845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:01.437 [2024-11-18 12:05:27.310632] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with 
pdu=0x200016bfc998 00:37:01.437 [2024-11-18 12:05:27.312650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:22602 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.437 [2024-11-18 12:05:27.312703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:37:01.695 [2024-11-18 12:05:27.323246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf20d8 00:37:01.695 [2024-11-18 12:05:27.324448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:13931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.695 [2024-11-18 12:05:27.324499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:01.695 [2024-11-18 12:05:27.340684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf5378 00:37:01.695 [2024-11-18 12:05:27.341655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:3980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.695 [2024-11-18 12:05:27.341711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:37:01.695 [2024-11-18 12:05:27.360223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf1430 00:37:01.695 [2024-11-18 12:05:27.362712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:1295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.695 [2024-11-18 12:05:27.362753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:37:01.695 [2024-11-18 12:05:27.371967] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf1430 00:37:01.695 [2024-11-18 12:05:27.373154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:15535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.695 [2024-11-18 12:05:27.373197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:01.695 [2024-11-18 12:05:27.392105] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016beb328 00:37:01.695 [2024-11-18 12:05:27.393969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:23873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.695 [2024-11-18 12:05:27.394036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:01.695 [2024-11-18 12:05:27.407454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be73e0 00:37:01.695 [2024-11-18 12:05:27.409922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:72 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.695 [2024-11-18 12:05:27.409961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:37:01.695 [2024-11-18 12:05:27.425667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be49b0 00:37:01.695 [2024-11-18 12:05:27.427778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:9816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.695 [2024-11-18 12:05:27.427830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 
sqhd:0066 p:0 m:0 dnr:0 00:37:01.695 [2024-11-18 12:05:27.442272] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfc128 00:37:01.695 [2024-11-18 12:05:27.444591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:11008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.695 [2024-11-18 12:05:27.444631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:37:01.695 [2024-11-18 12:05:27.457958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfb480 00:37:01.695 [2024-11-18 12:05:27.459973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:9277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.695 [2024-11-18 12:05:27.460012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:37:01.695 [2024-11-18 12:05:27.474581] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf7da8 00:37:01.695 [2024-11-18 12:05:27.476233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:16455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.695 [2024-11-18 12:05:27.476277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:37:01.695 [2024-11-18 12:05:27.489837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:37:01.695 [2024-11-18 12:05:27.492119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:14231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.695 [2024-11-18 12:05:27.492158] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:37:01.695 [2024-11-18 12:05:27.508949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be2c28 00:37:01.695 [2024-11-18 12:05:27.511341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:16392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.695 [2024-11-18 12:05:27.511385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:37:01.695 [2024-11-18 12:05:27.520597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bed920 00:37:01.695 [2024-11-18 12:05:27.521759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:2298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.695 [2024-11-18 12:05:27.521824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:37:01.695 [2024-11-18 12:05:27.540395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:37:01.695 [2024-11-18 12:05:27.542418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.695 [2024-11-18 12:05:27.542463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:37:01.695 [2024-11-18 12:05:27.555253] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bea680 00:37:01.695 [2024-11-18 12:05:27.556962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:8990 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.695 
[2024-11-18 12:05:27.557002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:37:01.695 [2024-11-18 12:05:27.571418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be3d08 00:37:01.696 [2024-11-18 12:05:27.573039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:4101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.696 [2024-11-18 12:05:27.573082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:37:01.954 [2024-11-18 12:05:27.587096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfd208 00:37:01.954 [2024-11-18 12:05:27.588166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:21507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.954 [2024-11-18 12:05:27.588220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:37:01.954 [2024-11-18 12:05:27.602168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf4f40 00:37:01.954 [2024-11-18 12:05:27.603062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.954 [2024-11-18 12:05:27.603102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:37:01.954 [2024-11-18 12:05:27.618483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016beaab8 00:37:01.954 [2024-11-18 12:05:27.620009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 
lba:24878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.954 [2024-11-18 12:05:27.620054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:37:01.954 [2024-11-18 12:05:27.635882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be38d0 00:37:01.954 [2024-11-18 12:05:27.637175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.954 [2024-11-18 12:05:27.637231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:37:01.954 [2024-11-18 12:05:27.651006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be23b8 00:37:01.954 [2024-11-18 12:05:27.652115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:9936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.954 [2024-11-18 12:05:27.652154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:37:01.954 [2024-11-18 12:05:27.667279] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfc560 00:37:01.954 [2024-11-18 12:05:27.668773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:3009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.954 [2024-11-18 12:05:27.668829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:37:01.954 [2024-11-18 12:05:27.683085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be95a0 00:37:01.954 [2024-11-18 12:05:27.684879] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:7673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.954 [2024-11-18 12:05:27.684924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:37:01.954 [2024-11-18 12:05:27.703073] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf7100 00:37:01.954 [2024-11-18 12:05:27.705595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:13065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.955 [2024-11-18 12:05:27.705646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:01.955 [2024-11-18 12:05:27.715736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfd208 00:37:01.955 [2024-11-18 12:05:27.717433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:15818 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.955 [2024-11-18 12:05:27.717476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:37:01.955 [2024-11-18 12:05:27.735448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016beff18 00:37:01.955 [2024-11-18 12:05:27.738029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:20709 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.955 [2024-11-18 12:05:27.738073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:01.955 [2024-11-18 12:05:27.747165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with 
pdu=0x200016beee38 00:37:01.955 [2024-11-18 12:05:27.748522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:24481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.955 [2024-11-18 12:05:27.748578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:37:01.955 [2024-11-18 12:05:27.767151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bdf550 00:37:01.955 [2024-11-18 12:05:27.769394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.955 [2024-11-18 12:05:27.769437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:37:01.955 [2024-11-18 12:05:27.782207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bef6a8 00:37:01.955 [2024-11-18 12:05:27.784211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:587 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.955 [2024-11-18 12:05:27.784276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:37:01.955 [2024-11-18 12:05:27.798586] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be0ea0 00:37:01.955 [2024-11-18 12:05:27.800382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17545 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.955 [2024-11-18 12:05:27.800425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:37:01.955 [2024-11-18 12:05:27.814812] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016beea00 00:37:01.955 [2024-11-18 12:05:27.816897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:22582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.955 [2024-11-18 12:05:27.816960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:37:01.955 [2024-11-18 12:05:27.832433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bed0b0 00:37:01.955 [2024-11-18 12:05:27.834676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:3764 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.955 [2024-11-18 12:05:27.834730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:37:02.213 [2024-11-18 12:05:27.847563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be9e10 00:37:02.213 [2024-11-18 12:05:27.849577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:16540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:02.213 [2024-11-18 12:05:27.849629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:37:02.213 [2024-11-18 12:05:27.863853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016beea00 00:37:02.213 [2024-11-18 12:05:27.865524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:5792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:02.213 [2024-11-18 12:05:27.865590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 
00:37:02.213 [2024-11-18 12:05:27.879828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be5ec8 00:37:02.213 [2024-11-18 12:05:27.881690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:15290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:02.213 [2024-11-18 12:05:27.881744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:37:02.213 [2024-11-18 12:05:27.896200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bef6a8 00:37:02.213 [2024-11-18 12:05:27.897442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:23913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:02.213 [2024-11-18 12:05:27.897487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:37:02.213 [2024-11-18 12:05:27.912267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bdece0 00:37:02.214 [2024-11-18 12:05:27.913852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:8696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:02.214 [2024-11-18 12:05:27.913907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:37:02.214 [2024-11-18 12:05:27.928047] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf1868 00:37:02.214 [2024-11-18 12:05:27.929907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:20370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:02.214 [2024-11-18 12:05:27.929952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:37:02.214 [2024-11-18 12:05:27.944100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf2948 00:37:02.214 [2024-11-18 12:05:27.945892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:3616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:02.214 [2024-11-18 12:05:27.945937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:37:02.214 [2024-11-18 12:05:27.959637] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf5378 00:37:02.214 [2024-11-18 12:05:27.962103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:23733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:02.214 [2024-11-18 12:05:27.962141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:37:02.214 [2024-11-18 12:05:27.977707] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bed0b0 00:37:02.214 [2024-11-18 12:05:27.979864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:02.214 [2024-11-18 12:05:27.979923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:37:02.214 [2024-11-18 12:05:27.992356] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be9e10 00:37:02.214 [2024-11-18 12:05:27.994964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:5684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:02.214 [2024-11-18 12:05:27.995009] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:37:02.214 [2024-11-18 12:05:28.010151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf5378 00:37:02.214 [2024-11-18 12:05:28.012365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:10567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:02.214 [2024-11-18 12:05:28.012409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:37:02.214 [2024-11-18 12:05:28.021846] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf8a50 00:37:02.214 [2024-11-18 12:05:28.022852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:16614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:02.214 [2024-11-18 12:05:28.022896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:37:02.214 [2024-11-18 12:05:28.037988] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf9f68 00:37:02.214 [2024-11-18 12:05:28.038962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:23396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:02.214 [2024-11-18 12:05:28.039008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:37:02.214 [2024-11-18 12:05:28.058330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfac10 00:37:02.214 [2024-11-18 12:05:28.060548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:13770 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:37:02.214 [2024-11-18 12:05:28.060588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:37:02.214 [2024-11-18 12:05:28.071062] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfac10 00:37:02.214 [2024-11-18 12:05:28.072477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:11011 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:02.214 [2024-11-18 12:05:28.072542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:37:02.214 [2024-11-18 12:05:28.090770] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfb8b8 00:37:02.214 [2024-11-18 12:05:28.093013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:3511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:02.214 [2024-11-18 12:05:28.093066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:37:02.472 [2024-11-18 12:05:28.102484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016befae0 00:37:02.472 [2024-11-18 12:05:28.103535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:4247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:02.472 [2024-11-18 12:05:28.103590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:37:02.472 15719.00 IOPS, 61.40 MiB/s [2024-11-18T11:05:28.357Z] [2024-11-18 12:05:28.121115] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be8088 00:37:02.472 [2024-11-18 12:05:28.122145] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:24586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:02.472 [2024-11-18 12:05:28.122190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:37:02.472 00:37:02.472 Latency(us) 00:37:02.472 [2024-11-18T11:05:28.357Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:02.472 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:02.472 nvme0n1 : 2.01 15734.11 61.46 0.00 0.00 8116.89 3713.71 20777.34 00:37:02.472 [2024-11-18T11:05:28.357Z] =================================================================================================================== 00:37:02.472 [2024-11-18T11:05:28.357Z] Total : 15734.11 61.46 0.00 0.00 8116.89 3713.71 20777.34 00:37:02.472 { 00:37:02.472 "results": [ 00:37:02.472 { 00:37:02.472 "job": "nvme0n1", 00:37:02.472 "core_mask": "0x2", 00:37:02.472 "workload": "randwrite", 00:37:02.472 "status": "finished", 00:37:02.472 "queue_depth": 128, 00:37:02.472 "io_size": 4096, 00:37:02.472 "runtime": 2.009202, 00:37:02.472 "iops": 15734.107371981512, 00:37:02.472 "mibps": 61.46135692180278, 00:37:02.472 "io_failed": 0, 00:37:02.472 "io_timeout": 0, 00:37:02.472 "avg_latency_us": 8116.88955903045, 00:37:02.472 "min_latency_us": 3713.7066666666665, 00:37:02.472 "max_latency_us": 20777.33925925926 00:37:02.472 } 00:37:02.472 ], 00:37:02.472 "core_count": 1 00:37:02.472 } 00:37:02.472 12:05:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:37:02.472 12:05:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:37:02.472 12:05:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
bdev_get_iostat -b nvme0n1 00:37:02.472 12:05:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:37:02.472 | .driver_specific 00:37:02.472 | .nvme_error 00:37:02.472 | .status_code 00:37:02.472 | .command_transient_transport_error' 00:37:02.731 12:05:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 124 > 0 )) 00:37:02.731 12:05:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3133740 00:37:02.731 12:05:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3133740 ']' 00:37:02.731 12:05:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3133740 00:37:02.731 12:05:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:37:02.731 12:05:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:02.731 12:05:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3133740 00:37:02.731 12:05:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:02.731 12:05:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:02.731 12:05:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3133740' 00:37:02.731 killing process with pid 3133740 00:37:02.731 12:05:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3133740 00:37:02.731 Received shutdown signal, test time was about 2.000000 seconds 00:37:02.731 00:37:02.731 Latency(us) 00:37:02.731 [2024-11-18T11:05:28.616Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:02.731 [2024-11-18T11:05:28.616Z] 
=================================================================================================================== 00:37:02.731 [2024-11-18T11:05:28.616Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:02.731 12:05:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3133740 00:37:03.665 12:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:37:03.665 12:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:37:03.665 12:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:37:03.665 12:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:37:03.665 12:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:37:03.665 12:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3134289 00:37:03.665 12:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:37:03.665 12:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3134289 /var/tmp/bperf.sock 00:37:03.665 12:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3134289 ']' 00:37:03.665 12:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:03.666 12:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:03.666 12:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:37:03.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:03.666 12:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:03.666 12:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:03.666 [2024-11-18 12:05:29.416322] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:37:03.666 [2024-11-18 12:05:29.416472] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3134289 ] 00:37:03.666 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:03.666 Zero copy mechanism will not be used. 00:37:03.666 [2024-11-18 12:05:29.550457] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:03.923 [2024-11-18 12:05:29.680435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:04.857 12:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:04.857 12:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:37:04.857 12:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:37:04.857 12:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:37:04.857 12:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:37:04.857 12:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
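For context on the `get_transient_errcount` step logged above: the test feeds `bdev_get_iostat` output through a jq filter (`.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error`) and asserts the count is nonzero. A minimal Python equivalent over a hypothetical iostat reply (the JSON shape follows the jq path in the log; the sample values are illustrative, not captured from this run):

```python
import json

# Hypothetical bdev_get_iostat reply, shaped after the jq path used in the log:
# .bdevs[0] | .driver_specific | .nvme_error | .status_code
#          | .command_transient_transport_error
iostat_reply = json.dumps({
    "bdevs": [
        {
            "name": "nvme0n1",
            "driver_specific": {
                "nvme_error": {
                    "status_code": {
                        "command_transient_transport_error": 124
                    }
                }
            }
        }
    ]
})

def get_transient_errcount(reply: str) -> int:
    """Mirror of the jq filter: transient transport error count on bdev 0."""
    stats = json.loads(reply)
    return stats["bdevs"][0]["driver_specific"]["nvme_error"]["status_code"][
        "command_transient_transport_error"
    ]

print(get_transient_errcount(iostat_reply))  # the run above observed 124
```

The test then checks `(( count > 0 ))`, i.e. that at least one digest error was surfaced as a transient transport error.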
common/autotest_common.sh@563 -- # xtrace_disable 00:37:04.857 12:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:04.857 12:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:04.857 12:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:04.857 12:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:05.423 nvme0n1 00:37:05.423 12:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:37:05.423 12:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:05.423 12:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:05.423 12:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:05.423 12:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:37:05.423 12:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:05.423 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:05.423 Zero copy mechanism will not be used. 00:37:05.423 Running I/O for 2 seconds... 
00:37:05.423 [2024-11-18 12:05:31.272545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.423 [2024-11-18 12:05:31.272682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.423 [2024-11-18 12:05:31.272735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:37:05.423 [2024-11-18 12:05:31.279762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.423 [2024-11-18 12:05:31.279867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.423 [2024-11-18 12:05:31.279911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:37:05.423 [2024-11-18 12:05:31.286668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.423 [2024-11-18 12:05:31.286768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.423 [2024-11-18 12:05:31.286809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:37:05.423 [2024-11-18 12:05:31.293657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.423 [2024-11-18 12:05:31.293758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.423 [2024-11-18 12:05:31.293808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:37:05.423 [2024-11-18 12:05:31.300558] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.423 [2024-11-18 12:05:31.300662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.423 [2024-11-18 12:05:31.300703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:37:05.423 [2024-11-18 12:05:31.307562] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.423 [2024-11-18 12:05:31.307667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.423 [2024-11-18 12:05:31.307708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:37:05.682 [2024-11-18 12:05:31.314419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.682 [2024-11-18 12:05:31.314642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.682 [2024-11-18 12:05:31.314684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:37:05.682 [2024-11-18 12:05:31.322667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.682 [2024-11-18 12:05:31.322796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.682 [2024-11-18 
12:05:31.322835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:37:05.682 [2024-11-18 12:05:31.329857] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.682 [2024-11-18 12:05:31.330076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.682 [2024-11-18 12:05:31.330117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:37:05.682 [2024-11-18 12:05:31.336761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.682 [2024-11-18 12:05:31.336985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.682 [2024-11-18 12:05:31.337026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:37:05.682 [2024-11-18 12:05:31.343644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.682 [2024-11-18 12:05:31.343798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.682 [2024-11-18 12:05:31.343837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:37:05.682 [2024-11-18 12:05:31.351016] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.682 [2024-11-18 12:05:31.351239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 
lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.682 [2024-11-18 12:05:31.351280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:37:05.682 [2024-11-18 12:05:31.358934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.682 [2024-11-18 12:05:31.359055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.682 [2024-11-18 12:05:31.359094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:37:05.682 [2024-11-18 12:05:31.366464] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.682 [2024-11-18 12:05:31.366595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.682 [2024-11-18 12:05:31.366643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:37:05.682 [2024-11-18 12:05:31.373511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.682 [2024-11-18 12:05:31.373732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.682 [2024-11-18 12:05:31.373790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:37:05.682 [2024-11-18 12:05:31.380572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.682 [2024-11-18 12:05:31.380789] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.682 [2024-11-18 12:05:31.380831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:37:05.682 [2024-11-18 12:05:31.387640] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.682 [2024-11-18 12:05:31.387861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.682 [2024-11-18 12:05:31.387902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:37:05.682 [2024-11-18 12:05:31.394442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.682 [2024-11-18 12:05:31.394605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.682 [2024-11-18 12:05:31.394643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:37:05.682 [2024-11-18 12:05:31.401197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.682 [2024-11-18 12:05:31.401370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.682 [2024-11-18 12:05:31.401408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:37:05.682 [2024-11-18 12:05:31.408319] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.682 [2024-11-18 12:05:31.408514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.682 [2024-11-18 12:05:31.408553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:37:05.682 [2024-11-18 12:05:31.415446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.682 [2024-11-18 12:05:31.415585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.682 [2024-11-18 12:05:31.415624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:37:05.683 [2024-11-18 12:05:31.422390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.683 [2024-11-18 12:05:31.422523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.683 [2024-11-18 12:05:31.422562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:37:05.683 [2024-11-18 12:05:31.429746] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.683 [2024-11-18 12:05:31.429962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.683 [2024-11-18 12:05:31.430001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:37:05.683 [2024-11-18 
12:05:31.437268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.683 [2024-11-18 12:05:31.437483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.683 [2024-11-18 12:05:31.437532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:37:05.683 [2024-11-18 12:05:31.444425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.683 [2024-11-18 12:05:31.444540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.683 [2024-11-18 12:05:31.444579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:37:05.683 [2024-11-18 12:05:31.451375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.683 [2024-11-18 12:05:31.451528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.683 [2024-11-18 12:05:31.451567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:37:05.683 [2024-11-18 12:05:31.458614] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.683 [2024-11-18 12:05:31.458823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.683 [2024-11-18 12:05:31.458863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:12 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:37:05.683 [2024-11-18 12:05:31.465662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.683 [2024-11-18 12:05:31.465814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.683 [2024-11-18 12:05:31.465853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:37:05.683 [2024-11-18 12:05:31.472415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.683 [2024-11-18 12:05:31.472549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.683 [2024-11-18 12:05:31.472588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:37:05.683 [2024-11-18 12:05:31.479297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.683 [2024-11-18 12:05:31.479445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.683 [2024-11-18 12:05:31.479484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:37:05.683 [2024-11-18 12:05:31.486337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.683 [2024-11-18 12:05:31.486522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.683 [2024-11-18 12:05:31.486571] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:37:05.683 [2024-11-18 12:05:31.493353] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.683 [2024-11-18 12:05:31.493509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.683 [2024-11-18 12:05:31.493548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:37:05.683 [2024-11-18 12:05:31.500249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.683 [2024-11-18 12:05:31.500448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.683 [2024-11-18 12:05:31.500488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:37:05.683 [2024-11-18 12:05:31.507262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.683 [2024-11-18 12:05:31.507454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.683 [2024-11-18 12:05:31.507500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:37:05.683 [2024-11-18 12:05:31.514030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.683 [2024-11-18 12:05:31.514199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:37:05.683 [2024-11-18 12:05:31.514237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:37:05.683 [2024-11-18 12:05:31.520765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.683 [2024-11-18 12:05:31.520919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.683 [2024-11-18 12:05:31.520958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:37:05.683 [2024-11-18 12:05:31.527484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.683 [2024-11-18 12:05:31.527662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.683 [2024-11-18 12:05:31.527700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:37:05.683 [2024-11-18 12:05:31.534435] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.683 [2024-11-18 12:05:31.534588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.683 [2024-11-18 12:05:31.534629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:37:05.683 [2024-11-18 12:05:31.541285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.683 [2024-11-18 12:05:31.541517] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:05.683 [2024-11-18 12:05:31.541555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:37:05.683 [2024-11-18 12:05:31.548152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:05.683 [2024-11-18 12:05:31.548337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:05.683 [2024-11-18 12:05:31.548375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:37:05.683 [2024-11-18 12:05:31.555142] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:05.683 [2024-11-18 12:05:31.555305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:05.683 [2024-11-18 12:05:31.555343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:37:05.683 [2024-11-18 12:05:31.562032] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:05.683 [2024-11-18 12:05:31.562175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:05.683 [2024-11-18 12:05:31.562214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:37:05.942 [2024-11-18 12:05:31.569030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:05.942 [2024-11-18 12:05:31.569198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:05.942 [2024-11-18 12:05:31.569238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:37:05.942 [2024-11-18 12:05:31.576086] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:05.942 [2024-11-18 12:05:31.576250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:05.942 [2024-11-18 12:05:31.576289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:37:05.942 [2024-11-18 12:05:31.583120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:05.942 [2024-11-18 12:05:31.583333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:05.942 [2024-11-18 12:05:31.583375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:37:05.942 [2024-11-18 12:05:31.589989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:05.942 [2024-11-18 12:05:31.590141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:05.942 [2024-11-18 12:05:31.590179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:37:05.942 [2024-11-18 12:05:31.597022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:05.942 [2024-11-18 12:05:31.597164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:05.942 [2024-11-18 12:05:31.597203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:37:05.942 [2024-11-18 12:05:31.604047] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:05.942 [2024-11-18 12:05:31.604206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:05.942 [2024-11-18 12:05:31.604246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:37:05.942 [2024-11-18 12:05:31.611026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:05.942 [2024-11-18 12:05:31.611187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:05.942 [2024-11-18 12:05:31.611228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:37:05.942 [2024-11-18 12:05:31.618028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:05.942 [2024-11-18 12:05:31.618173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:05.942 [2024-11-18 12:05:31.618212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:37:05.942 [2024-11-18 12:05:31.624884] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:05.942 [2024-11-18 12:05:31.625053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:05.942 [2024-11-18 12:05:31.625091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:37:05.942 [2024-11-18 12:05:31.631751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:05.942 [2024-11-18 12:05:31.631936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:05.942 [2024-11-18 12:05:31.631976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:37:05.942 [2024-11-18 12:05:31.638405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:05.942 [2024-11-18 12:05:31.638561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:05.942 [2024-11-18 12:05:31.638600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:37:05.942 [2024-11-18 12:05:31.645386] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:05.942 [2024-11-18 12:05:31.645545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:05.942 [2024-11-18 12:05:31.645585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:37:05.942 [2024-11-18 12:05:31.652158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:05.942 [2024-11-18 12:05:31.652326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:05.942 [2024-11-18 12:05:31.652364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:37:05.942 [2024-11-18 12:05:31.658978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:05.942 [2024-11-18 12:05:31.659092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:05.942 [2024-11-18 12:05:31.659131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:37:05.942 [2024-11-18 12:05:31.665826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:05.942 [2024-11-18 12:05:31.665980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:05.942 [2024-11-18 12:05:31.666019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:37:05.942 [2024-11-18 12:05:31.672774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:05.942 [2024-11-18 12:05:31.672913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:05.942 [2024-11-18 12:05:31.672953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:37:05.942 [2024-11-18 12:05:31.679729] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:05.942 [2024-11-18 12:05:31.679892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:05.942 [2024-11-18 12:05:31.679932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:37:05.943 [2024-11-18 12:05:31.686706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:05.943 [2024-11-18 12:05:31.686837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:05.943 [2024-11-18 12:05:31.686876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:37:05.943 [2024-11-18 12:05:31.693212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:05.943 [2024-11-18 12:05:31.693326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:05.943 [2024-11-18 12:05:31.693365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:37:05.943 [2024-11-18 12:05:31.700149] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:05.943 [2024-11-18 12:05:31.700325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:05.943 [2024-11-18 12:05:31.700364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:37:05.943 [2024-11-18 12:05:31.707458] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:05.943 [2024-11-18 12:05:31.707610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:05.943 [2024-11-18 12:05:31.707650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:37:05.943 [2024-11-18 12:05:31.714433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:05.943 [2024-11-18 12:05:31.714590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:05.943 [2024-11-18 12:05:31.714629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:37:05.943 [2024-11-18 12:05:31.721211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:05.943 [2024-11-18 12:05:31.721393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:05.943 [2024-11-18 12:05:31.721432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:37:05.943 [2024-11-18 12:05:31.728201] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:05.943 [2024-11-18 12:05:31.728365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:05.943 [2024-11-18 12:05:31.728405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:37:05.943 [2024-11-18 12:05:31.734982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:05.943 [2024-11-18 12:05:31.735204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:05.943 [2024-11-18 12:05:31.735246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:37:05.943 [2024-11-18 12:05:31.741876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:05.943 [2024-11-18 12:05:31.742025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:05.943 [2024-11-18 12:05:31.742064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:37:05.943 [2024-11-18 12:05:31.749131] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:05.943 [2024-11-18 12:05:31.749230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:05.943 [2024-11-18 12:05:31.749269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:37:05.943 [2024-11-18 12:05:31.756383] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:05.943 [2024-11-18 12:05:31.756595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:05.943 [2024-11-18 12:05:31.756634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:37:05.943 [2024-11-18 12:05:31.763928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:05.943 [2024-11-18 12:05:31.764121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:05.943 [2024-11-18 12:05:31.764160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:37:05.943 [2024-11-18 12:05:31.770703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:05.943 [2024-11-18 12:05:31.770899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:05.943 [2024-11-18 12:05:31.770937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:37:05.943 [2024-11-18 12:05:31.778505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:05.943 [2024-11-18 12:05:31.778746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:05.943 [2024-11-18 12:05:31.778786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:37:05.943 [2024-11-18 12:05:31.785609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:05.943 [2024-11-18 12:05:31.785748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:05.943 [2024-11-18 12:05:31.785787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:37:05.943 [2024-11-18 12:05:31.792051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:05.943 [2024-11-18 12:05:31.792174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:05.943 [2024-11-18 12:05:31.792213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:37:05.943 [2024-11-18 12:05:31.798548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:05.943 [2024-11-18 12:05:31.798673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:05.943 [2024-11-18 12:05:31.798712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:37:05.943 [2024-11-18 12:05:31.804946] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:05.943 [2024-11-18 12:05:31.805068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:05.943 [2024-11-18 12:05:31.805106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:37:05.943 [2024-11-18 12:05:31.811804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:05.943 [2024-11-18 12:05:31.811934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:05.943 [2024-11-18 12:05:31.811974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:37:05.943 [2024-11-18 12:05:31.818623] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:05.943 [2024-11-18 12:05:31.818744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:05.943 [2024-11-18 12:05:31.818798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:37:05.943 [2024-11-18 12:05:31.825321] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:05.943 [2024-11-18 12:05:31.825449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:05.943 [2024-11-18 12:05:31.825488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:37:06.201 [2024-11-18 12:05:31.831812] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:06.201 [2024-11-18 12:05:31.831944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:06.201 [2024-11-18 12:05:31.831983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:37:06.201 [2024-11-18 12:05:31.838106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:06.201 [2024-11-18 12:05:31.838241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:06.201 [2024-11-18 12:05:31.838279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:37:06.201 [2024-11-18 12:05:31.844548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:06.201 [2024-11-18 12:05:31.844684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:06.201 [2024-11-18 12:05:31.844725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:37:06.201 [2024-11-18 12:05:31.850827] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:06.201 [2024-11-18 12:05:31.850956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:06.201 [2024-11-18 12:05:31.850997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:37:06.201 [2024-11-18 12:05:31.857409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:06.201 [2024-11-18 12:05:31.857540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:06.201 [2024-11-18 12:05:31.857578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:37:06.201 [2024-11-18 12:05:31.863993] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:06.201 [2024-11-18 12:05:31.864116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:06.201 [2024-11-18 12:05:31.864155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:37:06.201 [2024-11-18 12:05:31.870450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:06.201 [2024-11-18 12:05:31.870583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:06.201 [2024-11-18 12:05:31.870624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:37:06.201 [2024-11-18 12:05:31.877234] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:06.201 [2024-11-18 12:05:31.877357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:06.202 [2024-11-18 12:05:31.877398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:37:06.202 [2024-11-18 12:05:31.883690] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:06.202 [2024-11-18 12:05:31.883815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:06.202 [2024-11-18 12:05:31.883855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:37:06.202 [2024-11-18 12:05:31.890177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:06.202 [2024-11-18 12:05:31.890300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:06.202 [2024-11-18 12:05:31.890340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:37:06.202 [2024-11-18 12:05:31.896624] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:06.202 [2024-11-18 12:05:31.896745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:06.202 [2024-11-18 12:05:31.896797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:37:06.202 [2024-11-18 12:05:31.902994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:06.202 [2024-11-18 12:05:31.903124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:06.202 [2024-11-18 12:05:31.903165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:37:06.202 [2024-11-18 12:05:31.909396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:06.202 [2024-11-18 12:05:31.909528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:06.202 [2024-11-18 12:05:31.909568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:37:06.202 [2024-11-18 12:05:31.915914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:06.202 [2024-11-18 12:05:31.916037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:06.202 [2024-11-18 12:05:31.916077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:37:06.202 [2024-11-18 12:05:31.922268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:06.202 [2024-11-18 12:05:31.922407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:06.202 [2024-11-18 12:05:31.922448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:37:06.202 [2024-11-18 12:05:31.928565] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:06.202 [2024-11-18 12:05:31.928705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:06.202 [2024-11-18 12:05:31.928745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:37:06.202 [2024-11-18 12:05:31.934963] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:06.202 [2024-11-18 12:05:31.935095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:06.202 [2024-11-18 12:05:31.935136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:37:06.202 [2024-11-18 12:05:31.941409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:06.202 [2024-11-18 12:05:31.941544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:06.202 [2024-11-18 12:05:31.941585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:37:06.202 [2024-11-18 12:05:31.947739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:06.202 [2024-11-18 12:05:31.947878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:06.202 [2024-11-18 12:05:31.947918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:37:06.202 [2024-11-18 12:05:31.954237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:06.202 [2024-11-18 12:05:31.954360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:06.202 [2024-11-18 12:05:31.954400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:37:06.202 [2024-11-18 12:05:31.960723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:06.202 [2024-11-18 12:05:31.960856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:06.202 [2024-11-18 12:05:31.960894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:37:06.202 [2024-11-18 12:05:31.967072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:06.202 [2024-11-18 12:05:31.967201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:06.202 [2024-11-18 12:05:31.967242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:37:06.202 [2024-11-18 12:05:31.973642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:06.202 [2024-11-18 12:05:31.973766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:06.202 [2024-11-18 12:05:31.973807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:37:06.202 [2024-11-18 12:05:31.980078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:06.202 [2024-11-18 12:05:31.980197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:06.202 [2024-11-18 12:05:31.980235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:37:06.202 [2024-11-18 12:05:31.986424] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:06.202 [2024-11-18 12:05:31.986561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:06.202 [2024-11-18 12:05:31.986600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:37:06.202 [2024-11-18 12:05:31.992834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:06.202 [2024-11-18 12:05:31.992963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:06.202 [2024-11-18 12:05:31.993003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:37:06.202 [2024-11-18 12:05:31.999337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:06.202 [2024-11-18 12:05:31.999456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:06.202 [2024-11-18 12:05:31.999503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:37:06.202 [2024-11-18 12:05:32.005892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:06.202 [2024-11-18 12:05:32.006013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:06.202 [2024-11-18 12:05:32.006061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:37:06.202 [2024-11-18 12:05:32.012433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:06.202 [2024-11-18 12:05:32.012578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:06.202 [2024-11-18 12:05:32.012617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:37:06.202 [2024-11-18 12:05:32.018822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:06.202 [2024-11-18 12:05:32.018945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:06.202 [2024-11-18 12:05:32.018986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:37:06.202 [2024-11-18 12:05:32.025065] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:06.202 [2024-11-18 12:05:32.025214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:06.202 [2024-11-18 12:05:32.025253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:37:06.202 [2024-11-18 12:05:32.031592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:06.202 [2024-11-18 12:05:32.031719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:06.202 [2024-11-18 12:05:32.031758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:37:06.202 [2024-11-18 12:05:32.038384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:06.202 [2024-11-18 12:05:32.038585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:06.202 [2024-11-18 12:05:32.038632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:37:06.202 [2024-11-18 12:05:32.045295] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:06.202 [2024-11-18 12:05:32.045521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:06.203 [2024-11-18 12:05:32.045569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:37:06.203 [2024-11-18 12:05:32.052844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:06.203 [2024-11-18 12:05:32.053021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:06.203 [2024-11-18 12:05:32.053060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:37:06.203 [2024-11-18 12:05:32.060230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:06.203 [2024-11-18 12:05:32.060435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:06.203 [2024-11-18 12:05:32.060474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:37:06.203 [2024-11-18 12:05:32.068194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:06.203 [2024-11-18 12:05:32.068345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:06.203 [2024-11-18 12:05:32.068384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:37:06.203 [2024-11-18 12:05:32.075668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:06.203 [2024-11-18 12:05:32.075791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:06.203 [2024-11-18 12:05:32.075829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:37:06.203 [2024-11-18 12:05:32.083403] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:37:06.203 [2024-11-18 12:05:32.083554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:06.203 [2024-11-18 12:05:32.083594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:37:06.462
[2024-11-18 12:05:32.091273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.462 [2024-11-18 12:05:32.091528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.462 [2024-11-18 12:05:32.091569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:37:06.462 [2024-11-18 12:05:32.098158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.462 [2024-11-18 12:05:32.098406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.462 [2024-11-18 12:05:32.098450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:37:06.462 [2024-11-18 12:05:32.104137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.462 [2024-11-18 12:05:32.104291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.462 [2024-11-18 12:05:32.104330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:37:06.462 [2024-11-18 12:05:32.110295] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.462 [2024-11-18 12:05:32.110466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.462 [2024-11-18 12:05:32.110516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:37:06.462 [2024-11-18 12:05:32.116539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.462 [2024-11-18 12:05:32.116693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.462 [2024-11-18 12:05:32.116732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:37:06.462 [2024-11-18 12:05:32.123834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.462 [2024-11-18 12:05:32.123955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.462 [2024-11-18 12:05:32.124001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:37:06.462 [2024-11-18 12:05:32.130783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.462 [2024-11-18 12:05:32.130923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.462 [2024-11-18 12:05:32.130962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:37:06.462 [2024-11-18 12:05:32.136981] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.462 [2024-11-18 12:05:32.137104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.462 [2024-11-18 12:05:32.137142] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:37:06.462 [2024-11-18 12:05:32.143156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.462 [2024-11-18 12:05:32.143283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.462 [2024-11-18 12:05:32.143322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:37:06.462 [2024-11-18 12:05:32.149420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.462 [2024-11-18 12:05:32.149550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.462 [2024-11-18 12:05:32.149590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:37:06.462 [2024-11-18 12:05:32.155554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.462 [2024-11-18 12:05:32.155679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.462 [2024-11-18 12:05:32.155717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:37:06.462 [2024-11-18 12:05:32.162288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.462 [2024-11-18 12:05:32.162415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:21056 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.462 [2024-11-18 12:05:32.162453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:37:06.462 [2024-11-18 12:05:32.169516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.462 [2024-11-18 12:05:32.169716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.462 [2024-11-18 12:05:32.169756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:37:06.462 [2024-11-18 12:05:32.176729] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.462 [2024-11-18 12:05:32.176907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.462 [2024-11-18 12:05:32.176946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:37:06.462 [2024-11-18 12:05:32.184125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.463 [2024-11-18 12:05:32.184305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.463 [2024-11-18 12:05:32.184344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:37:06.463 [2024-11-18 12:05:32.191261] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.463 [2024-11-18 12:05:32.191447] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.463 [2024-11-18 12:05:32.191486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:37:06.463 [2024-11-18 12:05:32.198473] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.463 [2024-11-18 12:05:32.198660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.463 [2024-11-18 12:05:32.198699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:37:06.463 [2024-11-18 12:05:32.205667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.463 [2024-11-18 12:05:32.205786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.463 [2024-11-18 12:05:32.205825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:37:06.463 [2024-11-18 12:05:32.212906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.463 [2024-11-18 12:05:32.213086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.463 [2024-11-18 12:05:32.213126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:37:06.463 [2024-11-18 12:05:32.219814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x200016bff3c8 00:37:06.463 [2024-11-18 12:05:32.219968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.463 [2024-11-18 12:05:32.220007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:37:06.463 [2024-11-18 12:05:32.226870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.463 [2024-11-18 12:05:32.227074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.463 [2024-11-18 12:05:32.227113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:37:06.463 [2024-11-18 12:05:32.234533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.463 [2024-11-18 12:05:32.234689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.463 [2024-11-18 12:05:32.234728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:37:06.463 [2024-11-18 12:05:32.242205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.463 [2024-11-18 12:05:32.242348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.463 [2024-11-18 12:05:32.242396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:37:06.463 [2024-11-18 12:05:32.249067] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.463 [2024-11-18 12:05:32.249318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.463 [2024-11-18 12:05:32.249374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:37:06.463 [2024-11-18 12:05:32.255189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.463 [2024-11-18 12:05:32.255313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.463 [2024-11-18 12:05:32.255351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:37:06.463 [2024-11-18 12:05:32.261470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.463 [2024-11-18 12:05:32.261624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.463 [2024-11-18 12:05:32.261663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:37:06.463 [2024-11-18 12:05:32.267884] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.463 [2024-11-18 12:05:32.269250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.463 [2024-11-18 12:05:32.269292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:12 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:37:06.463 4504.00 IOPS, 563.00 MiB/s [2024-11-18T11:05:32.348Z] [2024-11-18 12:05:32.274918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.463 [2024-11-18 12:05:32.275097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.463 [2024-11-18 12:05:32.275135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:37:06.463 [2024-11-18 12:05:32.281829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.463 [2024-11-18 12:05:32.282015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.463 [2024-11-18 12:05:32.282056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:37:06.463 [2024-11-18 12:05:32.289250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.463 [2024-11-18 12:05:32.289351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.463 [2024-11-18 12:05:32.289389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:37:06.463 [2024-11-18 12:05:32.296262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.463 [2024-11-18 12:05:32.296362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.463 
[2024-11-18 12:05:32.296400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:37:06.463 [2024-11-18 12:05:32.302448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.463 [2024-11-18 12:05:32.302563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.463 [2024-11-18 12:05:32.302603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:37:06.463 [2024-11-18 12:05:32.308499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.463 [2024-11-18 12:05:32.308607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.463 [2024-11-18 12:05:32.308647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:37:06.463 [2024-11-18 12:05:32.314625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.463 [2024-11-18 12:05:32.314727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.463 [2024-11-18 12:05:32.314768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:37:06.463 [2024-11-18 12:05:32.320874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.463 [2024-11-18 12:05:32.320988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 
nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.463 [2024-11-18 12:05:32.321026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:37:06.463 [2024-11-18 12:05:32.326849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.463 [2024-11-18 12:05:32.326954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.463 [2024-11-18 12:05:32.326992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:37:06.463 [2024-11-18 12:05:32.333528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.463 [2024-11-18 12:05:32.333660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.463 [2024-11-18 12:05:32.333697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:37:06.463 [2024-11-18 12:05:32.339645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.463 [2024-11-18 12:05:32.339746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.463 [2024-11-18 12:05:32.339783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:37:06.463 [2024-11-18 12:05:32.345901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.463 [2024-11-18 12:05:32.346003] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.463 [2024-11-18 12:05:32.346040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:37:06.723 [2024-11-18 12:05:32.351721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.723 [2024-11-18 12:05:32.351824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.723 [2024-11-18 12:05:32.351863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:37:06.723 [2024-11-18 12:05:32.357583] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.723 [2024-11-18 12:05:32.357693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.723 [2024-11-18 12:05:32.357731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:37:06.723 [2024-11-18 12:05:32.363445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.723 [2024-11-18 12:05:32.363569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.723 [2024-11-18 12:05:32.363608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:37:06.723 [2024-11-18 12:05:32.369267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.723 [2024-11-18 12:05:32.369377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.723 [2024-11-18 12:05:32.369416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:37:06.723 [2024-11-18 12:05:32.374967] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.723 [2024-11-18 12:05:32.375065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.723 [2024-11-18 12:05:32.375103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:37:06.723 [2024-11-18 12:05:32.380616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.723 [2024-11-18 12:05:32.380726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.723 [2024-11-18 12:05:32.380767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:37:06.723 [2024-11-18 12:05:32.386402] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.723 [2024-11-18 12:05:32.386512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.723 [2024-11-18 12:05:32.386556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:37:06.723 [2024-11-18 
12:05:32.392176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.723 [2024-11-18 12:05:32.392276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.723 [2024-11-18 12:05:32.392314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:37:06.723 [2024-11-18 12:05:32.398021] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.723 [2024-11-18 12:05:32.398118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.723 [2024-11-18 12:05:32.398155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:37:06.723 [2024-11-18 12:05:32.403931] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.723 [2024-11-18 12:05:32.404050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.723 [2024-11-18 12:05:32.404090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:37:06.723 [2024-11-18 12:05:32.409946] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.724 [2024-11-18 12:05:32.410049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.724 [2024-11-18 12:05:32.410087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:13 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:37:06.724 [2024-11-18 12:05:32.415647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.724 [2024-11-18 12:05:32.415746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.724 [2024-11-18 12:05:32.415785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:37:06.724 [2024-11-18 12:05:32.421454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.724 [2024-11-18 12:05:32.421574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.724 [2024-11-18 12:05:32.421622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:37:06.724 [2024-11-18 12:05:32.427308] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.724 [2024-11-18 12:05:32.427407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.724 [2024-11-18 12:05:32.427445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:37:06.724 [2024-11-18 12:05:32.433126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.724 [2024-11-18 12:05:32.433243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.724 [2024-11-18 12:05:32.433281] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:37:06.724 [2024-11-18 12:05:32.438808] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.724 [2024-11-18 12:05:32.438909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.724 [2024-11-18 12:05:32.438947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:37:06.724 [2024-11-18 12:05:32.444927] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.724 [2024-11-18 12:05:32.445042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.724 [2024-11-18 12:05:32.445081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:37:06.724 [2024-11-18 12:05:32.451107] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.724 [2024-11-18 12:05:32.451204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.724 [2024-11-18 12:05:32.451241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:37:06.724 [2024-11-18 12:05:32.457953] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.724 [2024-11-18 12:05:32.458180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:37:06.724 [2024-11-18 12:05:32.458221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:37:06.724 [2024-11-18 12:05:32.464156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.724 [2024-11-18 12:05:32.464328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.724 [2024-11-18 12:05:32.464372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:37:06.724 [2024-11-18 12:05:32.470466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.724 [2024-11-18 12:05:32.470593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.724 [2024-11-18 12:05:32.470632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:37:06.724 [2024-11-18 12:05:32.476710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.724 [2024-11-18 12:05:32.476853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.724 [2024-11-18 12:05:32.476896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:37:06.724 [2024-11-18 12:05:32.483214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.724 [2024-11-18 12:05:32.483389] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.724 [2024-11-18 12:05:32.483432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:37:06.724 [2024-11-18 12:05:32.489599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.724 [2024-11-18 12:05:32.489770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.724 [2024-11-18 12:05:32.489810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:37:06.724 [2024-11-18 12:05:32.496138] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.724 [2024-11-18 12:05:32.496334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.724 [2024-11-18 12:05:32.496378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:37:06.724 [2024-11-18 12:05:32.502375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.724 [2024-11-18 12:05:32.502544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.724 [2024-11-18 12:05:32.502584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:37:06.724 [2024-11-18 12:05:32.508508] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x200016bff3c8 00:37:06.724 [2024-11-18 12:05:32.508672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.724 [2024-11-18 12:05:32.508712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:37:06.724 [2024-11-18 12:05:32.515266] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.724 [2024-11-18 12:05:32.515473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.724 [2024-11-18 12:05:32.515525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:37:06.724 [2024-11-18 12:05:32.521577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.724 [2024-11-18 12:05:32.521724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.724 [2024-11-18 12:05:32.521763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:37:06.724 [2024-11-18 12:05:32.527836] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.724 [2024-11-18 12:05:32.527959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.724 [2024-11-18 12:05:32.528003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:37:06.724 [2024-11-18 12:05:32.533931] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.724 [2024-11-18 12:05:32.534097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.724 [2024-11-18 12:05:32.534141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:37:06.724 [2024-11-18 12:05:32.540462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.724 [2024-11-18 12:05:32.540719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.724 [2024-11-18 12:05:32.540758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:37:06.724 [2024-11-18 12:05:32.546926] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.724 [2024-11-18 12:05:32.547072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.724 [2024-11-18 12:05:32.547115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:37:06.724 [2024-11-18 12:05:32.553251] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.724 [2024-11-18 12:05:32.553447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.724 [2024-11-18 12:05:32.553501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:13 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:37:06.724 [2024-11-18 12:05:32.559403] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.724 [2024-11-18 12:05:32.559532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.724 [2024-11-18 12:05:32.559571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:37:06.724 [2024-11-18 12:05:32.565900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.724 [2024-11-18 12:05:32.566097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.724 [2024-11-18 12:05:32.566140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:37:06.724 [2024-11-18 12:05:32.572252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.724 [2024-11-18 12:05:32.572399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.725 [2024-11-18 12:05:32.572442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:37:06.725 [2024-11-18 12:05:32.578552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.725 [2024-11-18 12:05:32.578732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.725 [2024-11-18 12:05:32.578771] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:37:06.725 [2024-11-18 12:05:32.584786] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.725 [2024-11-18 12:05:32.585037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.725 [2024-11-18 12:05:32.585081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:37:06.725 [2024-11-18 12:05:32.591234] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.725 [2024-11-18 12:05:32.591442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.725 [2024-11-18 12:05:32.591483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:37:06.725 [2024-11-18 12:05:32.597653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.725 [2024-11-18 12:05:32.597790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.725 [2024-11-18 12:05:32.597846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:37:06.725 [2024-11-18 12:05:32.604005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.725 [2024-11-18 12:05:32.604202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:37:06.725 [2024-11-18 12:05:32.604243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:37:06.983 [2024-11-18 12:05:32.610571] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.983 [2024-11-18 12:05:32.610780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.984 [2024-11-18 12:05:32.610839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:37:06.984 [2024-11-18 12:05:32.616963] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.984 [2024-11-18 12:05:32.617211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.984 [2024-11-18 12:05:32.617265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:37:06.984 [2024-11-18 12:05:32.623122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.984 [2024-11-18 12:05:32.623296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.984 [2024-11-18 12:05:32.623338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:37:06.984 [2024-11-18 12:05:32.629595] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.984 [2024-11-18 12:05:32.629748] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.984 [2024-11-18 12:05:32.629786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:37:06.984 [2024-11-18 12:05:32.636058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.984 [2024-11-18 12:05:32.636282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.984 [2024-11-18 12:05:32.636325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:37:06.984 [2024-11-18 12:05:32.642584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.984 [2024-11-18 12:05:32.642800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.984 [2024-11-18 12:05:32.642859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:37:06.984 [2024-11-18 12:05:32.649040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.984 [2024-11-18 12:05:32.649257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.984 [2024-11-18 12:05:32.649318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:37:06.984 [2024-11-18 12:05:32.655360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.984 
[2024-11-18 12:05:32.655554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.984 [2024-11-18 12:05:32.655592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:37:06.984 [2024-11-18 12:05:32.661733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.984 [2024-11-18 12:05:32.661906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.984 [2024-11-18 12:05:32.661947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:37:06.984 [2024-11-18 12:05:32.668084] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.984 [2024-11-18 12:05:32.668255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.984 [2024-11-18 12:05:32.668296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:37:06.984 [2024-11-18 12:05:32.675308] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.984 [2024-11-18 12:05:32.675488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.984 [2024-11-18 12:05:32.675541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:37:06.984 [2024-11-18 12:05:32.682155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.984 [2024-11-18 12:05:32.682263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.984 [2024-11-18 12:05:32.682304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:37:06.984 [2024-11-18 12:05:32.689093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.984 [2024-11-18 12:05:32.689250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.984 [2024-11-18 12:05:32.689293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:37:06.984 [2024-11-18 12:05:32.696291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.984 [2024-11-18 12:05:32.696420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.984 [2024-11-18 12:05:32.696461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:37:06.984 [2024-11-18 12:05:32.703413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.984 [2024-11-18 12:05:32.703573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.984 [2024-11-18 12:05:32.703613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:37:06.984 
[2024-11-18 12:05:32.710621] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.984 [2024-11-18 12:05:32.710779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.984 [2024-11-18 12:05:32.710836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:37:06.984 [2024-11-18 12:05:32.717575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.984 [2024-11-18 12:05:32.717814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.984 [2024-11-18 12:05:32.717858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:37:06.984 [2024-11-18 12:05:32.724459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.984 [2024-11-18 12:05:32.724905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.984 [2024-11-18 12:05:32.724948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:37:06.984 [2024-11-18 12:05:32.731826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.984 [2024-11-18 12:05:32.732059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.984 [2024-11-18 12:05:32.732112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:37:06.984 [2024-11-18 12:05:32.738924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.984 [2024-11-18 12:05:32.739093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.984 [2024-11-18 12:05:32.739134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:37:06.984 [2024-11-18 12:05:32.745902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.984 [2024-11-18 12:05:32.746020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.984 [2024-11-18 12:05:32.746061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:37:06.984 [2024-11-18 12:05:32.753228] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.984 [2024-11-18 12:05:32.753430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.984 [2024-11-18 12:05:32.753471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:37:06.984 [2024-11-18 12:05:32.760512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.984 [2024-11-18 12:05:32.760701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.984 [2024-11-18 12:05:32.760738] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:37:06.984 [2024-11-18 12:05:32.767927] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.984 [2024-11-18 12:05:32.768075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.984 [2024-11-18 12:05:32.768116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:37:06.984 [2024-11-18 12:05:32.775058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.984 [2024-11-18 12:05:32.775217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.984 [2024-11-18 12:05:32.775259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:37:06.984 [2024-11-18 12:05:32.782232] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.984 [2024-11-18 12:05:32.782381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.984 [2024-11-18 12:05:32.782422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:37:06.984 [2024-11-18 12:05:32.789247] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.984 [2024-11-18 12:05:32.789383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:5664 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.984 [2024-11-18 12:05:32.789425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:37:06.985 [2024-11-18 12:05:32.796920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.985 [2024-11-18 12:05:32.797067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.985 [2024-11-18 12:05:32.797108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:37:06.985 [2024-11-18 12:05:32.804032] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.985 [2024-11-18 12:05:32.804183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.985 [2024-11-18 12:05:32.804224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:37:06.985 [2024-11-18 12:05:32.811274] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.985 [2024-11-18 12:05:32.811384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.985 [2024-11-18 12:05:32.811425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:37:06.985 [2024-11-18 12:05:32.818650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.985 [2024-11-18 12:05:32.818831] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.985 [2024-11-18 12:05:32.818873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:37:06.985 [2024-11-18 12:05:32.826066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.985 [2024-11-18 12:05:32.826250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.985 [2024-11-18 12:05:32.826292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:37:06.985 [2024-11-18 12:05:32.833185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.985 [2024-11-18 12:05:32.833389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.985 [2024-11-18 12:05:32.833431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:37:06.985 [2024-11-18 12:05:32.840701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.985 [2024-11-18 12:05:32.840851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.985 [2024-11-18 12:05:32.840892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:37:06.985 [2024-11-18 12:05:32.848038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x200016bff3c8 00:37:06.985 [2024-11-18 12:05:32.848240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.985 [2024-11-18 12:05:32.848282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:37:06.985 [2024-11-18 12:05:32.855735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.985 [2024-11-18 12:05:32.855924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.985 [2024-11-18 12:05:32.855975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:37:06.985 [2024-11-18 12:05:32.862677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.985 [2024-11-18 12:05:32.862838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.985 [2024-11-18 12:05:32.862879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:37:07.244 [2024-11-18 12:05:32.869865] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.244 [2024-11-18 12:05:32.870004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.244 [2024-11-18 12:05:32.870042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:37:07.244 [2024-11-18 12:05:32.877190] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.244 [2024-11-18 12:05:32.877363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.244 [2024-11-18 12:05:32.877405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:37:07.244 [2024-11-18 12:05:32.884323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.244 [2024-11-18 12:05:32.884531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.244 [2024-11-18 12:05:32.884586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:37:07.244 [2024-11-18 12:05:32.891391] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.244 [2024-11-18 12:05:32.891587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.244 [2024-11-18 12:05:32.891625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:37:07.244 [2024-11-18 12:05:32.898666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.244 [2024-11-18 12:05:32.898842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.244 [2024-11-18 12:05:32.898883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:13 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:37:07.244 [2024-11-18 12:05:32.905866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.244 [2024-11-18 12:05:32.906088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.244 [2024-11-18 12:05:32.906132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:37:07.244 [2024-11-18 12:05:32.912847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.244 [2024-11-18 12:05:32.913027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.244 [2024-11-18 12:05:32.913069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:37:07.244 [2024-11-18 12:05:32.919815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.244 [2024-11-18 12:05:32.920035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.244 [2024-11-18 12:05:32.920076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:37:07.244 [2024-11-18 12:05:32.926959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.244 [2024-11-18 12:05:32.927151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.244 [2024-11-18 12:05:32.927193] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:37:07.244 [2024-11-18 12:05:32.934188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.244 [2024-11-18 12:05:32.934417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.244 [2024-11-18 12:05:32.934462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:37:07.244 [2024-11-18 12:05:32.941405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.244 [2024-11-18 12:05:32.941572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.244 [2024-11-18 12:05:32.941609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:37:07.244 [2024-11-18 12:05:32.948507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.244 [2024-11-18 12:05:32.948740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.244 [2024-11-18 12:05:32.948799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:37:07.244 [2024-11-18 12:05:32.955682] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.244 [2024-11-18 12:05:32.955917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:37:07.244 [2024-11-18 12:05:32.955961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:37:07.244 [2024-11-18 12:05:32.963082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.244 [2024-11-18 12:05:32.963264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.244 [2024-11-18 12:05:32.963305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:37:07.244 [2024-11-18 12:05:32.970099] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.244 [2024-11-18 12:05:32.970289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.244 [2024-11-18 12:05:32.970331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:37:07.244 [2024-11-18 12:05:32.977308] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.244 [2024-11-18 12:05:32.977415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.244 [2024-11-18 12:05:32.977464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:37:07.244 [2024-11-18 12:05:32.984618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.244 [2024-11-18 12:05:32.984838] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.244 [2024-11-18 12:05:32.984881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:37:07.244 [2024-11-18 12:05:32.991954] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.244 [2024-11-18 12:05:32.992199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.244 [2024-11-18 12:05:32.992243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:37:07.244 [2024-11-18 12:05:32.999089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.244 [2024-11-18 12:05:32.999204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.244 [2024-11-18 12:05:32.999245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:37:07.244 [2024-11-18 12:05:33.006155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.244 [2024-11-18 12:05:33.006378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.244 [2024-11-18 12:05:33.006422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:37:07.244 [2024-11-18 12:05:33.013370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) 
with pdu=0x200016bff3c8 00:37:07.244 [2024-11-18 12:05:33.013536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.244 [2024-11-18 12:05:33.013593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:37:07.244 [2024-11-18 12:05:33.020582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.244 [2024-11-18 12:05:33.020796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.244 [2024-11-18 12:05:33.020855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:37:07.244 [2024-11-18 12:05:33.027810] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.244 [2024-11-18 12:05:33.027991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.245 [2024-11-18 12:05:33.028035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:37:07.245 [2024-11-18 12:05:33.035206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.245 [2024-11-18 12:05:33.035340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.245 [2024-11-18 12:05:33.035383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:37:07.245 [2024-11-18 12:05:33.042483] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.245 [2024-11-18 12:05:33.042751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.245 [2024-11-18 12:05:33.042809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:37:07.245 [2024-11-18 12:05:33.050001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.245 [2024-11-18 12:05:33.050219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.245 [2024-11-18 12:05:33.050262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:37:07.245 [2024-11-18 12:05:33.057038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.245 [2024-11-18 12:05:33.057192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.245 [2024-11-18 12:05:33.057236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:37:07.245 [2024-11-18 12:05:33.063731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.245 [2024-11-18 12:05:33.063836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.245 [2024-11-18 12:05:33.063878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 
cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:37:07.245 [2024-11-18 12:05:33.070739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.245 [2024-11-18 12:05:33.070973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.245 [2024-11-18 12:05:33.071015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:37:07.245 [2024-11-18 12:05:33.077670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.245 [2024-11-18 12:05:33.077881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.245 [2024-11-18 12:05:33.077924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:37:07.245 [2024-11-18 12:05:33.084888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.245 [2024-11-18 12:05:33.085081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.245 [2024-11-18 12:05:33.085124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:37:07.245 [2024-11-18 12:05:33.092208] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.245 [2024-11-18 12:05:33.092337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.245 [2024-11-18 12:05:33.092378] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:37:07.245 [2024-11-18 12:05:33.099204] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.245 [2024-11-18 12:05:33.099359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.245 [2024-11-18 12:05:33.099402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:37:07.245 [2024-11-18 12:05:33.106221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.245 [2024-11-18 12:05:33.106417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.245 [2024-11-18 12:05:33.106476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:37:07.245 [2024-11-18 12:05:33.113247] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.245 [2024-11-18 12:05:33.113434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.245 [2024-11-18 12:05:33.113477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:37:07.245 [2024-11-18 12:05:33.120438] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.245 [2024-11-18 12:05:33.120599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:37:07.245 [2024-11-18 12:05:33.120635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:37:07.245 [2024-11-18 12:05:33.127658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.245 [2024-11-18 12:05:33.127867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.245 [2024-11-18 12:05:33.127907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:37:07.504 [2024-11-18 12:05:33.135217] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.504 [2024-11-18 12:05:33.135368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.504 [2024-11-18 12:05:33.135412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:37:07.504 [2024-11-18 12:05:33.142556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.504 [2024-11-18 12:05:33.142767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.504 [2024-11-18 12:05:33.142824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:37:07.504 [2024-11-18 12:05:33.149695] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.504 [2024-11-18 12:05:33.149835] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.504 [2024-11-18 12:05:33.149876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:37:07.504 [2024-11-18 12:05:33.157154] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.504 [2024-11-18 12:05:33.157279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.504 [2024-11-18 12:05:33.157322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:37:07.504 [2024-11-18 12:05:33.164562] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.504 [2024-11-18 12:05:33.164700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.504 [2024-11-18 12:05:33.164737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:37:07.504 [2024-11-18 12:05:33.171642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.504 [2024-11-18 12:05:33.171859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.504 [2024-11-18 12:05:33.171900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:37:07.504 [2024-11-18 12:05:33.178629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x200016bff3c8 00:37:07.504 [2024-11-18 12:05:33.178891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.504 [2024-11-18 12:05:33.178934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:37:07.504 [2024-11-18 12:05:33.185882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.504 [2024-11-18 12:05:33.186040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.504 [2024-11-18 12:05:33.186082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:37:07.504 [2024-11-18 12:05:33.193161] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.504 [2024-11-18 12:05:33.193275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.504 [2024-11-18 12:05:33.193316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:37:07.504 [2024-11-18 12:05:33.200112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.504 [2024-11-18 12:05:33.200355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.504 [2024-11-18 12:05:33.200399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:37:07.504 [2024-11-18 12:05:33.207228] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.504 [2024-11-18 12:05:33.207395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.504 [2024-11-18 12:05:33.207437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:37:07.504 [2024-11-18 12:05:33.214677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.504 [2024-11-18 12:05:33.214820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.504 [2024-11-18 12:05:33.214862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:37:07.504 [2024-11-18 12:05:33.221822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.504 [2024-11-18 12:05:33.222027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.504 [2024-11-18 12:05:33.222068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:37:07.504 [2024-11-18 12:05:33.228974] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.504 [2024-11-18 12:05:33.229136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.504 [2024-11-18 12:05:33.229177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:13 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:37:07.504 [2024-11-18 12:05:33.235708] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.504 [2024-11-18 12:05:33.235826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.505 [2024-11-18 12:05:33.235869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:37:07.505 [2024-11-18 12:05:33.242666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.505 [2024-11-18 12:05:33.242871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.505 [2024-11-18 12:05:33.242913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:37:07.505 [2024-11-18 12:05:33.249815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.505 [2024-11-18 12:05:33.249953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.505 [2024-11-18 12:05:33.249994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:37:07.505 [2024-11-18 12:05:33.256857] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.505 [2024-11-18 12:05:33.256997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.505 [2024-11-18 12:05:33.257039] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:37:07.505 [2024-11-18 12:05:33.264095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.505 [2024-11-18 12:05:33.264306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.505 [2024-11-18 12:05:33.264347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:37:07.505 4544.50 IOPS, 568.06 MiB/s [2024-11-18T11:05:33.390Z] [2024-11-18 12:05:33.272393] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.505 [2024-11-18 12:05:33.272593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.505 [2024-11-18 12:05:33.272635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:37:07.505 00:37:07.505 Latency(us) 00:37:07.505 [2024-11-18T11:05:33.390Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:07.505 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:37:07.505 nvme0n1 : 2.00 4542.18 567.77 0.00 0.00 3512.31 2451.53 11213.94 00:37:07.505 [2024-11-18T11:05:33.390Z] =================================================================================================================== 00:37:07.505 [2024-11-18T11:05:33.390Z] Total : 4542.18 567.77 0.00 0.00 3512.31 2451.53 11213.94 00:37:07.505 { 00:37:07.505 "results": [ 00:37:07.505 { 00:37:07.505 "job": "nvme0n1", 00:37:07.505 "core_mask": "0x2", 00:37:07.505 "workload": "randwrite", 00:37:07.505 "status": "finished", 00:37:07.505 
"queue_depth": 16, 00:37:07.505 "io_size": 131072, 00:37:07.505 "runtime": 2.004546, 00:37:07.505 "iops": 4542.17563478214, 00:37:07.505 "mibps": 567.7719543477675, 00:37:07.505 "io_failed": 0, 00:37:07.505 "io_timeout": 0, 00:37:07.505 "avg_latency_us": 3512.314793255639, 00:37:07.505 "min_latency_us": 2451.531851851852, 00:37:07.505 "max_latency_us": 11213.937777777777 00:37:07.505 } 00:37:07.505 ], 00:37:07.505 "core_count": 1 00:37:07.505 } 00:37:07.505 12:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:37:07.505 12:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:37:07.505 12:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:37:07.505 12:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:37:07.505 | .driver_specific 00:37:07.505 | .nvme_error 00:37:07.505 | .status_code 00:37:07.505 | .command_transient_transport_error' 00:37:07.763 12:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 294 > 0 )) 00:37:07.763 12:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3134289 00:37:07.763 12:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3134289 ']' 00:37:07.763 12:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3134289 00:37:07.763 12:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:37:07.763 12:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:07.763 12:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3134289 00:37:07.763 12:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:07.763 12:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:07.763 12:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3134289' 00:37:07.763 killing process with pid 3134289 00:37:07.763 12:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3134289 00:37:07.763 Received shutdown signal, test time was about 2.000000 seconds 00:37:07.763 00:37:07.763 Latency(us) 00:37:07.763 [2024-11-18T11:05:33.648Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:07.763 [2024-11-18T11:05:33.648Z] =================================================================================================================== 00:37:07.763 [2024-11-18T11:05:33.648Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:07.763 12:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3134289 00:37:08.698 12:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 3132321 00:37:08.698 12:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3132321 ']' 00:37:08.698 12:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3132321 00:37:08.698 12:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:37:08.698 12:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:08.698 12:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3132321 
00:37:08.698 12:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:08.698 12:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:08.698 12:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3132321' 00:37:08.698 killing process with pid 3132321 00:37:08.698 12:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3132321 00:37:08.698 12:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3132321 00:37:10.073 00:37:10.073 real 0m23.214s 00:37:10.073 user 0m45.559s 00:37:10.073 sys 0m4.680s 00:37:10.073 12:05:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:10.073 12:05:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:10.073 ************************************ 00:37:10.073 END TEST nvmf_digest_error 00:37:10.073 ************************************ 00:37:10.073 12:05:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:37:10.073 12:05:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:37:10.073 12:05:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:10.073 12:05:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:37:10.073 12:05:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:10.073 12:05:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:37:10.073 12:05:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:10.073 12:05:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:10.073 rmmod nvme_tcp 00:37:10.073 rmmod 
nvme_fabrics 00:37:10.073 rmmod nvme_keyring 00:37:10.073 12:05:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:10.073 12:05:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:37:10.073 12:05:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:37:10.073 12:05:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 3132321 ']' 00:37:10.073 12:05:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 3132321 00:37:10.073 12:05:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 3132321 ']' 00:37:10.073 12:05:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 3132321 00:37:10.073 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3132321) - No such process 00:37:10.073 12:05:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 3132321 is not found' 00:37:10.073 Process with pid 3132321 is not found 00:37:10.073 12:05:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:10.073 12:05:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:10.073 12:05:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:10.073 12:05:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:37:10.073 12:05:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:37:10.073 12:05:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:10.073 12:05:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:37:10.073 12:05:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:10.073 12:05:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:10.073 12:05:35 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:10.073 12:05:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:10.073 12:05:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:12.606 12:05:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:12.606 00:37:12.606 real 0m52.496s 00:37:12.606 user 1m34.952s 00:37:12.606 sys 0m10.857s 00:37:12.606 12:05:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:12.606 12:05:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:37:12.606 ************************************ 00:37:12.606 END TEST nvmf_digest 00:37:12.606 ************************************ 00:37:12.606 12:05:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:37:12.606 12:05:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:37:12.606 12:05:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:37:12.606 12:05:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:37:12.606 12:05:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:37:12.606 12:05:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:12.606 12:05:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:37:12.606 ************************************ 00:37:12.606 START TEST nvmf_bdevperf 00:37:12.606 ************************************ 00:37:12.606 12:05:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:37:12.606 * Looking for test storage... 
00:37:12.606 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:37:12.606 12:05:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:12.606 12:05:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:37:12.606 12:05:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:12.606 12:05:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:12.606 12:05:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:12.606 12:05:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:12.606 12:05:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:12.606 12:05:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:37:12.606 12:05:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:37:12.606 12:05:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:37:12.606 12:05:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:37:12.606 12:05:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:37:12.606 12:05:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:37:12.606 12:05:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:37:12.606 12:05:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:12.606 12:05:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:37:12.606 12:05:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:37:12.606 12:05:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:12.606 12:05:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:12.606 12:05:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:37:12.606 12:05:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:37:12.606 12:05:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:12.606 12:05:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:37:12.606 12:05:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:37:12.606 12:05:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:37:12.606 12:05:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:37:12.606 12:05:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:12.606 12:05:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:37:12.606 12:05:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:37:12.606 12:05:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:12.606 12:05:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:12.606 12:05:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:37:12.606 12:05:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:12.606 12:05:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:12.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:12.606 --rc genhtml_branch_coverage=1 00:37:12.606 --rc genhtml_function_coverage=1 00:37:12.606 --rc genhtml_legend=1 00:37:12.606 --rc geninfo_all_blocks=1 00:37:12.606 --rc geninfo_unexecuted_blocks=1 00:37:12.606 00:37:12.606 ' 00:37:12.606 12:05:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- 
# LCOV_OPTS=' 00:37:12.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:12.606 --rc genhtml_branch_coverage=1 00:37:12.606 --rc genhtml_function_coverage=1 00:37:12.606 --rc genhtml_legend=1 00:37:12.606 --rc geninfo_all_blocks=1 00:37:12.606 --rc geninfo_unexecuted_blocks=1 00:37:12.606 00:37:12.606 ' 00:37:12.606 12:05:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:12.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:12.606 --rc genhtml_branch_coverage=1 00:37:12.606 --rc genhtml_function_coverage=1 00:37:12.606 --rc genhtml_legend=1 00:37:12.606 --rc geninfo_all_blocks=1 00:37:12.606 --rc geninfo_unexecuted_blocks=1 00:37:12.606 00:37:12.606 ' 00:37:12.606 12:05:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:12.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:12.607 --rc genhtml_branch_coverage=1 00:37:12.607 --rc genhtml_function_coverage=1 00:37:12.607 --rc genhtml_legend=1 00:37:12.607 --rc geninfo_all_blocks=1 00:37:12.607 --rc geninfo_unexecuted_blocks=1 00:37:12.607 00:37:12.607 ' 00:37:12.607 12:05:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:12.607 12:05:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:37:12.607 12:05:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:12.607 12:05:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:12.607 12:05:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:12.607 12:05:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:12.607 12:05:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:12.607 12:05:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:37:12.607 12:05:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:12.607 12:05:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:12.607 12:05:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:12.607 12:05:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:12.607 12:05:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:12.607 12:05:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:12.607 12:05:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:12.607 12:05:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:12.607 12:05:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:12.607 12:05:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:12.607 12:05:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:12.607 12:05:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:37:12.607 12:05:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:12.607 12:05:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:12.607 12:05:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:12.607 12:05:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:12.607 12:05:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:12.607 12:05:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:12.607 12:05:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 
-- # export PATH 00:37:12.607 12:05:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:12.607 12:05:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:37:12.607 12:05:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:12.607 12:05:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:12.607 12:05:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:12.607 12:05:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:12.607 12:05:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:12.607 12:05:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:12.607 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:12.607 12:05:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:12.607 12:05:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:12.607 12:05:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:12.607 12:05:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:12.607 12:05:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:12.607 12:05:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:37:12.607 12:05:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:12.607 12:05:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:12.607 12:05:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:12.607 12:05:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:12.607 12:05:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:12.607 12:05:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:12.607 12:05:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:12.607 12:05:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:12.607 12:05:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:12.607 12:05:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:12.607 12:05:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:37:12.607 12:05:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:14.508 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:14.508 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:37:14.508 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:14.508 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:14.508 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:14.508 12:05:40 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:14.508 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:14.508 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:37:14.508 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:14.508 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:37:14.508 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:37:14.508 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:37:14.508 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:37:14.508 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:37:14.508 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:37:14.508 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:14.508 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:14.508 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:14.508 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:14.508 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:14.508 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:14.508 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:14.508 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:14.508 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:14.508 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:14.508 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:14.508 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:14.508 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:14.508 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:14.508 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:14.508 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:14.508 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:14.508 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:14.508 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:14.508 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:37:14.508 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:14.508 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:14.508 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:14.508 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:14.508 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:14.508 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:14.508 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:14.508 
12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:37:14.508 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:37:14.508 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:14.508 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:14.508 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:14.508 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:14.508 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:14.508 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:14.508 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:14.509 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:14.509 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:14.509 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:14.509 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:14.509 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:14.509 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:14.509 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:14.509 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:14.509 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:37:14.509 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:14.509 12:05:40 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:14.509 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:14.509 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:14.509 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:14.509 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:14.509 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:14.509 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:14.509 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:14.509 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:14.509 Found net devices under 0000:0a:00.1: cvl_0_1 00:37:14.509 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:14.509 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:14.509 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:37:14.509 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:14.509 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:14.509 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:14.509 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:14.509 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:14.509 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:37:14.509 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:14.509 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:14.509 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:14.509 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:14.509 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:14.509 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:14.509 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:14.509 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:14.509 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:14.509 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:14.509 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:14.509 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:14.509 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:14.509 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:14.509 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:14.509 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:14.509 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:37:14.509 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:14.509 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:14.509 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:14.509 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:14.509 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.232 ms 00:37:14.509 00:37:14.509 --- 10.0.0.2 ping statistics --- 00:37:14.509 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:14.509 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:37:14.509 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:14.509 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:14.509 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.095 ms 00:37:14.509 00:37:14.509 --- 10.0.0.1 ping statistics --- 00:37:14.509 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:14.509 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:37:14.509 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:14.509 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:37:14.509 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:14.509 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:14.509 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:14.509 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:14.509 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:14.509 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:14.509 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:14.509 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:37:14.509 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:37:14.509 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:14.509 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:14.509 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:14.509 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=3137020 00:37:14.509 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:37:14.509 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 3137020 00:37:14.509 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 3137020 ']' 00:37:14.509 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:14.509 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:14.509 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:14.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:37:14.509 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:14.509 12:05:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:14.767 [2024-11-18 12:05:40.404624] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:37:14.767 [2024-11-18 12:05:40.404757] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:14.767 [2024-11-18 12:05:40.554972] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:15.026 [2024-11-18 12:05:40.680656] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:15.026 [2024-11-18 12:05:40.680731] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:15.026 [2024-11-18 12:05:40.680753] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:15.026 [2024-11-18 12:05:40.680775] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:15.026 [2024-11-18 12:05:40.680793] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:37:15.026 [2024-11-18 12:05:40.684525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:15.026 [2024-11-18 12:05:40.684661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:15.026 [2024-11-18 12:05:40.684662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:15.593 12:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:15.593 12:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:37:15.593 12:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:15.593 12:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:15.593 12:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:15.593 12:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:15.593 12:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:15.593 12:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:15.593 12:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:15.593 [2024-11-18 12:05:41.396866] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:15.593 12:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:15.593 12:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:15.593 12:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:15.593 12:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:15.851 Malloc0 00:37:15.851 12:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:37:15.851 12:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:15.851 12:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:15.851 12:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:15.851 12:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:15.851 12:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:15.851 12:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:15.851 12:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:15.851 12:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:15.851 12:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:15.851 12:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:15.851 12:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:15.851 [2024-11-18 12:05:41.508188] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:15.851 12:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:15.851 12:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:37:15.851 12:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:37:15.851 12:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:37:15.852 
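The target-side setup traced above (TCP transport, a 64 MiB malloc bdev, subsystem cnode1 with its namespace and a 10.0.0.2:4420 listener) can be sketched as a plain rpc.py sequence. This is a non-authoritative sketch: the `scripts/rpc.py` path is an assumption, and the commands are printed rather than executed so the sketch runs anywhere.

```shell
#!/usr/bin/env bash
# Sketch of the RPC sequence from the trace above, assuming a running nvmf_tgt
# listening on the default /var/tmp/spdk.sock. RPC is prefixed with 'echo' so
# this prints the commands instead of executing them; drop the 'echo' to run
# them for real against a live target.
RPC="echo scripts/rpc.py"

$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```

The flags mirror the trace verbatim (e.g. `-a` for allow-any-host, `-s` for the serial number); in the test itself the same calls go through the `rpc_cmd` wrapper.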
12:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:37:15.852 12:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:15.852 12:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:15.852 { 00:37:15.852 "params": { 00:37:15.852 "name": "Nvme$subsystem", 00:37:15.852 "trtype": "$TEST_TRANSPORT", 00:37:15.852 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:15.852 "adrfam": "ipv4", 00:37:15.852 "trsvcid": "$NVMF_PORT", 00:37:15.852 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:15.852 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:15.852 "hdgst": ${hdgst:-false}, 00:37:15.852 "ddgst": ${ddgst:-false} 00:37:15.852 }, 00:37:15.852 "method": "bdev_nvme_attach_controller" 00:37:15.852 } 00:37:15.852 EOF 00:37:15.852 )") 00:37:15.852 12:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:37:15.852 12:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:37:15.852 12:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:37:15.852 12:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:15.852 "params": { 00:37:15.852 "name": "Nvme1", 00:37:15.852 "trtype": "tcp", 00:37:15.852 "traddr": "10.0.0.2", 00:37:15.852 "adrfam": "ipv4", 00:37:15.852 "trsvcid": "4420", 00:37:15.852 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:15.852 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:15.852 "hdgst": false, 00:37:15.852 "ddgst": false 00:37:15.852 }, 00:37:15.852 "method": "bdev_nvme_attach_controller" 00:37:15.852 }' 00:37:15.852 [2024-11-18 12:05:41.594443] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:37:15.852 [2024-11-18 12:05:41.594610] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3137181 ] 00:37:15.852 [2024-11-18 12:05:41.731393] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:16.110 [2024-11-18 12:05:41.860371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:16.682 Running I/O for 1 seconds... 00:37:17.698 6075.00 IOPS, 23.73 MiB/s 00:37:17.698 Latency(us) 00:37:17.698 [2024-11-18T11:05:43.583Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:17.698 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:37:17.698 Verification LBA range: start 0x0 length 0x4000 00:37:17.698 Nvme1n1 : 1.01 6124.00 23.92 0.00 0.00 20775.62 3859.34 17767.54 00:37:17.698 [2024-11-18T11:05:43.583Z] =================================================================================================================== 00:37:17.698 [2024-11-18T11:05:43.583Z] Total : 6124.00 23.92 0.00 0.00 20775.62 3859.34 17767.54 00:37:18.633 12:05:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3137454 00:37:18.633 12:05:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:37:18.633 12:05:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:37:18.633 12:05:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:37:18.633 12:05:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:37:18.633 12:05:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:37:18.633 12:05:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for 
subsystem in "${@:-1}" 00:37:18.633 12:05:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:18.633 { 00:37:18.633 "params": { 00:37:18.633 "name": "Nvme$subsystem", 00:37:18.633 "trtype": "$TEST_TRANSPORT", 00:37:18.633 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:18.633 "adrfam": "ipv4", 00:37:18.633 "trsvcid": "$NVMF_PORT", 00:37:18.633 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:18.633 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:18.633 "hdgst": ${hdgst:-false}, 00:37:18.633 "ddgst": ${ddgst:-false} 00:37:18.633 }, 00:37:18.633 "method": "bdev_nvme_attach_controller" 00:37:18.633 } 00:37:18.633 EOF 00:37:18.633 )") 00:37:18.633 12:05:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:37:18.633 12:05:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:37:18.633 12:05:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:37:18.633 12:05:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:18.633 "params": { 00:37:18.633 "name": "Nvme1", 00:37:18.633 "trtype": "tcp", 00:37:18.633 "traddr": "10.0.0.2", 00:37:18.633 "adrfam": "ipv4", 00:37:18.633 "trsvcid": "4420", 00:37:18.633 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:18.633 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:18.633 "hdgst": false, 00:37:18.633 "ddgst": false 00:37:18.633 }, 00:37:18.633 "method": "bdev_nvme_attach_controller" 00:37:18.633 }' 00:37:18.633 [2024-11-18 12:05:44.347751] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:37:18.633 [2024-11-18 12:05:44.347905] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3137454 ] 00:37:18.633 [2024-11-18 12:05:44.484962] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:18.891 [2024-11-18 12:05:44.612670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:19.458 Running I/O for 15 seconds... 00:37:21.326 6144.00 IOPS, 24.00 MiB/s [2024-11-18T11:05:47.472Z] 6179.50 IOPS, 24.14 MiB/s [2024-11-18T11:05:47.472Z] 12:05:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3137020 00:37:21.587 12:05:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:37:21.587 [2024-11-18 12:05:47.289017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:108096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.587 [2024-11-18 12:05:47.289085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:37:21.587 [... identical nvme_qpair READ command/completion pairs (*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1) repeated for lba 108104 through 108704, elided ...]
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.589 [2024-11-18 12:05:47.293340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:108712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.589 [2024-11-18 12:05:47.293365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.589 [2024-11-18 12:05:47.293391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:108720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.589 [2024-11-18 12:05:47.293416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.589 [2024-11-18 12:05:47.293443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:108728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.589 [2024-11-18 12:05:47.293468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.589 [2024-11-18 12:05:47.293519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:108736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.589 [2024-11-18 12:05:47.293562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.589 [2024-11-18 12:05:47.293587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:108744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.589 [2024-11-18 12:05:47.293609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.589 [2024-11-18 12:05:47.293634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:108752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.589 [2024-11-18 
12:05:47.293656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.589 [2024-11-18 12:05:47.293680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:108760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.589 [2024-11-18 12:05:47.293702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.589 [2024-11-18 12:05:47.293726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:108768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.589 [2024-11-18 12:05:47.293748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.589 [2024-11-18 12:05:47.293799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:108776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.589 [2024-11-18 12:05:47.293825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.589 [2024-11-18 12:05:47.293852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:108784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.589 [2024-11-18 12:05:47.293876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.589 [2024-11-18 12:05:47.293903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:108792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.589 [2024-11-18 12:05:47.293927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.589 [2024-11-18 12:05:47.293954] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:56 nsid:1 lba:108800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.589 [2024-11-18 12:05:47.293979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.589 [2024-11-18 12:05:47.294006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:108808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.589 [2024-11-18 12:05:47.294031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.589 [2024-11-18 12:05:47.294057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:108816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.589 [2024-11-18 12:05:47.294082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.589 [2024-11-18 12:05:47.294109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:108824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.589 [2024-11-18 12:05:47.294134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.589 [2024-11-18 12:05:47.294161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:108832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.589 [2024-11-18 12:05:47.294190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.589 [2024-11-18 12:05:47.294218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:108840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.589 [2024-11-18 12:05:47.294244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.589 [2024-11-18 12:05:47.294271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:108848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.589 [2024-11-18 12:05:47.294296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.589 [2024-11-18 12:05:47.294322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:108856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.589 [2024-11-18 12:05:47.294347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.589 [2024-11-18 12:05:47.294374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:108864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.589 [2024-11-18 12:05:47.294399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.589 [2024-11-18 12:05:47.294426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:108872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.589 [2024-11-18 12:05:47.294451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.589 [2024-11-18 12:05:47.294486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:108880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.589 [2024-11-18 12:05:47.294521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.590 [2024-11-18 12:05:47.294562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:108888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.590 [2024-11-18 
12:05:47.294585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.590 [2024-11-18 12:05:47.294609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:108896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.590 [2024-11-18 12:05:47.294632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.590 [2024-11-18 12:05:47.294656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:108904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.590 [2024-11-18 12:05:47.294679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.590 [2024-11-18 12:05:47.294703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:108912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.590 [2024-11-18 12:05:47.294726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.590 [2024-11-18 12:05:47.294749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:108920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.590 [2024-11-18 12:05:47.294787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.590 [2024-11-18 12:05:47.294816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:108928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.590 [2024-11-18 12:05:47.294841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.590 [2024-11-18 12:05:47.294877] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:52 nsid:1 lba:108936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.590 [2024-11-18 12:05:47.294903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.590 [2024-11-18 12:05:47.294930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:108944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.590 [2024-11-18 12:05:47.294955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.590 [2024-11-18 12:05:47.294982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:108952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.590 [2024-11-18 12:05:47.295006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.590 [2024-11-18 12:05:47.295032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:108960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.590 [2024-11-18 12:05:47.295056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.590 [2024-11-18 12:05:47.295083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:108968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.590 [2024-11-18 12:05:47.295107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.590 [2024-11-18 12:05:47.295134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:108976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.590 [2024-11-18 12:05:47.295158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.590 [2024-11-18 12:05:47.295184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:108984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.590 [2024-11-18 12:05:47.295208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.590 [2024-11-18 12:05:47.295235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:108992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.590 [2024-11-18 12:05:47.295260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.590 [2024-11-18 12:05:47.295286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:109000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.590 [2024-11-18 12:05:47.295311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.590 [2024-11-18 12:05:47.295337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:109008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.590 [2024-11-18 12:05:47.295362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.590 [2024-11-18 12:05:47.295389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:109016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.590 [2024-11-18 12:05:47.295413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.590 [2024-11-18 12:05:47.295445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:109024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.590 [2024-11-18 
12:05:47.295469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.590 [2024-11-18 12:05:47.295518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:109032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.590 [2024-11-18 12:05:47.295572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.590 [2024-11-18 12:05:47.295598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:109040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.590 [2024-11-18 12:05:47.295621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.590 [2024-11-18 12:05:47.295645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:109048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.590 [2024-11-18 12:05:47.295668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.590 [2024-11-18 12:05:47.295692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:109056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.590 [2024-11-18 12:05:47.295715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.590 [2024-11-18 12:05:47.295739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:109064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.590 [2024-11-18 12:05:47.295761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.590 [2024-11-18 12:05:47.295803] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:86 nsid:1 lba:109072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.590 [2024-11-18 12:05:47.295829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.590 [2024-11-18 12:05:47.295855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:109080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.590 [2024-11-18 12:05:47.295891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.590 [2024-11-18 12:05:47.295918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:109088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.590 [2024-11-18 12:05:47.295942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.590 [2024-11-18 12:05:47.295969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:109096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.590 [2024-11-18 12:05:47.295995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.590 [2024-11-18 12:05:47.296022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:109104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.590 [2024-11-18 12:05:47.296046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.590 [2024-11-18 12:05:47.296070] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2f00 is same with the state(6) to be set 00:37:21.590 [2024-11-18 12:05:47.296108] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:21.590 [2024-11-18 
12:05:47.296130] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:37:21.590 [2024-11-18 12:05:47.296152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:109112 len:8 PRP1 0x0 PRP2 0x0
00:37:21.590 [2024-11-18 12:05:47.296176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:37:21.590 [2024-11-18 12:05:47.300839] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:21.590 [2024-11-18 12:05:47.300976] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:21.590 [2024-11-18 12:05:47.301791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.590 [2024-11-18 12:05:47.301846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:21.590 [2024-11-18 12:05:47.301875] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:21.590 [2024-11-18 12:05:47.302176] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:21.590 [2024-11-18 12:05:47.302473] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:21.590 [2024-11-18 12:05:47.302527] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:21.590 [2024-11-18 12:05:47.302570] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:21.590 [2024-11-18 12:05:47.302593] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:21.590 [2024-11-18 12:05:47.315898] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:21.590 [2024-11-18 12:05:47.316378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.590 [2024-11-18 12:05:47.316438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:21.591 [2024-11-18 12:05:47.316476] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:21.591 [2024-11-18 12:05:47.316804] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:21.591 [2024-11-18 12:05:47.317106] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:21.591 [2024-11-18 12:05:47.317161] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:21.591 [2024-11-18 12:05:47.317183] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:21.591 [2024-11-18 12:05:47.317213] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:21.591 [2024-11-18 12:05:47.330523] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:21.591 [2024-11-18 12:05:47.331056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.591 [2024-11-18 12:05:47.331092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:21.591 [2024-11-18 12:05:47.331116] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:21.591 [2024-11-18 12:05:47.331416] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:21.591 [2024-11-18 12:05:47.331721] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:21.591 [2024-11-18 12:05:47.331753] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:21.591 [2024-11-18 12:05:47.331783] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:21.591 [2024-11-18 12:05:47.331805] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:21.591 [2024-11-18 12:05:47.344969] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:21.591 [2024-11-18 12:05:47.345423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.591 [2024-11-18 12:05:47.345470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:21.591 [2024-11-18 12:05:47.345517] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:21.591 [2024-11-18 12:05:47.345809] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:21.591 [2024-11-18 12:05:47.346102] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:21.591 [2024-11-18 12:05:47.346133] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:21.591 [2024-11-18 12:05:47.346163] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:21.591 [2024-11-18 12:05:47.346185] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:21.591 [2024-11-18 12:05:47.359589] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:21.591 [2024-11-18 12:05:47.360092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.591 [2024-11-18 12:05:47.360128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:21.591 [2024-11-18 12:05:47.360151] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:21.591 [2024-11-18 12:05:47.360444] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:21.591 [2024-11-18 12:05:47.360746] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:21.591 [2024-11-18 12:05:47.360779] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:21.591 [2024-11-18 12:05:47.360803] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:21.591 [2024-11-18 12:05:47.360825] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:21.591 [2024-11-18 12:05:47.374161] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:21.591 [2024-11-18 12:05:47.374647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.591 [2024-11-18 12:05:47.374689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:21.591 [2024-11-18 12:05:47.374716] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:21.591 [2024-11-18 12:05:47.375002] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:21.591 [2024-11-18 12:05:47.375291] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:21.591 [2024-11-18 12:05:47.375322] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:21.591 [2024-11-18 12:05:47.375344] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:21.591 [2024-11-18 12:05:47.375367] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:21.591 [2024-11-18 12:05:47.388799] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:21.591 [2024-11-18 12:05:47.389267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.591 [2024-11-18 12:05:47.389309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:21.591 [2024-11-18 12:05:47.389335] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:21.591 [2024-11-18 12:05:47.389636] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:21.591 [2024-11-18 12:05:47.389932] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:21.591 [2024-11-18 12:05:47.389963] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:21.591 [2024-11-18 12:05:47.389986] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:21.591 [2024-11-18 12:05:47.390008] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:21.591 [2024-11-18 12:05:47.403357] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:21.591 [2024-11-18 12:05:47.403862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.591 [2024-11-18 12:05:47.403913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:21.591 [2024-11-18 12:05:47.403939] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:21.591 [2024-11-18 12:05:47.404225] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:21.591 [2024-11-18 12:05:47.404527] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:21.591 [2024-11-18 12:05:47.404559] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:21.591 [2024-11-18 12:05:47.404581] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:21.591 [2024-11-18 12:05:47.404603] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:21.591 [2024-11-18 12:05:47.417923] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:21.591 [2024-11-18 12:05:47.418404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.591 [2024-11-18 12:05:47.418446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:21.591 [2024-11-18 12:05:47.418473] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:21.591 [2024-11-18 12:05:47.418770] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:21.591 [2024-11-18 12:05:47.419059] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:21.591 [2024-11-18 12:05:47.419091] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:21.591 [2024-11-18 12:05:47.419114] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:21.591 [2024-11-18 12:05:47.419136] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:21.591 [2024-11-18 12:05:47.432469] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.592 [2024-11-18 12:05:47.432962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.592 [2024-11-18 12:05:47.433004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.592 [2024-11-18 12:05:47.433030] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.592 [2024-11-18 12:05:47.433314] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.592 [2024-11-18 12:05:47.433615] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.592 [2024-11-18 12:05:47.433647] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.592 [2024-11-18 12:05:47.433679] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.592 [2024-11-18 12:05:47.433702] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:21.592 [2024-11-18 12:05:47.447056] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.592 [2024-11-18 12:05:47.447528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.592 [2024-11-18 12:05:47.447571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.592 [2024-11-18 12:05:47.447598] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.592 [2024-11-18 12:05:47.447883] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.592 [2024-11-18 12:05:47.448170] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.592 [2024-11-18 12:05:47.448202] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.592 [2024-11-18 12:05:47.448225] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.592 [2024-11-18 12:05:47.448248] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:21.592 [2024-11-18 12:05:47.461568] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.592 [2024-11-18 12:05:47.462055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.592 [2024-11-18 12:05:47.462097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.592 [2024-11-18 12:05:47.462124] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.592 [2024-11-18 12:05:47.462410] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.592 [2024-11-18 12:05:47.462711] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.592 [2024-11-18 12:05:47.462743] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.592 [2024-11-18 12:05:47.462766] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.592 [2024-11-18 12:05:47.462796] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:21.851 [2024-11-18 12:05:47.476151] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.851 [2024-11-18 12:05:47.476620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.851 [2024-11-18 12:05:47.476672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.851 [2024-11-18 12:05:47.476699] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.851 [2024-11-18 12:05:47.476985] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.851 [2024-11-18 12:05:47.477271] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.851 [2024-11-18 12:05:47.477303] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.851 [2024-11-18 12:05:47.477325] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.851 [2024-11-18 12:05:47.477347] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:21.851 [2024-11-18 12:05:47.490670] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.851 [2024-11-18 12:05:47.491108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.851 [2024-11-18 12:05:47.491150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.851 [2024-11-18 12:05:47.491177] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.851 [2024-11-18 12:05:47.491462] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.851 [2024-11-18 12:05:47.491759] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.851 [2024-11-18 12:05:47.491802] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.851 [2024-11-18 12:05:47.491827] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.851 [2024-11-18 12:05:47.491859] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:21.851 [2024-11-18 12:05:47.505196] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.851 [2024-11-18 12:05:47.505666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.851 [2024-11-18 12:05:47.505708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.851 [2024-11-18 12:05:47.505735] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.851 [2024-11-18 12:05:47.506021] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.852 [2024-11-18 12:05:47.506309] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.852 [2024-11-18 12:05:47.506340] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.852 [2024-11-18 12:05:47.506364] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.852 [2024-11-18 12:05:47.506386] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:21.852 [2024-11-18 12:05:47.519731] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.852 [2024-11-18 12:05:47.520303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.852 [2024-11-18 12:05:47.520374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.852 [2024-11-18 12:05:47.520400] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.852 [2024-11-18 12:05:47.520698] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.852 [2024-11-18 12:05:47.520987] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.852 [2024-11-18 12:05:47.521018] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.852 [2024-11-18 12:05:47.521042] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.852 [2024-11-18 12:05:47.521079] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:21.852 [2024-11-18 12:05:47.534197] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.852 [2024-11-18 12:05:47.534686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.852 [2024-11-18 12:05:47.534733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.852 [2024-11-18 12:05:47.534760] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.852 [2024-11-18 12:05:47.535045] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.852 [2024-11-18 12:05:47.535333] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.852 [2024-11-18 12:05:47.535364] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.852 [2024-11-18 12:05:47.535387] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.852 [2024-11-18 12:05:47.535409] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:21.852 [2024-11-18 12:05:47.548707] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.852 [2024-11-18 12:05:47.549178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.852 [2024-11-18 12:05:47.549221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.852 [2024-11-18 12:05:47.549248] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.852 [2024-11-18 12:05:47.549548] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.852 [2024-11-18 12:05:47.549837] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.852 [2024-11-18 12:05:47.549868] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.852 [2024-11-18 12:05:47.549891] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.852 [2024-11-18 12:05:47.549914] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:21.852 [2024-11-18 12:05:47.563273] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.852 [2024-11-18 12:05:47.563744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.852 [2024-11-18 12:05:47.563786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.852 [2024-11-18 12:05:47.563813] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.852 [2024-11-18 12:05:47.564099] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.852 [2024-11-18 12:05:47.564387] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.852 [2024-11-18 12:05:47.564419] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.852 [2024-11-18 12:05:47.564442] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.852 [2024-11-18 12:05:47.564463] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:21.852 [2024-11-18 12:05:47.577855] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.852 [2024-11-18 12:05:47.578297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.852 [2024-11-18 12:05:47.578345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.852 [2024-11-18 12:05:47.578372] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.852 [2024-11-18 12:05:47.578675] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.852 [2024-11-18 12:05:47.578962] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.852 [2024-11-18 12:05:47.578994] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.852 [2024-11-18 12:05:47.579017] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.852 [2024-11-18 12:05:47.579039] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:21.852 [2024-11-18 12:05:47.592347] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.852 [2024-11-18 12:05:47.592865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.852 [2024-11-18 12:05:47.592908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.852 [2024-11-18 12:05:47.592934] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.852 [2024-11-18 12:05:47.593219] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.852 [2024-11-18 12:05:47.593516] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.852 [2024-11-18 12:05:47.593548] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.852 [2024-11-18 12:05:47.593572] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.852 [2024-11-18 12:05:47.593595] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:21.852 [2024-11-18 12:05:47.606912] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.852 [2024-11-18 12:05:47.607357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.852 [2024-11-18 12:05:47.607408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.852 [2024-11-18 12:05:47.607434] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.852 [2024-11-18 12:05:47.607731] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.852 [2024-11-18 12:05:47.608018] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.852 [2024-11-18 12:05:47.608050] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.852 [2024-11-18 12:05:47.608073] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.852 [2024-11-18 12:05:47.608096] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:21.852 [2024-11-18 12:05:47.621411] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.852 [2024-11-18 12:05:47.621890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.852 [2024-11-18 12:05:47.621932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.852 [2024-11-18 12:05:47.621958] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.852 [2024-11-18 12:05:47.622243] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.852 [2024-11-18 12:05:47.622544] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.852 [2024-11-18 12:05:47.622593] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.852 [2024-11-18 12:05:47.622617] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.852 [2024-11-18 12:05:47.622639] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:21.852 [2024-11-18 12:05:47.635950] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.852 [2024-11-18 12:05:47.636424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.852 [2024-11-18 12:05:47.636471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.852 [2024-11-18 12:05:47.636510] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.852 [2024-11-18 12:05:47.636797] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.852 [2024-11-18 12:05:47.637085] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.852 [2024-11-18 12:05:47.637117] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.852 [2024-11-18 12:05:47.637140] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.852 [2024-11-18 12:05:47.637162] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:21.852 [2024-11-18 12:05:47.650484] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.852 [2024-11-18 12:05:47.650953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.852 [2024-11-18 12:05:47.650994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.853 [2024-11-18 12:05:47.651020] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.853 [2024-11-18 12:05:47.651305] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.853 [2024-11-18 12:05:47.651605] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.853 [2024-11-18 12:05:47.651637] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.853 [2024-11-18 12:05:47.651660] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.853 [2024-11-18 12:05:47.651682] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:21.853 [2024-11-18 12:05:47.664973] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.853 [2024-11-18 12:05:47.665441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.853 [2024-11-18 12:05:47.665482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.853 [2024-11-18 12:05:47.665521] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.853 [2024-11-18 12:05:47.665808] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.853 [2024-11-18 12:05:47.666095] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.853 [2024-11-18 12:05:47.666127] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.853 [2024-11-18 12:05:47.666149] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.853 [2024-11-18 12:05:47.666177] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:21.853 [2024-11-18 12:05:47.679524] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.853 [2024-11-18 12:05:47.680079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.853 [2024-11-18 12:05:47.680139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.853 [2024-11-18 12:05:47.680166] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.853 [2024-11-18 12:05:47.680452] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.853 [2024-11-18 12:05:47.680751] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.853 [2024-11-18 12:05:47.680784] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.853 [2024-11-18 12:05:47.680813] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.853 [2024-11-18 12:05:47.680834] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:21.853 [2024-11-18 12:05:47.694187] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.853 [2024-11-18 12:05:47.694631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.853 [2024-11-18 12:05:47.694681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.853 [2024-11-18 12:05:47.694708] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.853 [2024-11-18 12:05:47.694993] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.853 [2024-11-18 12:05:47.695280] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.853 [2024-11-18 12:05:47.695312] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.853 [2024-11-18 12:05:47.695335] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.853 [2024-11-18 12:05:47.695357] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:21.853 [2024-11-18 12:05:47.708674] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.853 [2024-11-18 12:05:47.709216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.853 [2024-11-18 12:05:47.709279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.853 [2024-11-18 12:05:47.709306] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.853 [2024-11-18 12:05:47.709604] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.853 [2024-11-18 12:05:47.709892] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.853 [2024-11-18 12:05:47.709924] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.853 [2024-11-18 12:05:47.709948] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.853 [2024-11-18 12:05:47.709969] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:21.853 [2024-11-18 12:05:47.723269] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.853 [2024-11-18 12:05:47.723783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.853 [2024-11-18 12:05:47.723824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.853 [2024-11-18 12:05:47.723851] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.853 [2024-11-18 12:05:47.724137] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.853 [2024-11-18 12:05:47.724425] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.853 [2024-11-18 12:05:47.724456] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.853 [2024-11-18 12:05:47.724479] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.853 [2024-11-18 12:05:47.724513] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.112 [2024-11-18 12:05:47.737818] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.112 [2024-11-18 12:05:47.738280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.112 [2024-11-18 12:05:47.738332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.112 [2024-11-18 12:05:47.738358] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.112 [2024-11-18 12:05:47.738657] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.112 [2024-11-18 12:05:47.738944] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.112 [2024-11-18 12:05:47.738976] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.112 [2024-11-18 12:05:47.738999] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.112 [2024-11-18 12:05:47.739021] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.112 [2024-11-18 12:05:47.752317] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.112 [2024-11-18 12:05:47.752806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.112 [2024-11-18 12:05:47.752854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.112 [2024-11-18 12:05:47.752880] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.112 [2024-11-18 12:05:47.753165] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.112 [2024-11-18 12:05:47.753452] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.112 [2024-11-18 12:05:47.753483] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.112 [2024-11-18 12:05:47.753520] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.112 [2024-11-18 12:05:47.753543] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.112 [2024-11-18 12:05:47.766865] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.112 [2024-11-18 12:05:47.767335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.112 [2024-11-18 12:05:47.767386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.112 [2024-11-18 12:05:47.767419] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.112 [2024-11-18 12:05:47.767718] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.112 [2024-11-18 12:05:47.768005] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.112 [2024-11-18 12:05:47.768037] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.112 [2024-11-18 12:05:47.768060] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.112 [2024-11-18 12:05:47.768082] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.112 [2024-11-18 12:05:47.781415] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.112 [2024-11-18 12:05:47.781906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.112 [2024-11-18 12:05:47.781959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.112 [2024-11-18 12:05:47.781985] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.112 [2024-11-18 12:05:47.782270] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.112 [2024-11-18 12:05:47.782572] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.112 [2024-11-18 12:05:47.782604] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.112 [2024-11-18 12:05:47.782627] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.112 [2024-11-18 12:05:47.782649] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.112 [2024-11-18 12:05:47.795953] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.112 [2024-11-18 12:05:47.796403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.112 [2024-11-18 12:05:47.796453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.113 [2024-11-18 12:05:47.796480] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.113 [2024-11-18 12:05:47.796778] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.113 [2024-11-18 12:05:47.797066] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.113 [2024-11-18 12:05:47.797097] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.113 [2024-11-18 12:05:47.797120] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.113 [2024-11-18 12:05:47.797142] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.113 [2024-11-18 12:05:47.810456] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.113 [2024-11-18 12:05:47.810912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.113 [2024-11-18 12:05:47.810955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.113 [2024-11-18 12:05:47.810981] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.113 [2024-11-18 12:05:47.811276] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.113 [2024-11-18 12:05:47.811576] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.113 [2024-11-18 12:05:47.811608] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.113 [2024-11-18 12:05:47.811631] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.113 [2024-11-18 12:05:47.811653] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.113 [2024-11-18 12:05:47.825134] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.113 [2024-11-18 12:05:47.825612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.113 [2024-11-18 12:05:47.825655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.113 [2024-11-18 12:05:47.825683] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.113 [2024-11-18 12:05:47.825969] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.113 [2024-11-18 12:05:47.826257] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.113 [2024-11-18 12:05:47.826288] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.113 [2024-11-18 12:05:47.826311] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.113 [2024-11-18 12:05:47.826334] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.113 [2024-11-18 12:05:47.839650] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.113 [2024-11-18 12:05:47.840108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.113 [2024-11-18 12:05:47.840149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.113 [2024-11-18 12:05:47.840175] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.113 [2024-11-18 12:05:47.840460] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.113 [2024-11-18 12:05:47.840758] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.113 [2024-11-18 12:05:47.840791] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.113 [2024-11-18 12:05:47.840814] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.113 [2024-11-18 12:05:47.840836] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.113 [2024-11-18 12:05:47.854123] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.113 [2024-11-18 12:05:47.854589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.113 [2024-11-18 12:05:47.854631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.113 [2024-11-18 12:05:47.854657] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.113 [2024-11-18 12:05:47.854944] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.113 [2024-11-18 12:05:47.855230] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.113 [2024-11-18 12:05:47.855268] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.113 [2024-11-18 12:05:47.855301] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.113 [2024-11-18 12:05:47.855323] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.113 [2024-11-18 12:05:47.868632] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.113 [2024-11-18 12:05:47.869106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.113 [2024-11-18 12:05:47.869156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.113 [2024-11-18 12:05:47.869183] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.113 [2024-11-18 12:05:47.869469] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.113 [2024-11-18 12:05:47.869774] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.113 [2024-11-18 12:05:47.869811] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.113 [2024-11-18 12:05:47.869834] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.113 [2024-11-18 12:05:47.869857] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.113 [2024-11-18 12:05:47.883256] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.113 [2024-11-18 12:05:47.883721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.113 [2024-11-18 12:05:47.883763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.113 [2024-11-18 12:05:47.883790] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.113 [2024-11-18 12:05:47.884081] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.113 [2024-11-18 12:05:47.884369] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.113 [2024-11-18 12:05:47.884400] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.113 [2024-11-18 12:05:47.884422] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.113 [2024-11-18 12:05:47.884444] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.113 [2024-11-18 12:05:47.897796] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.113 [2024-11-18 12:05:47.898278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.113 [2024-11-18 12:05:47.898330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.113 [2024-11-18 12:05:47.898357] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.113 [2024-11-18 12:05:47.898664] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.113 [2024-11-18 12:05:47.898953] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.113 [2024-11-18 12:05:47.898984] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.113 [2024-11-18 12:05:47.899007] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.113 [2024-11-18 12:05:47.899035] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.113 [2024-11-18 12:05:47.912338] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.113 [2024-11-18 12:05:47.912857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.113 [2024-11-18 12:05:47.912906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.113 [2024-11-18 12:05:47.912932] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.113 [2024-11-18 12:05:47.913225] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.113 [2024-11-18 12:05:47.913530] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.113 [2024-11-18 12:05:47.913562] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.113 [2024-11-18 12:05:47.913584] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.113 [2024-11-18 12:05:47.913606] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.113 [2024-11-18 12:05:47.926933] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.113 [2024-11-18 12:05:47.927378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.113 [2024-11-18 12:05:47.927419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.113 [2024-11-18 12:05:47.927497] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.113 [2024-11-18 12:05:47.927796] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.113 [2024-11-18 12:05:47.928083] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.113 [2024-11-18 12:05:47.928128] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.113 [2024-11-18 12:05:47.928151] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.113 [2024-11-18 12:05:47.928174] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.114 [2024-11-18 12:05:47.941522] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.114 [2024-11-18 12:05:47.942024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.114 [2024-11-18 12:05:47.942075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.114 [2024-11-18 12:05:47.942115] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.114 [2024-11-18 12:05:47.942401] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.114 [2024-11-18 12:05:47.942720] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.114 [2024-11-18 12:05:47.942752] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.114 [2024-11-18 12:05:47.942775] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.114 [2024-11-18 12:05:47.942796] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.114 [2024-11-18 12:05:47.956101] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.114 [2024-11-18 12:05:47.956535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.114 [2024-11-18 12:05:47.956577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.114 [2024-11-18 12:05:47.956604] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.114 [2024-11-18 12:05:47.956890] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.114 [2024-11-18 12:05:47.957177] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.114 [2024-11-18 12:05:47.957208] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.114 [2024-11-18 12:05:47.957231] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.114 [2024-11-18 12:05:47.957253] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.114 [2024-11-18 12:05:47.970617] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.114 [2024-11-18 12:05:47.971120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.114 [2024-11-18 12:05:47.971168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.114 [2024-11-18 12:05:47.971195] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.114 [2024-11-18 12:05:47.971480] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.114 [2024-11-18 12:05:47.971787] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.114 [2024-11-18 12:05:47.971819] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.114 [2024-11-18 12:05:47.971842] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.114 [2024-11-18 12:05:47.971864] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.114 [2024-11-18 12:05:47.985235] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.114 [2024-11-18 12:05:47.985704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.114 [2024-11-18 12:05:47.985754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.114 [2024-11-18 12:05:47.985780] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.114 [2024-11-18 12:05:47.986064] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.114 [2024-11-18 12:05:47.986352] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.114 [2024-11-18 12:05:47.986383] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.114 [2024-11-18 12:05:47.986407] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.114 [2024-11-18 12:05:47.986429] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.373 [2024-11-18 12:05:47.999771] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.373 [2024-11-18 12:05:48.000246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.373 [2024-11-18 12:05:48.000297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.373 [2024-11-18 12:05:48.000329] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.373 [2024-11-18 12:05:48.000630] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.373 [2024-11-18 12:05:48.000918] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.373 [2024-11-18 12:05:48.000950] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.373 [2024-11-18 12:05:48.000973] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.373 [2024-11-18 12:05:48.000996] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.373 [2024-11-18 12:05:48.014347] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.373 [2024-11-18 12:05:48.014833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.373 [2024-11-18 12:05:48.014884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.373 [2024-11-18 12:05:48.014911] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.373 [2024-11-18 12:05:48.015209] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.373 [2024-11-18 12:05:48.015508] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.373 [2024-11-18 12:05:48.015539] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.373 [2024-11-18 12:05:48.015563] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.373 [2024-11-18 12:05:48.015585] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.373 [2024-11-18 12:05:48.028909] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.373 [2024-11-18 12:05:48.029366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.373 [2024-11-18 12:05:48.029432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.373 [2024-11-18 12:05:48.029459] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.373 [2024-11-18 12:05:48.029759] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.373 [2024-11-18 12:05:48.030059] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.373 [2024-11-18 12:05:48.030091] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.373 [2024-11-18 12:05:48.030113] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.373 [2024-11-18 12:05:48.030136] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.373 [2024-11-18 12:05:48.043457] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.373 [2024-11-18 12:05:48.043924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.373 [2024-11-18 12:05:48.043975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.373 [2024-11-18 12:05:48.044002] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.373 [2024-11-18 12:05:48.044288] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.373 [2024-11-18 12:05:48.044596] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.373 [2024-11-18 12:05:48.044629] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.373 [2024-11-18 12:05:48.044652] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.373 [2024-11-18 12:05:48.044673] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.373 [2024-11-18 12:05:48.057968] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.373 [2024-11-18 12:05:48.058439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.373 [2024-11-18 12:05:48.058502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.373 [2024-11-18 12:05:48.058532] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.373 [2024-11-18 12:05:48.058817] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.373 [2024-11-18 12:05:48.059104] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.373 [2024-11-18 12:05:48.059135] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.373 [2024-11-18 12:05:48.059159] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.373 [2024-11-18 12:05:48.059181] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.373 [2024-11-18 12:05:48.072507] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.373 [2024-11-18 12:05:48.072956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.373 [2024-11-18 12:05:48.073006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.373 [2024-11-18 12:05:48.073034] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.373 [2024-11-18 12:05:48.073319] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.373 [2024-11-18 12:05:48.073619] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.373 [2024-11-18 12:05:48.073651] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.373 [2024-11-18 12:05:48.073674] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.373 [2024-11-18 12:05:48.073696] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.373 4504.00 IOPS, 17.59 MiB/s [2024-11-18T11:05:48.258Z] [2024-11-18 12:05:48.086936] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.373 [2024-11-18 12:05:48.087395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.373 [2024-11-18 12:05:48.087446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.373 [2024-11-18 12:05:48.087473] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.373 [2024-11-18 12:05:48.087778] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.374 [2024-11-18 12:05:48.088071] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.374 [2024-11-18 12:05:48.088103] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.374 [2024-11-18 12:05:48.088135] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.374 [2024-11-18 12:05:48.088158] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.374 [2024-11-18 12:05:48.101531] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.374 [2024-11-18 12:05:48.102017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.374 [2024-11-18 12:05:48.102069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.374 [2024-11-18 12:05:48.102096] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.374 [2024-11-18 12:05:48.102382] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.374 [2024-11-18 12:05:48.102682] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.374 [2024-11-18 12:05:48.102715] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.374 [2024-11-18 12:05:48.102743] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.374 [2024-11-18 12:05:48.102765] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.374 [2024-11-18 12:05:48.116073] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.374 [2024-11-18 12:05:48.116541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.374 [2024-11-18 12:05:48.116583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.374 [2024-11-18 12:05:48.116609] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.374 [2024-11-18 12:05:48.116895] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.374 [2024-11-18 12:05:48.117182] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.374 [2024-11-18 12:05:48.117215] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.374 [2024-11-18 12:05:48.117238] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.374 [2024-11-18 12:05:48.117261] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.374 [2024-11-18 12:05:48.130603] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.374 [2024-11-18 12:05:48.131065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.374 [2024-11-18 12:05:48.131107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.374 [2024-11-18 12:05:48.131134] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.374 [2024-11-18 12:05:48.131419] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.374 [2024-11-18 12:05:48.131717] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.374 [2024-11-18 12:05:48.131749] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.374 [2024-11-18 12:05:48.131773] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.374 [2024-11-18 12:05:48.131801] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.374 [2024-11-18 12:05:48.145184] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.374 [2024-11-18 12:05:48.145653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.374 [2024-11-18 12:05:48.145695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.374 [2024-11-18 12:05:48.145722] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.374 [2024-11-18 12:05:48.146010] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.374 [2024-11-18 12:05:48.146315] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.374 [2024-11-18 12:05:48.146348] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.374 [2024-11-18 12:05:48.146372] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.374 [2024-11-18 12:05:48.146395] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.374 [2024-11-18 12:05:48.159765] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.374 [2024-11-18 12:05:48.160284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.374 [2024-11-18 12:05:48.160326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.374 [2024-11-18 12:05:48.160353] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.374 [2024-11-18 12:05:48.160650] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.374 [2024-11-18 12:05:48.160939] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.374 [2024-11-18 12:05:48.160972] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.374 [2024-11-18 12:05:48.160995] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.374 [2024-11-18 12:05:48.161018] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.374 [2024-11-18 12:05:48.174362] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.374 [2024-11-18 12:05:48.174920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.374 [2024-11-18 12:05:48.174962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.374 [2024-11-18 12:05:48.174989] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.374 [2024-11-18 12:05:48.175275] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.374 [2024-11-18 12:05:48.175580] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.374 [2024-11-18 12:05:48.175613] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.374 [2024-11-18 12:05:48.175636] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.374 [2024-11-18 12:05:48.175659] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.374 [2024-11-18 12:05:48.188996] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.374 [2024-11-18 12:05:48.189454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.374 [2024-11-18 12:05:48.189506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.374 [2024-11-18 12:05:48.189536] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.374 [2024-11-18 12:05:48.189822] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.374 [2024-11-18 12:05:48.190110] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.374 [2024-11-18 12:05:48.190144] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.374 [2024-11-18 12:05:48.190167] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.374 [2024-11-18 12:05:48.190190] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.374 [2024-11-18 12:05:48.203536] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.374 [2024-11-18 12:05:48.203988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.374 [2024-11-18 12:05:48.204030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.374 [2024-11-18 12:05:48.204056] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.374 [2024-11-18 12:05:48.204343] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.374 [2024-11-18 12:05:48.204644] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.374 [2024-11-18 12:05:48.204676] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.374 [2024-11-18 12:05:48.204699] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.374 [2024-11-18 12:05:48.204721] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.374 [2024-11-18 12:05:48.218086] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.374 [2024-11-18 12:05:48.218581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.374 [2024-11-18 12:05:48.218624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.374 [2024-11-18 12:05:48.218652] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.374 [2024-11-18 12:05:48.218939] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.374 [2024-11-18 12:05:48.219228] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.374 [2024-11-18 12:05:48.219261] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.374 [2024-11-18 12:05:48.219283] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.374 [2024-11-18 12:05:48.219305] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.374 [2024-11-18 12:05:48.232650] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.374 [2024-11-18 12:05:48.233103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.374 [2024-11-18 12:05:48.233145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.375 [2024-11-18 12:05:48.233172] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.375 [2024-11-18 12:05:48.233465] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.375 [2024-11-18 12:05:48.233769] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.375 [2024-11-18 12:05:48.233802] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.375 [2024-11-18 12:05:48.233825] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.375 [2024-11-18 12:05:48.233848] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.375 [2024-11-18 12:05:48.247160] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.375 [2024-11-18 12:05:48.247616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.375 [2024-11-18 12:05:48.247659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.375 [2024-11-18 12:05:48.247685] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.375 [2024-11-18 12:05:48.247972] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.375 [2024-11-18 12:05:48.248258] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.375 [2024-11-18 12:05:48.248291] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.375 [2024-11-18 12:05:48.248314] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.375 [2024-11-18 12:05:48.248337] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.634 [2024-11-18 12:05:48.261650] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.634 [2024-11-18 12:05:48.262180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.634 [2024-11-18 12:05:48.262251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.634 [2024-11-18 12:05:48.262278] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.634 [2024-11-18 12:05:48.262581] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.634 [2024-11-18 12:05:48.262867] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.634 [2024-11-18 12:05:48.262900] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.634 [2024-11-18 12:05:48.262924] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.634 [2024-11-18 12:05:48.262947] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.634 [2024-11-18 12:05:48.276282] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.634 [2024-11-18 12:05:48.276745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.634 [2024-11-18 12:05:48.276788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.634 [2024-11-18 12:05:48.276815] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.634 [2024-11-18 12:05:48.277101] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.634 [2024-11-18 12:05:48.277395] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.634 [2024-11-18 12:05:48.277428] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.634 [2024-11-18 12:05:48.277451] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.634 [2024-11-18 12:05:48.277475] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.634 [2024-11-18 12:05:48.290816] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.634 [2024-11-18 12:05:48.291295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.634 [2024-11-18 12:05:48.291337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.634 [2024-11-18 12:05:48.291363] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.634 [2024-11-18 12:05:48.291662] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.634 [2024-11-18 12:05:48.291949] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.634 [2024-11-18 12:05:48.291982] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.634 [2024-11-18 12:05:48.292006] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.634 [2024-11-18 12:05:48.292029] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.634 [2024-11-18 12:05:48.305320] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.634 [2024-11-18 12:05:48.305784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.634 [2024-11-18 12:05:48.305827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.634 [2024-11-18 12:05:48.305854] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.634 [2024-11-18 12:05:48.306140] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.634 [2024-11-18 12:05:48.306430] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.634 [2024-11-18 12:05:48.306462] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.634 [2024-11-18 12:05:48.306484] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.634 [2024-11-18 12:05:48.306534] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.634 [2024-11-18 12:05:48.319854] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.634 [2024-11-18 12:05:48.320312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.634 [2024-11-18 12:05:48.320353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.634 [2024-11-18 12:05:48.320379] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.634 [2024-11-18 12:05:48.320678] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.634 [2024-11-18 12:05:48.320966] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.634 [2024-11-18 12:05:48.320998] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.634 [2024-11-18 12:05:48.321027] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.634 [2024-11-18 12:05:48.321050] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.634 [2024-11-18 12:05:48.334360] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.634 [2024-11-18 12:05:48.334878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.634 [2024-11-18 12:05:48.334919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.634 [2024-11-18 12:05:48.334946] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.634 [2024-11-18 12:05:48.335230] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.634 [2024-11-18 12:05:48.335529] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.634 [2024-11-18 12:05:48.335561] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.634 [2024-11-18 12:05:48.335585] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.634 [2024-11-18 12:05:48.335607] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.634 [2024-11-18 12:05:48.348913] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.634 [2024-11-18 12:05:48.349375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.634 [2024-11-18 12:05:48.349416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.634 [2024-11-18 12:05:48.349443] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.634 [2024-11-18 12:05:48.349739] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.634 [2024-11-18 12:05:48.350027] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.634 [2024-11-18 12:05:48.350058] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.634 [2024-11-18 12:05:48.350122] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.634 [2024-11-18 12:05:48.350146] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.634 [2024-11-18 12:05:48.363444] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.634 [2024-11-18 12:05:48.363934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.634 [2024-11-18 12:05:48.363977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.634 [2024-11-18 12:05:48.364004] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.634 [2024-11-18 12:05:48.364289] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.634 [2024-11-18 12:05:48.364590] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.634 [2024-11-18 12:05:48.364623] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.634 [2024-11-18 12:05:48.364645] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.634 [2024-11-18 12:05:48.364668] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.634 [2024-11-18 12:05:48.378028] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:22.634 [2024-11-18 12:05:48.378481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.635 [2024-11-18 12:05:48.378535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:22.635 [2024-11-18 12:05:48.378562] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:22.635 [2024-11-18 12:05:48.378848] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:22.635 [2024-11-18 12:05:48.379161] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:22.635 [2024-11-18 12:05:48.379194] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:22.635 [2024-11-18 12:05:48.379217] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:22.635 [2024-11-18 12:05:48.379239] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:22.635 [2024-11-18 12:05:48.392601] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:22.635 [2024-11-18 12:05:48.393061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.635 [2024-11-18 12:05:48.393103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:22.635 [2024-11-18 12:05:48.393130] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:22.635 [2024-11-18 12:05:48.393416] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:22.635 [2024-11-18 12:05:48.393717] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:22.635 [2024-11-18 12:05:48.393750] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:22.635 [2024-11-18 12:05:48.393773] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:22.635 [2024-11-18 12:05:48.393794] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:22.635 [2024-11-18 12:05:48.407113] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:22.635 [2024-11-18 12:05:48.407588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.635 [2024-11-18 12:05:48.407630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:22.635 [2024-11-18 12:05:48.407658] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:22.635 [2024-11-18 12:05:48.407944] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:22.635 [2024-11-18 12:05:48.408232] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:22.635 [2024-11-18 12:05:48.408263] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:22.635 [2024-11-18 12:05:48.408287] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:22.635 [2024-11-18 12:05:48.408309] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:22.635 [2024-11-18 12:05:48.421626] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:22.635 [2024-11-18 12:05:48.422074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.635 [2024-11-18 12:05:48.422121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:22.635 [2024-11-18 12:05:48.422149] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:22.635 [2024-11-18 12:05:48.422435] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:22.635 [2024-11-18 12:05:48.422733] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:22.635 [2024-11-18 12:05:48.422765] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:22.635 [2024-11-18 12:05:48.422789] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:22.635 [2024-11-18 12:05:48.422811] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:22.635 [2024-11-18 12:05:48.436132] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:22.635 [2024-11-18 12:05:48.436572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.635 [2024-11-18 12:05:48.436614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:22.635 [2024-11-18 12:05:48.436641] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:22.635 [2024-11-18 12:05:48.436927] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:22.635 [2024-11-18 12:05:48.437216] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:22.635 [2024-11-18 12:05:48.437249] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:22.635 [2024-11-18 12:05:48.437272] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:22.635 [2024-11-18 12:05:48.437295] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:22.635 [2024-11-18 12:05:48.450640] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:22.635 [2024-11-18 12:05:48.451114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.635 [2024-11-18 12:05:48.451156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:22.635 [2024-11-18 12:05:48.451183] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:22.635 [2024-11-18 12:05:48.451469] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:22.635 [2024-11-18 12:05:48.451769] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:22.635 [2024-11-18 12:05:48.451803] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:22.635 [2024-11-18 12:05:48.451827] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:22.635 [2024-11-18 12:05:48.451850] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:22.635 [2024-11-18 12:05:48.465185] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:22.635 [2024-11-18 12:05:48.465651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.635 [2024-11-18 12:05:48.465693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:22.635 [2024-11-18 12:05:48.465719] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:22.635 [2024-11-18 12:05:48.466011] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:22.635 [2024-11-18 12:05:48.466300] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:22.635 [2024-11-18 12:05:48.466333] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:22.635 [2024-11-18 12:05:48.466356] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:22.635 [2024-11-18 12:05:48.466378] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:22.635 [2024-11-18 12:05:48.479741] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:22.635 [2024-11-18 12:05:48.480203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.635 [2024-11-18 12:05:48.480246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:22.635 [2024-11-18 12:05:48.480273] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:22.635 [2024-11-18 12:05:48.480573] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:22.635 [2024-11-18 12:05:48.480859] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:22.635 [2024-11-18 12:05:48.480891] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:22.635 [2024-11-18 12:05:48.480915] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:22.635 [2024-11-18 12:05:48.480937] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:22.635 [2024-11-18 12:05:48.494229] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:22.635 [2024-11-18 12:05:48.494667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.635 [2024-11-18 12:05:48.494710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:22.635 [2024-11-18 12:05:48.494737] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:22.635 [2024-11-18 12:05:48.495022] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:22.635 [2024-11-18 12:05:48.495310] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:22.635 [2024-11-18 12:05:48.495343] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:22.635 [2024-11-18 12:05:48.495365] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:22.635 [2024-11-18 12:05:48.495388] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:22.635 [2024-11-18 12:05:48.508719] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:22.635 [2024-11-18 12:05:48.509170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.635 [2024-11-18 12:05:48.509212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:22.635 [2024-11-18 12:05:48.509239] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:22.635 [2024-11-18 12:05:48.509538] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:22.635 [2024-11-18 12:05:48.509827] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:22.635 [2024-11-18 12:05:48.509865] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:22.636 [2024-11-18 12:05:48.509889] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:22.636 [2024-11-18 12:05:48.509912] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:22.895 [2024-11-18 12:05:48.523251] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:22.895 [2024-11-18 12:05:48.523715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.895 [2024-11-18 12:05:48.523759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:22.895 [2024-11-18 12:05:48.523785] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:22.895 [2024-11-18 12:05:48.524072] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:22.895 [2024-11-18 12:05:48.524361] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:22.895 [2024-11-18 12:05:48.524403] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:22.895 [2024-11-18 12:05:48.524425] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:22.895 [2024-11-18 12:05:48.524447] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:22.895 [2024-11-18 12:05:48.537845] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:22.895 [2024-11-18 12:05:48.538308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.895 [2024-11-18 12:05:48.538349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:22.895 [2024-11-18 12:05:48.538376] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:22.895 [2024-11-18 12:05:48.538672] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:22.895 [2024-11-18 12:05:48.538960] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:22.895 [2024-11-18 12:05:48.538992] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:22.895 [2024-11-18 12:05:48.539015] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:22.895 [2024-11-18 12:05:48.539037] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:22.895 [2024-11-18 12:05:48.552345] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:22.895 [2024-11-18 12:05:48.552765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.895 [2024-11-18 12:05:48.552806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:22.895 [2024-11-18 12:05:48.552833] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:22.895 [2024-11-18 12:05:48.553118] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:22.895 [2024-11-18 12:05:48.553405] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:22.895 [2024-11-18 12:05:48.553438] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:22.895 [2024-11-18 12:05:48.553462] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:22.895 [2024-11-18 12:05:48.553501] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:22.895 [2024-11-18 12:05:48.566835] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:22.895 [2024-11-18 12:05:48.567304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.895 [2024-11-18 12:05:48.567346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:22.895 [2024-11-18 12:05:48.567372] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:22.895 [2024-11-18 12:05:48.567673] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:22.895 [2024-11-18 12:05:48.567960] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:22.895 [2024-11-18 12:05:48.567993] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:22.895 [2024-11-18 12:05:48.568016] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:22.895 [2024-11-18 12:05:48.568039] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:22.895 [2024-11-18 12:05:48.581380] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:22.895 [2024-11-18 12:05:48.581843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.895 [2024-11-18 12:05:48.581887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:22.895 [2024-11-18 12:05:48.581914] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:22.895 [2024-11-18 12:05:48.582200] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:22.895 [2024-11-18 12:05:48.582501] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:22.895 [2024-11-18 12:05:48.582535] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:22.895 [2024-11-18 12:05:48.582559] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:22.895 [2024-11-18 12:05:48.582581] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:22.895 [2024-11-18 12:05:48.595894] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:22.895 [2024-11-18 12:05:48.596332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.895 [2024-11-18 12:05:48.596374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:22.895 [2024-11-18 12:05:48.596401] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:22.895 [2024-11-18 12:05:48.596703] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:22.895 [2024-11-18 12:05:48.596989] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:22.895 [2024-11-18 12:05:48.597022] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:22.895 [2024-11-18 12:05:48.597045] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:22.895 [2024-11-18 12:05:48.597068] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:22.895 [2024-11-18 12:05:48.610367] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:22.895 [2024-11-18 12:05:48.610810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.895 [2024-11-18 12:05:48.610852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:22.895 [2024-11-18 12:05:48.610879] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:22.895 [2024-11-18 12:05:48.611165] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:22.895 [2024-11-18 12:05:48.611452] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:22.895 [2024-11-18 12:05:48.611485] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:22.895 [2024-11-18 12:05:48.611522] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:22.895 [2024-11-18 12:05:48.611546] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:22.895 [2024-11-18 12:05:48.624868] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:22.895 [2024-11-18 12:05:48.625327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.896 [2024-11-18 12:05:48.625368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:22.896 [2024-11-18 12:05:48.625394] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:22.896 [2024-11-18 12:05:48.625691] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:22.896 [2024-11-18 12:05:48.625978] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:22.896 [2024-11-18 12:05:48.626011] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:22.896 [2024-11-18 12:05:48.626036] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:22.896 [2024-11-18 12:05:48.626060] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:22.896 [2024-11-18 12:05:48.639364] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:22.896 [2024-11-18 12:05:48.639839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.896 [2024-11-18 12:05:48.639881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:22.896 [2024-11-18 12:05:48.639908] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:22.896 [2024-11-18 12:05:48.640193] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:22.896 [2024-11-18 12:05:48.640480] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:22.896 [2024-11-18 12:05:48.640524] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:22.896 [2024-11-18 12:05:48.640548] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:22.896 [2024-11-18 12:05:48.640571] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:22.896 [2024-11-18 12:05:48.653869] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:22.896 [2024-11-18 12:05:48.654316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.896 [2024-11-18 12:05:48.654358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:22.896 [2024-11-18 12:05:48.654390] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:22.896 [2024-11-18 12:05:48.654692] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:22.896 [2024-11-18 12:05:48.654980] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:22.896 [2024-11-18 12:05:48.655014] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:22.896 [2024-11-18 12:05:48.655037] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:22.896 [2024-11-18 12:05:48.655060] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:22.896 [2024-11-18 12:05:48.668368] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:22.896 [2024-11-18 12:05:48.668823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.896 [2024-11-18 12:05:48.668865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:22.896 [2024-11-18 12:05:48.668891] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:22.896 [2024-11-18 12:05:48.669177] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:22.896 [2024-11-18 12:05:48.669464] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:22.896 [2024-11-18 12:05:48.669510] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:22.896 [2024-11-18 12:05:48.669536] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:22.896 [2024-11-18 12:05:48.669560] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:22.896 [2024-11-18 12:05:48.682888] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:22.896 [2024-11-18 12:05:48.683353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.896 [2024-11-18 12:05:48.683395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:22.896 [2024-11-18 12:05:48.683422] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:22.896 [2024-11-18 12:05:48.683722] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:22.896 [2024-11-18 12:05:48.684009] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:22.896 [2024-11-18 12:05:48.684042] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:22.896 [2024-11-18 12:05:48.684065] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:22.896 [2024-11-18 12:05:48.684088] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:22.896 [2024-11-18 12:05:48.697386] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:22.896 [2024-11-18 12:05:48.697859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.896 [2024-11-18 12:05:48.697901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:22.896 [2024-11-18 12:05:48.697928] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:22.896 [2024-11-18 12:05:48.698214] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:22.896 [2024-11-18 12:05:48.698524] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:22.896 [2024-11-18 12:05:48.698558] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:22.896 [2024-11-18 12:05:48.698582] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:22.896 [2024-11-18 12:05:48.698605] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:22.896 [2024-11-18 12:05:48.711900] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:22.896 [2024-11-18 12:05:48.712344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.896 [2024-11-18 12:05:48.712388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:22.896 [2024-11-18 12:05:48.712414] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:22.896 [2024-11-18 12:05:48.712723] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:22.896 [2024-11-18 12:05:48.713011] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:22.896 [2024-11-18 12:05:48.713044] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:22.896 [2024-11-18 12:05:48.713066] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:22.896 [2024-11-18 12:05:48.713089] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:22.896 [2024-11-18 12:05:48.726396] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:22.896 [2024-11-18 12:05:48.726882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.896 [2024-11-18 12:05:48.726925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:22.896 [2024-11-18 12:05:48.726951] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:22.896 [2024-11-18 12:05:48.727237] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:22.896 [2024-11-18 12:05:48.727537] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:22.896 [2024-11-18 12:05:48.727570] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:22.896 [2024-11-18 12:05:48.727593] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:22.896 [2024-11-18 12:05:48.727616] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:22.896 [2024-11-18 12:05:48.740907] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:22.896 [2024-11-18 12:05:48.741357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.896 [2024-11-18 12:05:48.741399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:22.896 [2024-11-18 12:05:48.741426] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:22.896 [2024-11-18 12:05:48.741727] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:22.896 [2024-11-18 12:05:48.742015] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:22.896 [2024-11-18 12:05:48.742054] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:22.896 [2024-11-18 12:05:48.742078] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:22.896 [2024-11-18 12:05:48.742101] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:22.896 [2024-11-18 12:05:48.755415] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:22.896 [2024-11-18 12:05:48.755889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.896 [2024-11-18 12:05:48.755931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:22.896 [2024-11-18 12:05:48.755958] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:22.896 [2024-11-18 12:05:48.756243] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:22.896 [2024-11-18 12:05:48.756545] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:22.896 [2024-11-18 12:05:48.756578] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:22.896 [2024-11-18 12:05:48.756601] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:22.897 [2024-11-18 12:05:48.756624] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:22.897 [2024-11-18 12:05:48.769928] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.897 [2024-11-18 12:05:48.770375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.897 [2024-11-18 12:05:48.770431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.897 [2024-11-18 12:05:48.770458] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.897 [2024-11-18 12:05:48.770758] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.897 [2024-11-18 12:05:48.771045] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.897 [2024-11-18 12:05:48.771077] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.897 [2024-11-18 12:05:48.771099] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.897 [2024-11-18 12:05:48.771121] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.156 [2024-11-18 12:05:48.784466] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.156 [2024-11-18 12:05:48.784904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.156 [2024-11-18 12:05:48.784947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.156 [2024-11-18 12:05:48.784974] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.156 [2024-11-18 12:05:48.785261] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.156 [2024-11-18 12:05:48.785571] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.156 [2024-11-18 12:05:48.785604] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.156 [2024-11-18 12:05:48.785629] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.156 [2024-11-18 12:05:48.785658] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.156 [2024-11-18 12:05:48.798956] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.156 [2024-11-18 12:05:48.799392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.156 [2024-11-18 12:05:48.799437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.156 [2024-11-18 12:05:48.799464] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.156 [2024-11-18 12:05:48.799763] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.156 [2024-11-18 12:05:48.800050] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.156 [2024-11-18 12:05:48.800083] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.156 [2024-11-18 12:05:48.800105] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.156 [2024-11-18 12:05:48.800127] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.156 [2024-11-18 12:05:48.813415] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.156 [2024-11-18 12:05:48.813886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.156 [2024-11-18 12:05:48.813928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.156 [2024-11-18 12:05:48.813955] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.156 [2024-11-18 12:05:48.814241] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.156 [2024-11-18 12:05:48.814544] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.156 [2024-11-18 12:05:48.814578] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.156 [2024-11-18 12:05:48.814602] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.156 [2024-11-18 12:05:48.814624] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.156 [2024-11-18 12:05:48.827925] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.156 [2024-11-18 12:05:48.828386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.156 [2024-11-18 12:05:48.828429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.156 [2024-11-18 12:05:48.828457] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.156 [2024-11-18 12:05:48.828757] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.156 [2024-11-18 12:05:48.829044] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.156 [2024-11-18 12:05:48.829078] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.156 [2024-11-18 12:05:48.829101] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.156 [2024-11-18 12:05:48.829125] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.156 [2024-11-18 12:05:48.842391] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.156 [2024-11-18 12:05:48.842860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.156 [2024-11-18 12:05:48.842902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.156 [2024-11-18 12:05:48.842928] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.156 [2024-11-18 12:05:48.843214] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.156 [2024-11-18 12:05:48.843516] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.156 [2024-11-18 12:05:48.843558] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.156 [2024-11-18 12:05:48.843581] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.156 [2024-11-18 12:05:48.843604] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.156 [2024-11-18 12:05:48.857037] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.156 [2024-11-18 12:05:48.857467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.156 [2024-11-18 12:05:48.857521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.156 [2024-11-18 12:05:48.857551] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.156 [2024-11-18 12:05:48.857837] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.156 [2024-11-18 12:05:48.858126] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.156 [2024-11-18 12:05:48.858158] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.156 [2024-11-18 12:05:48.858181] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.156 [2024-11-18 12:05:48.858204] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.156 [2024-11-18 12:05:48.871502] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.156 [2024-11-18 12:05:48.871971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.156 [2024-11-18 12:05:48.872013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.156 [2024-11-18 12:05:48.872040] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.156 [2024-11-18 12:05:48.872326] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.156 [2024-11-18 12:05:48.872627] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.156 [2024-11-18 12:05:48.872660] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.156 [2024-11-18 12:05:48.872683] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.156 [2024-11-18 12:05:48.872706] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.156 [2024-11-18 12:05:48.886022] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.156 [2024-11-18 12:05:48.886449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.156 [2024-11-18 12:05:48.886505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.156 [2024-11-18 12:05:48.886540] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.156 [2024-11-18 12:05:48.886828] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.156 [2024-11-18 12:05:48.887117] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.156 [2024-11-18 12:05:48.887149] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.156 [2024-11-18 12:05:48.887172] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.156 [2024-11-18 12:05:48.887195] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.156 [2024-11-18 12:05:48.900505] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.157 [2024-11-18 12:05:48.900957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.157 [2024-11-18 12:05:48.900998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.157 [2024-11-18 12:05:48.901025] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.157 [2024-11-18 12:05:48.901311] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.157 [2024-11-18 12:05:48.901613] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.157 [2024-11-18 12:05:48.901647] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.157 [2024-11-18 12:05:48.901670] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.157 [2024-11-18 12:05:48.901693] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.157 [2024-11-18 12:05:48.914957] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.157 [2024-11-18 12:05:48.915420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.157 [2024-11-18 12:05:48.915461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.157 [2024-11-18 12:05:48.915487] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.157 [2024-11-18 12:05:48.915785] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.157 [2024-11-18 12:05:48.916073] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.157 [2024-11-18 12:05:48.916104] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.157 [2024-11-18 12:05:48.916127] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.157 [2024-11-18 12:05:48.916149] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.157 [2024-11-18 12:05:48.929441] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.157 [2024-11-18 12:05:48.929883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.157 [2024-11-18 12:05:48.929926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.157 [2024-11-18 12:05:48.929953] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.157 [2024-11-18 12:05:48.930238] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.157 [2024-11-18 12:05:48.930547] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.157 [2024-11-18 12:05:48.930581] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.157 [2024-11-18 12:05:48.930606] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.157 [2024-11-18 12:05:48.930629] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.157 [2024-11-18 12:05:48.943892] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.157 [2024-11-18 12:05:48.944420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.157 [2024-11-18 12:05:48.944462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.157 [2024-11-18 12:05:48.944498] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.157 [2024-11-18 12:05:48.944786] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.157 [2024-11-18 12:05:48.945074] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.157 [2024-11-18 12:05:48.945106] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.157 [2024-11-18 12:05:48.945129] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.157 [2024-11-18 12:05:48.945152] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.157 [2024-11-18 12:05:48.958453] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.157 [2024-11-18 12:05:48.958996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.157 [2024-11-18 12:05:48.959055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.157 [2024-11-18 12:05:48.959081] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.157 [2024-11-18 12:05:48.959365] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.157 [2024-11-18 12:05:48.959667] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.157 [2024-11-18 12:05:48.959701] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.157 [2024-11-18 12:05:48.959724] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.157 [2024-11-18 12:05:48.959746] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.157 [2024-11-18 12:05:48.973034] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.157 [2024-11-18 12:05:48.973482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.157 [2024-11-18 12:05:48.973532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.157 [2024-11-18 12:05:48.973559] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.157 [2024-11-18 12:05:48.973859] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.157 [2024-11-18 12:05:48.974148] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.157 [2024-11-18 12:05:48.974180] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.157 [2024-11-18 12:05:48.974208] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.157 [2024-11-18 12:05:48.974231] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.157 [2024-11-18 12:05:48.987628] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.157 [2024-11-18 12:05:48.988166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.157 [2024-11-18 12:05:48.988226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.157 [2024-11-18 12:05:48.988254] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.157 [2024-11-18 12:05:48.988552] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.157 [2024-11-18 12:05:48.988846] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.157 [2024-11-18 12:05:48.988878] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.157 [2024-11-18 12:05:48.988901] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.157 [2024-11-18 12:05:48.988923] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.157 [2024-11-18 12:05:49.002269] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.157 [2024-11-18 12:05:49.002687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.157 [2024-11-18 12:05:49.002730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.157 [2024-11-18 12:05:49.002758] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.157 [2024-11-18 12:05:49.003050] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.157 [2024-11-18 12:05:49.003339] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.157 [2024-11-18 12:05:49.003372] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.157 [2024-11-18 12:05:49.003395] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.157 [2024-11-18 12:05:49.003419] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.157 [2024-11-18 12:05:49.016845] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.157 [2024-11-18 12:05:49.017314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.157 [2024-11-18 12:05:49.017373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.157 [2024-11-18 12:05:49.017400] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.157 [2024-11-18 12:05:49.017700] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.157 [2024-11-18 12:05:49.017990] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.157 [2024-11-18 12:05:49.018023] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.157 [2024-11-18 12:05:49.018046] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.157 [2024-11-18 12:05:49.018070] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.157 [2024-11-18 12:05:49.031334] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.157 [2024-11-18 12:05:49.031830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.157 [2024-11-18 12:05:49.031873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.157 [2024-11-18 12:05:49.031901] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.157 [2024-11-18 12:05:49.032190] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.157 [2024-11-18 12:05:49.032508] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.157 [2024-11-18 12:05:49.032541] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.157 [2024-11-18 12:05:49.032564] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.158 [2024-11-18 12:05:49.032587] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.425 [2024-11-18 12:05:49.045922] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.425 [2024-11-18 12:05:49.046363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.425 [2024-11-18 12:05:49.046405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.425 [2024-11-18 12:05:49.046431] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.425 [2024-11-18 12:05:49.046730] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.425 [2024-11-18 12:05:49.047019] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.425 [2024-11-18 12:05:49.047051] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.426 [2024-11-18 12:05:49.047074] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.426 [2024-11-18 12:05:49.047095] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.426 [2024-11-18 12:05:49.060554] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.426 [2024-11-18 12:05:49.061018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.426 [2024-11-18 12:05:49.061060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.426 [2024-11-18 12:05:49.061086] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.426 [2024-11-18 12:05:49.061372] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.426 [2024-11-18 12:05:49.061672] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.426 [2024-11-18 12:05:49.061705] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.426 [2024-11-18 12:05:49.061728] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.426 [2024-11-18 12:05:49.061758] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.426 [2024-11-18 12:05:49.075199] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.426 [2024-11-18 12:05:49.075673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.426 [2024-11-18 12:05:49.075720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.426 [2024-11-18 12:05:49.075748] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.426 [2024-11-18 12:05:49.076035] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.426 [2024-11-18 12:05:49.076323] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.426 [2024-11-18 12:05:49.076356] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.426 [2024-11-18 12:05:49.076379] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.426 [2024-11-18 12:05:49.076402] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.426 3378.00 IOPS, 13.20 MiB/s [2024-11-18T11:05:49.311Z] [2024-11-18 12:05:49.089749] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.426 [2024-11-18 12:05:49.090203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.426 [2024-11-18 12:05:49.090248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.426 [2024-11-18 12:05:49.090276] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.427 [2024-11-18 12:05:49.090576] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.427 [2024-11-18 12:05:49.090866] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.427 [2024-11-18 12:05:49.090897] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.427 [2024-11-18 12:05:49.090921] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.427 [2024-11-18 12:05:49.090943] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.427 [2024-11-18 12:05:49.104364] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.427 [2024-11-18 12:05:49.104844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.427 [2024-11-18 12:05:49.104886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.427 [2024-11-18 12:05:49.104913] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.427 [2024-11-18 12:05:49.105200] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.427 [2024-11-18 12:05:49.105500] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.427 [2024-11-18 12:05:49.105534] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.428 [2024-11-18 12:05:49.105557] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.428 [2024-11-18 12:05:49.105579] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.428 [2024-11-18 12:05:49.119003] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.429 [2024-11-18 12:05:49.119430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.429 [2024-11-18 12:05:49.119472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.429 [2024-11-18 12:05:49.119513] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.429 [2024-11-18 12:05:49.119808] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.429 [2024-11-18 12:05:49.120097] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.429 [2024-11-18 12:05:49.120128] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.429 [2024-11-18 12:05:49.120151] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.429 [2024-11-18 12:05:49.120173] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.429 [2024-11-18 12:05:49.133577] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.429 [2024-11-18 12:05:49.133987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.429 [2024-11-18 12:05:49.134029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.430 [2024-11-18 12:05:49.134056] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.430 [2024-11-18 12:05:49.134345] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.430 [2024-11-18 12:05:49.134647] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.430 [2024-11-18 12:05:49.134681] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.430 [2024-11-18 12:05:49.134705] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.430 [2024-11-18 12:05:49.134728] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.430 [2024-11-18 12:05:49.147945] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.430 [2024-11-18 12:05:49.148418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.430 [2024-11-18 12:05:49.148460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.430 [2024-11-18 12:05:49.148487] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.430 [2024-11-18 12:05:49.148792] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.430 [2024-11-18 12:05:49.149085] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.430 [2024-11-18 12:05:49.149116] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.430 [2024-11-18 12:05:49.149139] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.430 [2024-11-18 12:05:49.149161] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.430 [2024-11-18 12:05:49.162579] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.430 [2024-11-18 12:05:49.163053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.430 [2024-11-18 12:05:49.163098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.430 [2024-11-18 12:05:49.163122] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.434 [2024-11-18 12:05:49.163421] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.434 [2024-11-18 12:05:49.163725] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.434 [2024-11-18 12:05:49.163755] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.435 [2024-11-18 12:05:49.163801] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.435 [2024-11-18 12:05:49.163820] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.435 [2024-11-18 12:05:49.176995] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.435 [2024-11-18 12:05:49.177540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.435 [2024-11-18 12:05:49.177587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.435 [2024-11-18 12:05:49.177612] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.435 [2024-11-18 12:05:49.177917] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.435 [2024-11-18 12:05:49.178209] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.435 [2024-11-18 12:05:49.178264] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.435 [2024-11-18 12:05:49.178287] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.435 [2024-11-18 12:05:49.178309] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.435 [2024-11-18 12:05:49.191508] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.435 [2024-11-18 12:05:49.191994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.435 [2024-11-18 12:05:49.192046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.435 [2024-11-18 12:05:49.192073] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.435 [2024-11-18 12:05:49.192361] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.435 [2024-11-18 12:05:49.192663] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.436 [2024-11-18 12:05:49.192695] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.436 [2024-11-18 12:05:49.192724] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.436 [2024-11-18 12:05:49.192746] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.436 [2024-11-18 12:05:49.206176] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.436 [2024-11-18 12:05:49.206639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.436 [2024-11-18 12:05:49.206691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.436 [2024-11-18 12:05:49.206718] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.436 [2024-11-18 12:05:49.207011] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.436 [2024-11-18 12:05:49.207301] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.436 [2024-11-18 12:05:49.207333] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.436 [2024-11-18 12:05:49.207362] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.436 [2024-11-18 12:05:49.207386] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.436 [2024-11-18 12:05:49.220816] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.436 [2024-11-18 12:05:49.221302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.436 [2024-11-18 12:05:49.221350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.436 [2024-11-18 12:05:49.221377] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.437 [2024-11-18 12:05:49.221677] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.437 [2024-11-18 12:05:49.221966] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.437 [2024-11-18 12:05:49.221997] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.437 [2024-11-18 12:05:49.222020] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.437 [2024-11-18 12:05:49.222042] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.437 [2024-11-18 12:05:49.235429] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.437 [2024-11-18 12:05:49.235892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.437 [2024-11-18 12:05:49.235942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.437 [2024-11-18 12:05:49.235969] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.437 [2024-11-18 12:05:49.236259] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.437 [2024-11-18 12:05:49.236560] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.437 [2024-11-18 12:05:49.236592] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.437 [2024-11-18 12:05:49.236618] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.437 [2024-11-18 12:05:49.236640] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.437 [2024-11-18 12:05:49.250056] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.437 [2024-11-18 12:05:49.250526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.437 [2024-11-18 12:05:49.250578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.437 [2024-11-18 12:05:49.250604] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.438 [2024-11-18 12:05:49.250891] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.438 [2024-11-18 12:05:49.251179] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.438 [2024-11-18 12:05:49.251211] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.438 [2024-11-18 12:05:49.251233] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.438 [2024-11-18 12:05:49.251256] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.438 [2024-11-18 12:05:49.264648] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.438 [2024-11-18 12:05:49.265131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.438 [2024-11-18 12:05:49.265179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.438 [2024-11-18 12:05:49.265206] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.440 [2024-11-18 12:05:49.265503] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.440 [2024-11-18 12:05:49.265794] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.440 [2024-11-18 12:05:49.265826] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.440 [2024-11-18 12:05:49.265850] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.440 [2024-11-18 12:05:49.265872] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.440 [2024-11-18 12:05:49.279256] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.440 [2024-11-18 12:05:49.279746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.440 [2024-11-18 12:05:49.279796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.440 [2024-11-18 12:05:49.279822] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.440 [2024-11-18 12:05:49.280108] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.440 [2024-11-18 12:05:49.280397] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.441 [2024-11-18 12:05:49.280429] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.441 [2024-11-18 12:05:49.280452] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.441 [2024-11-18 12:05:49.280474] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.441 [2024-11-18 12:05:49.293884] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.441 [2024-11-18 12:05:49.294372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.441 [2024-11-18 12:05:49.294415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.441 [2024-11-18 12:05:49.294442] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.441 [2024-11-18 12:05:49.294742] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.441 [2024-11-18 12:05:49.295031] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.441 [2024-11-18 12:05:49.295062] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.441 [2024-11-18 12:05:49.295085] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.441 [2024-11-18 12:05:49.295108] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.441 [2024-11-18 12:05:49.308501] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.441 [2024-11-18 12:05:49.308988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.703 [2024-11-18 12:05:49.309043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.703 [2024-11-18 12:05:49.309070] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.703 [2024-11-18 12:05:49.309359] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.703 [2024-11-18 12:05:49.309661] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.703 [2024-11-18 12:05:49.309693] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.703 [2024-11-18 12:05:49.309719] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.703 [2024-11-18 12:05:49.309740] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.703 [2024-11-18 12:05:49.323064] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.703 [2024-11-18 12:05:49.323563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.703 [2024-11-18 12:05:49.323612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.703 [2024-11-18 12:05:49.323639] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.703 [2024-11-18 12:05:49.323927] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.703 [2024-11-18 12:05:49.324217] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.703 [2024-11-18 12:05:49.324248] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.703 [2024-11-18 12:05:49.324271] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.703 [2024-11-18 12:05:49.324293] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.703 [2024-11-18 12:05:49.337654] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.703 [2024-11-18 12:05:49.338146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.703 [2024-11-18 12:05:49.338197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.703 [2024-11-18 12:05:49.338224] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.703 [2024-11-18 12:05:49.338522] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.703 [2024-11-18 12:05:49.338811] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.703 [2024-11-18 12:05:49.338842] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.703 [2024-11-18 12:05:49.338866] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.703 [2024-11-18 12:05:49.338889] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.703 [2024-11-18 12:05:49.352234] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.703 [2024-11-18 12:05:49.352734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.703 [2024-11-18 12:05:49.352784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.703 [2024-11-18 12:05:49.352811] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.703 [2024-11-18 12:05:49.353103] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.703 [2024-11-18 12:05:49.353392] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.703 [2024-11-18 12:05:49.353424] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.703 [2024-11-18 12:05:49.353447] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.703 [2024-11-18 12:05:49.353470] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.703 [2024-11-18 12:05:49.366831] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.703 [2024-11-18 12:05:49.367288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.703 [2024-11-18 12:05:49.367338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.703 [2024-11-18 12:05:49.367364] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.703 [2024-11-18 12:05:49.367663] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.703 [2024-11-18 12:05:49.367951] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.703 [2024-11-18 12:05:49.367983] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.703 [2024-11-18 12:05:49.368006] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.703 [2024-11-18 12:05:49.368027] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.703 [2024-11-18 12:05:49.381356] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:23.704 [2024-11-18 12:05:49.381849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:23.704 [2024-11-18 12:05:49.381891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:23.704 [2024-11-18 12:05:49.381920] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:23.704 [2024-11-18 12:05:49.382206] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:23.704 [2024-11-18 12:05:49.382504] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:23.704 [2024-11-18 12:05:49.382536] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:23.704 [2024-11-18 12:05:49.382559] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:23.704 [2024-11-18 12:05:49.382597] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:23.704 [2024-11-18 12:05:49.395947] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:23.704 [2024-11-18 12:05:49.396401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:23.704 [2024-11-18 12:05:49.396452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:23.704 [2024-11-18 12:05:49.396479] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:23.704 [2024-11-18 12:05:49.396776] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:23.704 [2024-11-18 12:05:49.397064] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:23.704 [2024-11-18 12:05:49.397102] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:23.704 [2024-11-18 12:05:49.397135] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:23.704 [2024-11-18 12:05:49.397157] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:23.704 [2024-11-18 12:05:49.410508] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:23.704 [2024-11-18 12:05:49.410977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:23.704 [2024-11-18 12:05:49.411025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:23.704 [2024-11-18 12:05:49.411051] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:23.704 [2024-11-18 12:05:49.411336] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:23.704 [2024-11-18 12:05:49.411636] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:23.704 [2024-11-18 12:05:49.411669] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:23.704 [2024-11-18 12:05:49.411699] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:23.704 [2024-11-18 12:05:49.411721] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:23.704 [2024-11-18 12:05:49.425036] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:23.704 [2024-11-18 12:05:49.425495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:23.704 [2024-11-18 12:05:49.425546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:23.704 [2024-11-18 12:05:49.425579] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:23.704 [2024-11-18 12:05:49.425864] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:23.704 [2024-11-18 12:05:49.426151] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:23.704 [2024-11-18 12:05:49.426182] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:23.704 [2024-11-18 12:05:49.426205] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:23.704 [2024-11-18 12:05:49.426227] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:23.704 [2024-11-18 12:05:49.439542] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:23.704 [2024-11-18 12:05:49.440000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:23.704 [2024-11-18 12:05:49.440041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:23.704 [2024-11-18 12:05:49.440072] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:23.704 [2024-11-18 12:05:49.440357] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:23.704 [2024-11-18 12:05:49.440657] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:23.704 [2024-11-18 12:05:49.440689] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:23.704 [2024-11-18 12:05:49.440716] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:23.704 [2024-11-18 12:05:49.440743] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:23.704 [2024-11-18 12:05:49.454031] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:23.704 [2024-11-18 12:05:49.454543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:23.704 [2024-11-18 12:05:49.454595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:23.704 [2024-11-18 12:05:49.454622] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:23.704 [2024-11-18 12:05:49.454908] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:23.704 [2024-11-18 12:05:49.455197] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:23.704 [2024-11-18 12:05:49.455229] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:23.704 [2024-11-18 12:05:49.455252] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:23.704 [2024-11-18 12:05:49.455274] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:23.704 [2024-11-18 12:05:49.468583] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:23.704 [2024-11-18 12:05:49.469150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:23.704 [2024-11-18 12:05:49.469198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:23.704 [2024-11-18 12:05:49.469225] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:23.704 [2024-11-18 12:05:49.469520] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:23.704 [2024-11-18 12:05:49.469809] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:23.704 [2024-11-18 12:05:49.469840] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:23.704 [2024-11-18 12:05:49.469863] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:23.704 [2024-11-18 12:05:49.469885] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:23.704 [2024-11-18 12:05:49.483191] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:23.704 [2024-11-18 12:05:49.483658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:23.704 [2024-11-18 12:05:49.483708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:23.704 [2024-11-18 12:05:49.483735] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:23.704 [2024-11-18 12:05:49.484019] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:23.704 [2024-11-18 12:05:49.484307] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:23.704 [2024-11-18 12:05:49.484339] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:23.704 [2024-11-18 12:05:49.484363] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:23.704 [2024-11-18 12:05:49.484385] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:23.704 [2024-11-18 12:05:49.497689] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:23.704 [2024-11-18 12:05:49.498178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:23.704 [2024-11-18 12:05:49.498229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:23.704 [2024-11-18 12:05:49.498256] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:23.704 [2024-11-18 12:05:49.498554] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:23.704 [2024-11-18 12:05:49.498842] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:23.704 [2024-11-18 12:05:49.498874] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:23.704 [2024-11-18 12:05:49.498896] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:23.704 [2024-11-18 12:05:49.498919] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:23.704 [2024-11-18 12:05:49.512187] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:23.704 [2024-11-18 12:05:49.512634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:23.704 [2024-11-18 12:05:49.512684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:23.704 [2024-11-18 12:05:49.512712] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:23.704 [2024-11-18 12:05:49.512997] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:23.704 [2024-11-18 12:05:49.513284] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:23.704 [2024-11-18 12:05:49.513315] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:23.705 [2024-11-18 12:05:49.513339] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:23.705 [2024-11-18 12:05:49.513363] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:23.705 [2024-11-18 12:05:49.526671] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:23.705 [2024-11-18 12:05:49.527136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:23.705 [2024-11-18 12:05:49.527186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:23.705 [2024-11-18 12:05:49.527213] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:23.705 [2024-11-18 12:05:49.527509] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:23.705 [2024-11-18 12:05:49.527798] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:23.705 [2024-11-18 12:05:49.527829] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:23.705 [2024-11-18 12:05:49.527852] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:23.705 [2024-11-18 12:05:49.527875] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:23.705 [2024-11-18 12:05:49.541156] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:23.705 [2024-11-18 12:05:49.541601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:23.705 [2024-11-18 12:05:49.541642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:23.705 [2024-11-18 12:05:49.541676] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:23.705 [2024-11-18 12:05:49.541963] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:23.705 [2024-11-18 12:05:49.542251] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:23.705 [2024-11-18 12:05:49.542282] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:23.705 [2024-11-18 12:05:49.542306] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:23.705 [2024-11-18 12:05:49.542328] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:23.705 [2024-11-18 12:05:49.555649] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:23.705 [2024-11-18 12:05:49.556108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:23.705 [2024-11-18 12:05:49.556156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:23.705 [2024-11-18 12:05:49.556182] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:23.705 [2024-11-18 12:05:49.556467] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:23.705 [2024-11-18 12:05:49.556784] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:23.705 [2024-11-18 12:05:49.556815] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:23.705 [2024-11-18 12:05:49.556848] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:23.705 [2024-11-18 12:05:49.556870] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:23.705 [2024-11-18 12:05:49.570153] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:23.705 [2024-11-18 12:05:49.570606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:23.705 [2024-11-18 12:05:49.570657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:23.705 [2024-11-18 12:05:49.570684] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:23.705 [2024-11-18 12:05:49.570969] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:23.705 [2024-11-18 12:05:49.571256] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:23.705 [2024-11-18 12:05:49.571288] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:23.705 [2024-11-18 12:05:49.571311] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:23.705 [2024-11-18 12:05:49.571334] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:23.705 [2024-11-18 12:05:49.584691] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:23.705 [2024-11-18 12:05:49.585150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:23.705 [2024-11-18 12:05:49.585201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:23.705 [2024-11-18 12:05:49.585227] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:23.705 [2024-11-18 12:05:49.585528] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:23.705 [2024-11-18 12:05:49.585823] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:23.705 [2024-11-18 12:05:49.585855] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:23.705 [2024-11-18 12:05:49.585877] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:23.705 [2024-11-18 12:05:49.585899] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:23.964 [2024-11-18 12:05:49.599185] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:23.964 [2024-11-18 12:05:49.599662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:23.964 [2024-11-18 12:05:49.599711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:23.964 [2024-11-18 12:05:49.599737] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:23.964 [2024-11-18 12:05:49.600023] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:23.964 [2024-11-18 12:05:49.600311] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:23.964 [2024-11-18 12:05:49.600343] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:23.964 [2024-11-18 12:05:49.600366] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:23.964 [2024-11-18 12:05:49.600387] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:23.964 [2024-11-18 12:05:49.613660] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:23.964 [2024-11-18 12:05:49.614120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:23.964 [2024-11-18 12:05:49.614169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:23.964 [2024-11-18 12:05:49.614196] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:23.964 [2024-11-18 12:05:49.614481] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:23.964 [2024-11-18 12:05:49.614779] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:23.964 [2024-11-18 12:05:49.614812] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:23.964 [2024-11-18 12:05:49.614846] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:23.964 [2024-11-18 12:05:49.614868] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:23.964 [2024-11-18 12:05:49.628167] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:23.964 [2024-11-18 12:05:49.628624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:23.964 [2024-11-18 12:05:49.628675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:23.964 [2024-11-18 12:05:49.628701] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:23.964 [2024-11-18 12:05:49.628986] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:23.964 [2024-11-18 12:05:49.629273] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:23.964 [2024-11-18 12:05:49.629304] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:23.964 [2024-11-18 12:05:49.629333] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:23.964 [2024-11-18 12:05:49.629357] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:23.964 [2024-11-18 12:05:49.642638] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:23.964 [2024-11-18 12:05:49.643085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:23.964 [2024-11-18 12:05:49.643128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:23.964 [2024-11-18 12:05:49.643156] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:23.964 [2024-11-18 12:05:49.643443] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:23.964 [2024-11-18 12:05:49.643744] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:23.964 [2024-11-18 12:05:49.643777] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:23.964 [2024-11-18 12:05:49.643801] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:23.964 [2024-11-18 12:05:49.643824] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:23.964 [2024-11-18 12:05:49.657091] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:23.964 [2024-11-18 12:05:49.657543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:23.964 [2024-11-18 12:05:49.657585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:23.964 [2024-11-18 12:05:49.657612] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:23.964 [2024-11-18 12:05:49.657897] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:23.964 [2024-11-18 12:05:49.658185] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:23.964 [2024-11-18 12:05:49.658216] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:23.965 [2024-11-18 12:05:49.658239] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:23.965 [2024-11-18 12:05:49.658261] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:23.965 [2024-11-18 12:05:49.671553] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:23.965 [2024-11-18 12:05:49.671980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:23.965 [2024-11-18 12:05:49.672030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:23.965 [2024-11-18 12:05:49.672056] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:23.965 [2024-11-18 12:05:49.672342] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:23.965 [2024-11-18 12:05:49.672642] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:23.965 [2024-11-18 12:05:49.672674] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:23.965 [2024-11-18 12:05:49.672701] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:23.965 [2024-11-18 12:05:49.672728] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:23.965 [2024-11-18 12:05:49.686051] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:23.965 [2024-11-18 12:05:49.686525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:23.965 [2024-11-18 12:05:49.686577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:23.965 [2024-11-18 12:05:49.686603] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:23.965 [2024-11-18 12:05:49.686889] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:23.965 [2024-11-18 12:05:49.687176] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:23.965 [2024-11-18 12:05:49.687207] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:23.965 [2024-11-18 12:05:49.687230] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:23.965 [2024-11-18 12:05:49.687252] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:23.965 [2024-11-18 12:05:49.700545] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:23.965 [2024-11-18 12:05:49.701011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:23.965 [2024-11-18 12:05:49.701062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:23.965 [2024-11-18 12:05:49.701089] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:23.965 [2024-11-18 12:05:49.701373] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:23.965 [2024-11-18 12:05:49.701673] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:23.965 [2024-11-18 12:05:49.701705] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:23.965 [2024-11-18 12:05:49.701732] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:23.965 [2024-11-18 12:05:49.701753] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:23.965 [2024-11-18 12:05:49.715021] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:23.965 [2024-11-18 12:05:49.715467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:23.965 [2024-11-18 12:05:49.715527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:23.965 [2024-11-18 12:05:49.715555] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:23.965 [2024-11-18 12:05:49.715840] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:23.965 [2024-11-18 12:05:49.716128] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:23.965 [2024-11-18 12:05:49.716159] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:23.965 [2024-11-18 12:05:49.716182] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:23.965 [2024-11-18 12:05:49.716204] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:23.965 [2024-11-18 12:05:49.729507] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:23.965 [2024-11-18 12:05:49.729961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:23.965 [2024-11-18 12:05:49.730010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:23.965 [2024-11-18 12:05:49.730037] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:23.965 [2024-11-18 12:05:49.730321] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:23.965 [2024-11-18 12:05:49.730621] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:23.965 [2024-11-18 12:05:49.730654] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:23.965 [2024-11-18 12:05:49.730684] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:23.965 [2024-11-18 12:05:49.730706] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:23.965 [2024-11-18 12:05:49.744012] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:23.965 [2024-11-18 12:05:49.744455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:23.965 [2024-11-18 12:05:49.744517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:23.965 [2024-11-18 12:05:49.744555] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:23.965 [2024-11-18 12:05:49.744841] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:23.965 [2024-11-18 12:05:49.745129] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:23.965 [2024-11-18 12:05:49.745161] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:23.965 [2024-11-18 12:05:49.745184] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:23.965 [2024-11-18 12:05:49.745206] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:23.965 [2024-11-18 12:05:49.758514] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:23.965 [2024-11-18 12:05:49.759002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:23.965 [2024-11-18 12:05:49.759053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:23.965 [2024-11-18 12:05:49.759080] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:23.965 [2024-11-18 12:05:49.759366] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:23.965 [2024-11-18 12:05:49.759666] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:23.965 [2024-11-18 12:05:49.759698] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:23.965 [2024-11-18 12:05:49.759725] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:23.965 [2024-11-18 12:05:49.759747] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:23.965 [2024-11-18 12:05:49.773054] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.965 [2024-11-18 12:05:49.773521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.965 [2024-11-18 12:05:49.773574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.965 [2024-11-18 12:05:49.773606] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.965 [2024-11-18 12:05:49.773893] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.965 [2024-11-18 12:05:49.774180] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.965 [2024-11-18 12:05:49.774211] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.965 [2024-11-18 12:05:49.774233] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.965 [2024-11-18 12:05:49.774256] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.965 [2024-11-18 12:05:49.787578] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.965 [2024-11-18 12:05:49.788011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.965 [2024-11-18 12:05:49.788063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.965 [2024-11-18 12:05:49.788090] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.965 [2024-11-18 12:05:49.788376] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.965 [2024-11-18 12:05:49.788676] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.965 [2024-11-18 12:05:49.788709] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.965 [2024-11-18 12:05:49.788736] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.965 [2024-11-18 12:05:49.788758] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.965 [2024-11-18 12:05:49.802052] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.965 [2024-11-18 12:05:49.802521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.965 [2024-11-18 12:05:49.802573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.965 [2024-11-18 12:05:49.802614] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.965 [2024-11-18 12:05:49.802900] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.966 [2024-11-18 12:05:49.803188] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.966 [2024-11-18 12:05:49.803219] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.966 [2024-11-18 12:05:49.803242] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.966 [2024-11-18 12:05:49.803264] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.966 [2024-11-18 12:05:49.816604] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.966 [2024-11-18 12:05:49.817059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.966 [2024-11-18 12:05:49.817109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.966 [2024-11-18 12:05:49.817136] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.966 [2024-11-18 12:05:49.817420] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.966 [2024-11-18 12:05:49.817727] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.966 [2024-11-18 12:05:49.817760] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.966 [2024-11-18 12:05:49.817784] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.966 [2024-11-18 12:05:49.817807] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.966 [2024-11-18 12:05:49.831095] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.966 [2024-11-18 12:05:49.831544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.966 [2024-11-18 12:05:49.831593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.966 [2024-11-18 12:05:49.831620] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.966 [2024-11-18 12:05:49.831905] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.966 [2024-11-18 12:05:49.832192] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.966 [2024-11-18 12:05:49.832223] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.966 [2024-11-18 12:05:49.832247] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.966 [2024-11-18 12:05:49.832270] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.966 [2024-11-18 12:05:49.845535] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.966 [2024-11-18 12:05:49.845984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.966 [2024-11-18 12:05:49.846034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.966 [2024-11-18 12:05:49.846061] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.966 [2024-11-18 12:05:49.846346] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.966 [2024-11-18 12:05:49.846646] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.966 [2024-11-18 12:05:49.846678] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.966 [2024-11-18 12:05:49.846700] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.966 [2024-11-18 12:05:49.846723] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.225 [2024-11-18 12:05:49.860004] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.225 [2024-11-18 12:05:49.860470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.225 [2024-11-18 12:05:49.860528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.225 [2024-11-18 12:05:49.860555] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.225 [2024-11-18 12:05:49.860839] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.225 [2024-11-18 12:05:49.861126] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.225 [2024-11-18 12:05:49.861157] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.225 [2024-11-18 12:05:49.861185] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.225 [2024-11-18 12:05:49.861208] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.225 [2024-11-18 12:05:49.874539] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.225 [2024-11-18 12:05:49.874983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.225 [2024-11-18 12:05:49.875032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.225 [2024-11-18 12:05:49.875058] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.225 [2024-11-18 12:05:49.875344] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.225 [2024-11-18 12:05:49.875646] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.225 [2024-11-18 12:05:49.875679] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.225 [2024-11-18 12:05:49.875701] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.225 [2024-11-18 12:05:49.875723] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.225 [2024-11-18 12:05:49.889049] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.225 [2024-11-18 12:05:49.889504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.225 [2024-11-18 12:05:49.889546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.225 [2024-11-18 12:05:49.889572] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.225 [2024-11-18 12:05:49.889856] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.225 [2024-11-18 12:05:49.890143] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.225 [2024-11-18 12:05:49.890174] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.225 [2024-11-18 12:05:49.890198] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.225 [2024-11-18 12:05:49.890221] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.225 [2024-11-18 12:05:49.903512] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.225 [2024-11-18 12:05:49.903933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.225 [2024-11-18 12:05:49.903975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.225 [2024-11-18 12:05:49.904010] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.225 [2024-11-18 12:05:49.904294] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.225 [2024-11-18 12:05:49.904593] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.225 [2024-11-18 12:05:49.904634] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.225 [2024-11-18 12:05:49.904657] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.225 [2024-11-18 12:05:49.904680] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.225 [2024-11-18 12:05:49.917962] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.225 [2024-11-18 12:05:49.918429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.225 [2024-11-18 12:05:49.918478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.225 [2024-11-18 12:05:49.918516] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.225 [2024-11-18 12:05:49.918803] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.225 [2024-11-18 12:05:49.919090] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.225 [2024-11-18 12:05:49.919121] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.225 [2024-11-18 12:05:49.919144] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.225 [2024-11-18 12:05:49.919166] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.226 [2024-11-18 12:05:49.932445] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.226 [2024-11-18 12:05:49.932917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.226 [2024-11-18 12:05:49.932959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.226 [2024-11-18 12:05:49.932986] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.226 [2024-11-18 12:05:49.933270] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.226 [2024-11-18 12:05:49.933571] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.226 [2024-11-18 12:05:49.933603] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.226 [2024-11-18 12:05:49.933625] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.226 [2024-11-18 12:05:49.933647] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.226 [2024-11-18 12:05:49.946934] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.226 [2024-11-18 12:05:49.947376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.226 [2024-11-18 12:05:49.947417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.226 [2024-11-18 12:05:49.947444] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.226 [2024-11-18 12:05:49.947741] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.226 [2024-11-18 12:05:49.948028] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.226 [2024-11-18 12:05:49.948060] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.226 [2024-11-18 12:05:49.948084] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.226 [2024-11-18 12:05:49.948106] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.226 [2024-11-18 12:05:49.961366] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.226 [2024-11-18 12:05:49.961852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.226 [2024-11-18 12:05:49.961898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.226 [2024-11-18 12:05:49.961925] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.226 [2024-11-18 12:05:49.962210] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.226 [2024-11-18 12:05:49.962507] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.226 [2024-11-18 12:05:49.962539] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.226 [2024-11-18 12:05:49.962563] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.226 [2024-11-18 12:05:49.962585] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.226 [2024-11-18 12:05:49.975849] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.226 [2024-11-18 12:05:49.976382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.226 [2024-11-18 12:05:49.976431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.226 [2024-11-18 12:05:49.976458] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.226 [2024-11-18 12:05:49.976754] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.226 [2024-11-18 12:05:49.977040] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.226 [2024-11-18 12:05:49.977072] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.226 [2024-11-18 12:05:49.977094] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.226 [2024-11-18 12:05:49.977117] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.226 [2024-11-18 12:05:49.990417] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.226 [2024-11-18 12:05:49.990866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.226 [2024-11-18 12:05:49.990917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.226 [2024-11-18 12:05:49.990944] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.226 [2024-11-18 12:05:49.991230] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.226 [2024-11-18 12:05:49.991532] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.226 [2024-11-18 12:05:49.991563] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.226 [2024-11-18 12:05:49.991585] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.226 [2024-11-18 12:05:49.991607] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.226 [2024-11-18 12:05:50.004911] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.226 [2024-11-18 12:05:50.005360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.226 [2024-11-18 12:05:50.005409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.226 [2024-11-18 12:05:50.005437] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.226 [2024-11-18 12:05:50.005754] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.226 [2024-11-18 12:05:50.006066] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.226 [2024-11-18 12:05:50.006099] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.226 [2024-11-18 12:05:50.006123] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.226 [2024-11-18 12:05:50.006145] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.226 [2024-11-18 12:05:50.019441] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.226 [2024-11-18 12:05:50.019941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.226 [2024-11-18 12:05:50.019994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.226 [2024-11-18 12:05:50.020022] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.226 [2024-11-18 12:05:50.020310] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.226 [2024-11-18 12:05:50.020612] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.226 [2024-11-18 12:05:50.020645] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.226 [2024-11-18 12:05:50.020669] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.226 [2024-11-18 12:05:50.020692] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.226 [2024-11-18 12:05:50.034088] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.226 [2024-11-18 12:05:50.034554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.226 [2024-11-18 12:05:50.034606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.226 [2024-11-18 12:05:50.034632] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.226 [2024-11-18 12:05:50.034931] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.226 [2024-11-18 12:05:50.035229] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.226 [2024-11-18 12:05:50.035261] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.226 [2024-11-18 12:05:50.035284] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.226 [2024-11-18 12:05:50.035306] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.226 [2024-11-18 12:05:50.048806] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.226 [2024-11-18 12:05:50.049333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.226 [2024-11-18 12:05:50.049386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.226 [2024-11-18 12:05:50.049415] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.226 [2024-11-18 12:05:50.049717] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.226 [2024-11-18 12:05:50.050008] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.226 [2024-11-18 12:05:50.050046] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.226 [2024-11-18 12:05:50.050080] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.226 [2024-11-18 12:05:50.050103] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.226 [2024-11-18 12:05:50.063621] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.226 [2024-11-18 12:05:50.064117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.226 [2024-11-18 12:05:50.064167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.226 [2024-11-18 12:05:50.064195] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.226 [2024-11-18 12:05:50.064511] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.226 [2024-11-18 12:05:50.064810] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.226 [2024-11-18 12:05:50.064842] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.227 [2024-11-18 12:05:50.064868] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.227 [2024-11-18 12:05:50.064891] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.227 [2024-11-18 12:05:50.078205] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.227 [2024-11-18 12:05:50.078752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.227 [2024-11-18 12:05:50.078805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.227 [2024-11-18 12:05:50.078832] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.227 [2024-11-18 12:05:50.079152] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.227 [2024-11-18 12:05:50.079447] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.227 [2024-11-18 12:05:50.079480] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.227 [2024-11-18 12:05:50.079519] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.227 [2024-11-18 12:05:50.079545] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.227 2702.40 IOPS, 10.56 MiB/s [2024-11-18T11:05:50.112Z] [2024-11-18 12:05:50.092927] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.227 [2024-11-18 12:05:50.093401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.227 [2024-11-18 12:05:50.093451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.227 [2024-11-18 12:05:50.093478] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.227 [2024-11-18 12:05:50.093780] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.227 [2024-11-18 12:05:50.094071] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.227 [2024-11-18 12:05:50.094102] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.227 [2024-11-18 12:05:50.094132] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.227 [2024-11-18 12:05:50.094156] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.227 [2024-11-18 12:05:50.107636] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.227 [2024-11-18 12:05:50.108074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.227 [2024-11-18 12:05:50.108125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.227 [2024-11-18 12:05:50.108154] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.227 [2024-11-18 12:05:50.108455] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.227 [2024-11-18 12:05:50.108759] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.227 [2024-11-18 12:05:50.108793] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.227 [2024-11-18 12:05:50.108816] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.227 [2024-11-18 12:05:50.108839] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.486 [2024-11-18 12:05:50.122202] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.486 [2024-11-18 12:05:50.122709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.486 [2024-11-18 12:05:50.122761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.486 [2024-11-18 12:05:50.122788] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.486 [2024-11-18 12:05:50.123075] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.486 [2024-11-18 12:05:50.123363] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.486 [2024-11-18 12:05:50.123403] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.486 [2024-11-18 12:05:50.123450] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.486 [2024-11-18 12:05:50.123476] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.486 [2024-11-18 12:05:50.136955] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.486 [2024-11-18 12:05:50.137416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.486 [2024-11-18 12:05:50.137470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.486 [2024-11-18 12:05:50.137508] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.486 [2024-11-18 12:05:50.137827] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.486 [2024-11-18 12:05:50.138119] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.486 [2024-11-18 12:05:50.138151] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.486 [2024-11-18 12:05:50.138174] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.486 [2024-11-18 12:05:50.138197] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.486 [2024-11-18 12:05:50.151722] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.486 [2024-11-18 12:05:50.152246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.486 [2024-11-18 12:05:50.152297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.486 [2024-11-18 12:05:50.152325] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.486 [2024-11-18 12:05:50.152628] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.486 [2024-11-18 12:05:50.152932] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.486 [2024-11-18 12:05:50.152967] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.486 [2024-11-18 12:05:50.152998] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.486 [2024-11-18 12:05:50.153021] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.486 [2024-11-18 12:05:50.166502] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.486 [2024-11-18 12:05:50.167672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.486 [2024-11-18 12:05:50.167721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.486 [2024-11-18 12:05:50.167751] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.486 [2024-11-18 12:05:50.168051] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.486 [2024-11-18 12:05:50.168370] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.486 [2024-11-18 12:05:50.168405] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.486 [2024-11-18 12:05:50.168429] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.487 [2024-11-18 12:05:50.168453] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.487 [2024-11-18 12:05:50.181228] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.487 [2024-11-18 12:05:50.181699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.487 [2024-11-18 12:05:50.181745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.487 [2024-11-18 12:05:50.181773] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.487 [2024-11-18 12:05:50.182067] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.487 [2024-11-18 12:05:50.182371] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.487 [2024-11-18 12:05:50.182405] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.487 [2024-11-18 12:05:50.182431] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.487 [2024-11-18 12:05:50.182453] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.487 [2024-11-18 12:05:50.196033] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.487 [2024-11-18 12:05:50.196534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.487 [2024-11-18 12:05:50.196579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.487 [2024-11-18 12:05:50.196613] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.487 [2024-11-18 12:05:50.196903] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.487 [2024-11-18 12:05:50.197194] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.487 [2024-11-18 12:05:50.197225] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.487 [2024-11-18 12:05:50.197248] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.487 [2024-11-18 12:05:50.197270] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.487 [2024-11-18 12:05:50.210842] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.487 [2024-11-18 12:05:50.211350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.487 [2024-11-18 12:05:50.211420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.487 [2024-11-18 12:05:50.211447] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.487 [2024-11-18 12:05:50.211789] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.487 [2024-11-18 12:05:50.212078] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.487 [2024-11-18 12:05:50.212110] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.487 [2024-11-18 12:05:50.212150] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.487 [2024-11-18 12:05:50.212174] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.487 [2024-11-18 12:05:50.225381] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.487 [2024-11-18 12:05:50.225876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.487 [2024-11-18 12:05:50.225929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.487 [2024-11-18 12:05:50.225957] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.487 [2024-11-18 12:05:50.226246] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.487 [2024-11-18 12:05:50.226548] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.487 [2024-11-18 12:05:50.226580] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.487 [2024-11-18 12:05:50.226606] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.487 [2024-11-18 12:05:50.226629] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.487 [2024-11-18 12:05:50.240127] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.487 [2024-11-18 12:05:50.240564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.487 [2024-11-18 12:05:50.240616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.487 [2024-11-18 12:05:50.240644] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.487 [2024-11-18 12:05:50.240940] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.487 [2024-11-18 12:05:50.241253] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.487 [2024-11-18 12:05:50.241296] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.487 [2024-11-18 12:05:50.241319] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.487 [2024-11-18 12:05:50.241352] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.487 [2024-11-18 12:05:50.254735] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.487 [2024-11-18 12:05:50.255207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.487 [2024-11-18 12:05:50.255268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.487 [2024-11-18 12:05:50.255300] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.487 [2024-11-18 12:05:50.255600] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.487 [2024-11-18 12:05:50.255891] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.487 [2024-11-18 12:05:50.255923] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.487 [2024-11-18 12:05:50.255946] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.487 [2024-11-18 12:05:50.255968] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.487 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3137020 Killed "${NVMF_APP[@]}" "$@" 00:37:24.487 12:05:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:37:24.487 12:05:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:37:24.487 12:05:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:24.487 [2024-11-18 12:05:50.269232] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.487 12:05:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:24.487 12:05:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:24.487 [2024-11-18 12:05:50.269713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.487 [2024-11-18 12:05:50.269764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.487 [2024-11-18 12:05:50.269791] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.487 [2024-11-18 12:05:50.270080] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.487 [2024-11-18 12:05:50.270371] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.487 [2024-11-18 12:05:50.270402] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.487 [2024-11-18 12:05:50.270429] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:37:24.487 [2024-11-18 12:05:50.270451] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:24.487 12:05:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=3138234 00:37:24.487 12:05:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:37:24.487 12:05:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 3138234 00:37:24.487 12:05:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 3138234 ']' 00:37:24.487 12:05:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:24.487 12:05:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:24.487 12:05:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:24.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:37:24.487 12:05:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:24.487 12:05:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:24.487 [2024-11-18 12:05:50.283952] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.487 [2024-11-18 12:05:50.284416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.487 [2024-11-18 12:05:50.284457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.487 [2024-11-18 12:05:50.284484] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.487 [2024-11-18 12:05:50.284801] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.487 [2024-11-18 12:05:50.285101] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.487 [2024-11-18 12:05:50.285133] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.487 [2024-11-18 12:05:50.285157] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.487 [2024-11-18 12:05:50.285179] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.487 [2024-11-18 12:05:50.298521] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.487 [2024-11-18 12:05:50.298985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.487 [2024-11-18 12:05:50.299028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.488 [2024-11-18 12:05:50.299056] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.488 [2024-11-18 12:05:50.299345] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.488 [2024-11-18 12:05:50.299647] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.488 [2024-11-18 12:05:50.299679] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.488 [2024-11-18 12:05:50.299702] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.488 [2024-11-18 12:05:50.299725] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.488 [2024-11-18 12:05:50.313192] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.488 [2024-11-18 12:05:50.313894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.488 [2024-11-18 12:05:50.313941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.488 [2024-11-18 12:05:50.313968] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.488 [2024-11-18 12:05:50.314281] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.488 [2024-11-18 12:05:50.314568] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.488 [2024-11-18 12:05:50.314596] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.488 [2024-11-18 12:05:50.314617] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.488 [2024-11-18 12:05:50.314637] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.488 [2024-11-18 12:05:50.327337] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.488 [2024-11-18 12:05:50.327945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.488 [2024-11-18 12:05:50.327991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.488 [2024-11-18 12:05:50.328031] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.488 [2024-11-18 12:05:50.328335] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.488 [2024-11-18 12:05:50.328635] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.488 [2024-11-18 12:05:50.328664] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.488 [2024-11-18 12:05:50.328685] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.488 [2024-11-18 12:05:50.328706] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.488 [2024-11-18 12:05:50.341701] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.488 [2024-11-18 12:05:50.342180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.488 [2024-11-18 12:05:50.342219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.488 [2024-11-18 12:05:50.342243] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.488 [2024-11-18 12:05:50.342560] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.488 [2024-11-18 12:05:50.342822] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.488 [2024-11-18 12:05:50.342848] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.488 [2024-11-18 12:05:50.342867] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.488 [2024-11-18 12:05:50.342885] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.488 [2024-11-18 12:05:50.355776] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.488 [2024-11-18 12:05:50.356223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.488 [2024-11-18 12:05:50.356263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.488 [2024-11-18 12:05:50.356288] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.488 [2024-11-18 12:05:50.356594] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.488 [2024-11-18 12:05:50.356857] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.488 [2024-11-18 12:05:50.356883] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.488 [2024-11-18 12:05:50.356912] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.488 [2024-11-18 12:05:50.356931] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:24.488 [2024-11-18 12:05:50.366389] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:37:24.488 [2024-11-18 12:05:50.366520] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:24.488 [2024-11-18 12:05:50.370130] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.488 [2024-11-18 12:05:50.370625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.488 [2024-11-18 12:05:50.370666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.488 [2024-11-18 12:05:50.370691] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.488 [2024-11-18 12:05:50.371008] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.747 [2024-11-18 12:05:50.371317] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.747 [2024-11-18 12:05:50.371345] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.747 [2024-11-18 12:05:50.371365] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.747 [2024-11-18 12:05:50.371384] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.747 [2024-11-18 12:05:50.384155] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.747 [2024-11-18 12:05:50.384625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.747 [2024-11-18 12:05:50.384665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.747 [2024-11-18 12:05:50.384690] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.747 [2024-11-18 12:05:50.384981] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.747 [2024-11-18 12:05:50.385234] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.747 [2024-11-18 12:05:50.385262] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.747 [2024-11-18 12:05:50.385281] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.747 [2024-11-18 12:05:50.385298] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.747 [2024-11-18 12:05:50.398244] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.747 [2024-11-18 12:05:50.398780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.747 [2024-11-18 12:05:50.398821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.747 [2024-11-18 12:05:50.398847] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.748 [2024-11-18 12:05:50.399151] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.748 [2024-11-18 12:05:50.399414] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.748 [2024-11-18 12:05:50.399451] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.748 [2024-11-18 12:05:50.399472] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.748 [2024-11-18 12:05:50.399515] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.748 [2024-11-18 12:05:50.412357] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.748 [2024-11-18 12:05:50.412814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.748 [2024-11-18 12:05:50.412867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.748 [2024-11-18 12:05:50.412906] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.748 [2024-11-18 12:05:50.413196] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.748 [2024-11-18 12:05:50.413457] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.748 [2024-11-18 12:05:50.413512] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.748 [2024-11-18 12:05:50.413536] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.748 [2024-11-18 12:05:50.413557] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.748 [2024-11-18 12:05:50.426499] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.748 [2024-11-18 12:05:50.426990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.748 [2024-11-18 12:05:50.427034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.748 [2024-11-18 12:05:50.427059] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.748 [2024-11-18 12:05:50.427355] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.748 [2024-11-18 12:05:50.427653] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.748 [2024-11-18 12:05:50.427683] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.748 [2024-11-18 12:05:50.427704] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.748 [2024-11-18 12:05:50.427724] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.748 [2024-11-18 12:05:50.440669] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.748 [2024-11-18 12:05:50.441123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.748 [2024-11-18 12:05:50.441162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.748 [2024-11-18 12:05:50.441187] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.748 [2024-11-18 12:05:50.441501] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.748 [2024-11-18 12:05:50.441749] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.748 [2024-11-18 12:05:50.441783] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.748 [2024-11-18 12:05:50.441818] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.748 [2024-11-18 12:05:50.441841] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.748 [2024-11-18 12:05:50.454576] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.748 [2024-11-18 12:05:50.455040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.748 [2024-11-18 12:05:50.455080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.748 [2024-11-18 12:05:50.455104] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.748 [2024-11-18 12:05:50.455388] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.748 [2024-11-18 12:05:50.455637] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.748 [2024-11-18 12:05:50.455663] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.748 [2024-11-18 12:05:50.455682] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.748 [2024-11-18 12:05:50.455700] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.748 [2024-11-18 12:05:50.469256] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.748 [2024-11-18 12:05:50.469714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.748 [2024-11-18 12:05:50.469753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.748 [2024-11-18 12:05:50.469778] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.748 [2024-11-18 12:05:50.470072] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.748 [2024-11-18 12:05:50.470383] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.748 [2024-11-18 12:05:50.470415] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.748 [2024-11-18 12:05:50.470438] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.748 [2024-11-18 12:05:50.470472] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.748 [2024-11-18 12:05:50.483720] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.748 [2024-11-18 12:05:50.484200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.748 [2024-11-18 12:05:50.484265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.748 [2024-11-18 12:05:50.484295] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.748 [2024-11-18 12:05:50.484597] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.748 [2024-11-18 12:05:50.484874] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.748 [2024-11-18 12:05:50.484905] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.748 [2024-11-18 12:05:50.484928] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.748 [2024-11-18 12:05:50.484950] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.748 [2024-11-18 12:05:50.498191] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.748 [2024-11-18 12:05:50.498700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.748 [2024-11-18 12:05:50.498739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.748 [2024-11-18 12:05:50.498764] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.748 [2024-11-18 12:05:50.499055] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.748 [2024-11-18 12:05:50.499349] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.748 [2024-11-18 12:05:50.499380] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.748 [2024-11-18 12:05:50.499404] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.748 [2024-11-18 12:05:50.499426] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.748 [2024-11-18 12:05:50.512773] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.748 [2024-11-18 12:05:50.513245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.748 [2024-11-18 12:05:50.513286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.748 [2024-11-18 12:05:50.513313] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.748 [2024-11-18 12:05:50.513617] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.748 [2024-11-18 12:05:50.513903] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.748 [2024-11-18 12:05:50.513935] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.748 [2024-11-18 12:05:50.513957] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.748 [2024-11-18 12:05:50.513979] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.748 [2024-11-18 12:05:50.527268] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.748 [2024-11-18 12:05:50.527749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.748 [2024-11-18 12:05:50.527787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.748 [2024-11-18 12:05:50.527810] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.748 [2024-11-18 12:05:50.528112] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.748 [2024-11-18 12:05:50.528402] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.748 [2024-11-18 12:05:50.528433] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.748 [2024-11-18 12:05:50.528455] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.748 [2024-11-18 12:05:50.528476] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.748 [2024-11-18 12:05:50.532045] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:24.748 [2024-11-18 12:05:50.541960] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.748 [2024-11-18 12:05:50.542502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.749 [2024-11-18 12:05:50.542571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.749 [2024-11-18 12:05:50.542596] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.749 [2024-11-18 12:05:50.542872] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.749 [2024-11-18 12:05:50.543179] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.749 [2024-11-18 12:05:50.543211] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.749 [2024-11-18 12:05:50.543234] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.749 [2024-11-18 12:05:50.543255] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.749 [2024-11-18 12:05:50.556877] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.749 [2024-11-18 12:05:50.557568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.749 [2024-11-18 12:05:50.557617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.749 [2024-11-18 12:05:50.557646] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.749 [2024-11-18 12:05:50.557989] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.749 [2024-11-18 12:05:50.558291] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.749 [2024-11-18 12:05:50.558324] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.749 [2024-11-18 12:05:50.558351] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.749 [2024-11-18 12:05:50.558377] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.749 [2024-11-18 12:05:50.571605] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.749 [2024-11-18 12:05:50.572085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.749 [2024-11-18 12:05:50.572127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.749 [2024-11-18 12:05:50.572154] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.749 [2024-11-18 12:05:50.572447] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.749 [2024-11-18 12:05:50.572751] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.749 [2024-11-18 12:05:50.572795] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.749 [2024-11-18 12:05:50.572824] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.749 [2024-11-18 12:05:50.572860] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.749 [2024-11-18 12:05:50.586131] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.749 [2024-11-18 12:05:50.586608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.749 [2024-11-18 12:05:50.586645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.749 [2024-11-18 12:05:50.586669] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.749 [2024-11-18 12:05:50.586988] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.749 [2024-11-18 12:05:50.587281] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.749 [2024-11-18 12:05:50.587312] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.749 [2024-11-18 12:05:50.587335] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.749 [2024-11-18 12:05:50.587357] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.749 [2024-11-18 12:05:50.600559] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.749 [2024-11-18 12:05:50.601022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.749 [2024-11-18 12:05:50.601064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.749 [2024-11-18 12:05:50.601090] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.749 [2024-11-18 12:05:50.601379] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.749 [2024-11-18 12:05:50.601673] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.749 [2024-11-18 12:05:50.601701] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.749 [2024-11-18 12:05:50.601721] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.749 [2024-11-18 12:05:50.601740] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.749 [2024-11-18 12:05:50.615023] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.749 [2024-11-18 12:05:50.615505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.749 [2024-11-18 12:05:50.615560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.749 [2024-11-18 12:05:50.615584] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.749 [2024-11-18 12:05:50.615892] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.749 [2024-11-18 12:05:50.616184] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.749 [2024-11-18 12:05:50.616215] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.749 [2024-11-18 12:05:50.616238] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.749 [2024-11-18 12:05:50.616260] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.749 [2024-11-18 12:05:50.629548] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.749 [2024-11-18 12:05:50.630014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.749 [2024-11-18 12:05:50.630068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.749 [2024-11-18 12:05:50.630092] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.749 [2024-11-18 12:05:50.630367] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.749 [2024-11-18 12:05:50.630656] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.749 [2024-11-18 12:05:50.630692] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.749 [2024-11-18 12:05:50.630714] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.749 [2024-11-18 12:05:50.630734] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:25.009 [2024-11-18 12:05:50.644167] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.009 [2024-11-18 12:05:50.644654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.009 [2024-11-18 12:05:50.644692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.009 [2024-11-18 12:05:50.644716] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.009 [2024-11-18 12:05:50.645017] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.009 [2024-11-18 12:05:50.645308] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.009 [2024-11-18 12:05:50.645339] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.009 [2024-11-18 12:05:50.645362] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.009 [2024-11-18 12:05:50.645384] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:25.009 [2024-11-18 12:05:50.658697] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.009 [2024-11-18 12:05:50.659194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.009 [2024-11-18 12:05:50.659236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.009 [2024-11-18 12:05:50.659263] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.009 [2024-11-18 12:05:50.659576] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.009 [2024-11-18 12:05:50.659845] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.009 [2024-11-18 12:05:50.659876] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.009 [2024-11-18 12:05:50.659898] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.009 [2024-11-18 12:05:50.659921] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:25.009 [2024-11-18 12:05:50.673357] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.009 [2024-11-18 12:05:50.673752] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:25.009 [2024-11-18 12:05:50.673813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.009 [2024-11-18 12:05:50.673834] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:37:25.009 [2024-11-18 12:05:50.673863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.009 [2024-11-18 12:05:50.673876] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:25.009 [2024-11-18 12:05:50.673892] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.009 [2024-11-18 12:05:50.673901] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:25.009 [2024-11-18 12:05:50.673929] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:25.009 [2024-11-18 12:05:50.674183] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.009 [2024-11-18 12:05:50.674476] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.009 [2024-11-18 12:05:50.674518] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.009 [2024-11-18 12:05:50.674554] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.009 [2024-11-18 12:05:50.674573] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:25.009 [2024-11-18 12:05:50.676481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:25.009 [2024-11-18 12:05:50.676583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:25.009 [2024-11-18 12:05:50.676585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:25.009 [2024-11-18 12:05:50.687666] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.009 [2024-11-18 12:05:50.688329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.009 [2024-11-18 12:05:50.688379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.009 [2024-11-18 12:05:50.688408] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.009 [2024-11-18 12:05:50.688696] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.009 [2024-11-18 12:05:50.688980] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.009 [2024-11-18 12:05:50.689010] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.010 [2024-11-18 12:05:50.689035] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.010 [2024-11-18 12:05:50.689058] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:25.010 [2024-11-18 12:05:50.701868] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:25.010 [2024-11-18 12:05:50.702395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:25.010 [2024-11-18 12:05:50.702439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:25.010 [2024-11-18 12:05:50.702466] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:25.010 [2024-11-18 12:05:50.702755] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:25.010 [2024-11-18 12:05:50.703031] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:25.010 [2024-11-18 12:05:50.703060] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:25.010 [2024-11-18 12:05:50.703082] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:25.010 [2024-11-18 12:05:50.703104] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:25.010 [2024-11-18 12:05:50.716165] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:25.010 [2024-11-18 12:05:50.716583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:25.010 [2024-11-18 12:05:50.716621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:25.010 [2024-11-18 12:05:50.716645] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:25.010 [2024-11-18 12:05:50.716929] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:25.010 [2024-11-18 12:05:50.717192] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:25.010 [2024-11-18 12:05:50.717218] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:25.010 [2024-11-18 12:05:50.717238] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:25.010 [2024-11-18 12:05:50.717257] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:25.010 [2024-11-18 12:05:50.730324] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:25.010 [2024-11-18 12:05:50.730769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:25.010 [2024-11-18 12:05:50.730806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:25.010 [2024-11-18 12:05:50.730830] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:25.010 [2024-11-18 12:05:50.731107] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:25.010 [2024-11-18 12:05:50.731365] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:25.010 [2024-11-18 12:05:50.731392] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:25.010 [2024-11-18 12:05:50.731413] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:25.010 [2024-11-18 12:05:50.731432] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:25.010 [2024-11-18 12:05:50.744602] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:25.010 [2024-11-18 12:05:50.745048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:25.010 [2024-11-18 12:05:50.745085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:25.010 [2024-11-18 12:05:50.745108] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:25.010 [2024-11-18 12:05:50.745390] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:25.010 [2024-11-18 12:05:50.745681] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:25.010 [2024-11-18 12:05:50.745711] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:25.010 [2024-11-18 12:05:50.745732] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:25.010 [2024-11-18 12:05:50.745752] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:25.010 [2024-11-18 12:05:50.758838] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:25.010 [2024-11-18 12:05:50.759500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:25.010 [2024-11-18 12:05:50.759551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:25.010 [2024-11-18 12:05:50.759580] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:25.010 [2024-11-18 12:05:50.759873] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:25.010 [2024-11-18 12:05:50.760146] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:25.010 [2024-11-18 12:05:50.760175] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:25.010 [2024-11-18 12:05:50.760200] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:25.010 [2024-11-18 12:05:50.760224] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:25.010 [2024-11-18 12:05:50.773042] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:25.010 [2024-11-18 12:05:50.773699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:25.010 [2024-11-18 12:05:50.773749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:25.010 [2024-11-18 12:05:50.773779] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:25.010 [2024-11-18 12:05:50.774067] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:25.010 [2024-11-18 12:05:50.774329] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:25.010 [2024-11-18 12:05:50.774358] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:25.010 [2024-11-18 12:05:50.774382] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:25.010 [2024-11-18 12:05:50.774405] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:25.010 [2024-11-18 12:05:50.787278] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:25.010 [2024-11-18 12:05:50.787856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:25.010 [2024-11-18 12:05:50.787903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:25.010 [2024-11-18 12:05:50.787931] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:25.010 [2024-11-18 12:05:50.788218] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:25.010 [2024-11-18 12:05:50.788503] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:25.010 [2024-11-18 12:05:50.788534] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:25.010 [2024-11-18 12:05:50.788558] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:25.010 [2024-11-18 12:05:50.788581] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:25.010 [2024-11-18 12:05:50.801673] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:25.010 [2024-11-18 12:05:50.802113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:25.010 [2024-11-18 12:05:50.802151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:25.010 [2024-11-18 12:05:50.802175] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:25.010 [2024-11-18 12:05:50.802455] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:25.010 [2024-11-18 12:05:50.802751] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:25.010 [2024-11-18 12:05:50.802780] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:25.010 [2024-11-18 12:05:50.802807] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:25.010 [2024-11-18 12:05:50.802843] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:25.010 [2024-11-18 12:05:50.815751] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:25.010 [2024-11-18 12:05:50.816157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:25.010 [2024-11-18 12:05:50.816194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:25.010 [2024-11-18 12:05:50.816218] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:25.010 [2024-11-18 12:05:50.816524] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:25.010 [2024-11-18 12:05:50.816790] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:25.010 [2024-11-18 12:05:50.816834] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:25.010 [2024-11-18 12:05:50.816854] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:25.010 [2024-11-18 12:05:50.816874] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:25.010 [2024-11-18 12:05:50.829763] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:25.010 [2024-11-18 12:05:50.830199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:25.010 [2024-11-18 12:05:50.830236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:25.010 [2024-11-18 12:05:50.830260] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:25.010 [2024-11-18 12:05:50.830750] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:25.011 [2024-11-18 12:05:50.831029] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:25.011 [2024-11-18 12:05:50.831057] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:25.011 [2024-11-18 12:05:50.831077] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:25.011 [2024-11-18 12:05:50.831096] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:25.011 [2024-11-18 12:05:50.843740] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:25.011 [2024-11-18 12:05:50.844193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:25.011 [2024-11-18 12:05:50.844229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:25.011 [2024-11-18 12:05:50.844253] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:25.011 [2024-11-18 12:05:50.844552] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:25.011 [2024-11-18 12:05:50.844813] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:25.011 [2024-11-18 12:05:50.844856] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:25.011 [2024-11-18 12:05:50.844876] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:25.011 [2024-11-18 12:05:50.844896] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:25.011 [2024-11-18 12:05:50.857743] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:25.011 [2024-11-18 12:05:50.858197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:25.011 [2024-11-18 12:05:50.858235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:25.011 [2024-11-18 12:05:50.858259] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:25.011 [2024-11-18 12:05:50.858560] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:25.011 [2024-11-18 12:05:50.858822] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:25.011 [2024-11-18 12:05:50.858865] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:25.011 [2024-11-18 12:05:50.858885] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:25.011 [2024-11-18 12:05:50.858904] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:25.011 [2024-11-18 12:05:50.871905] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:25.011 [2024-11-18 12:05:50.872325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:25.011 [2024-11-18 12:05:50.872361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:25.011 [2024-11-18 12:05:50.872384] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:25.011 [2024-11-18 12:05:50.872656] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:25.011 [2024-11-18 12:05:50.872929] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:25.011 [2024-11-18 12:05:50.872956] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:25.011 [2024-11-18 12:05:50.872976] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:25.011 [2024-11-18 12:05:50.872995] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:25.011 [2024-11-18 12:05:50.885963] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:25.011 [2024-11-18 12:05:50.886405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:25.011 [2024-11-18 12:05:50.886442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:25.011 [2024-11-18 12:05:50.886465] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:25.011 [2024-11-18 12:05:50.886733] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:25.011 [2024-11-18 12:05:50.887003] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:25.011 [2024-11-18 12:05:50.887031] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:25.011 [2024-11-18 12:05:50.887051] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:25.011 [2024-11-18 12:05:50.887070] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:25.270 [2024-11-18 12:05:50.900134] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:25.271 [2024-11-18 12:05:50.900732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:25.271 [2024-11-18 12:05:50.900790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:25.271 [2024-11-18 12:05:50.900819] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:25.271 [2024-11-18 12:05:50.901108] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:25.271 [2024-11-18 12:05:50.901370] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:25.271 [2024-11-18 12:05:50.901398] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:25.271 [2024-11-18 12:05:50.901422] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:25.271 [2024-11-18 12:05:50.901444] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:25.271 [2024-11-18 12:05:50.914372] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:25.271 [2024-11-18 12:05:50.915089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:25.271 [2024-11-18 12:05:50.915140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:25.271 [2024-11-18 12:05:50.915169] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:25.271 [2024-11-18 12:05:50.915458] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:25.271 [2024-11-18 12:05:50.915752] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:25.271 [2024-11-18 12:05:50.915784] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:25.271 [2024-11-18 12:05:50.915824] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:25.271 [2024-11-18 12:05:50.915849] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:25.271 [2024-11-18 12:05:50.928588] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:25.271 [2024-11-18 12:05:50.929068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:25.271 [2024-11-18 12:05:50.929109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:25.271 [2024-11-18 12:05:50.929135] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:25.271 [2024-11-18 12:05:50.929400] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:25.271 [2024-11-18 12:05:50.929680] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:25.271 [2024-11-18 12:05:50.929710] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:25.271 [2024-11-18 12:05:50.929731] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:25.271 [2024-11-18 12:05:50.929752] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:25.271 [2024-11-18 12:05:50.942958] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:25.271 [2024-11-18 12:05:50.943380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:25.271 [2024-11-18 12:05:50.943416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:25.271 [2024-11-18 12:05:50.943440] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:25.271 [2024-11-18 12:05:50.943720] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:25.271 [2024-11-18 12:05:50.944001] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:25.271 [2024-11-18 12:05:50.944029] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:25.271 [2024-11-18 12:05:50.944050] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:25.271 [2024-11-18 12:05:50.944069] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:25.271 [2024-11-18 12:05:50.957009] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:25.271 [2024-11-18 12:05:50.957457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:25.271 [2024-11-18 12:05:50.957503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:25.271 [2024-11-18 12:05:50.957530] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:25.271 [2024-11-18 12:05:50.957811] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:25.271 [2024-11-18 12:05:50.958069] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:25.271 [2024-11-18 12:05:50.958096] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:25.271 [2024-11-18 12:05:50.958116] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:25.271 [2024-11-18 12:05:50.958136] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:25.271 [2024-11-18 12:05:50.971246] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:25.271 [2024-11-18 12:05:50.971676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:25.271 [2024-11-18 12:05:50.971713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:25.271 [2024-11-18 12:05:50.971737] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:25.271 [2024-11-18 12:05:50.972019] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:25.271 [2024-11-18 12:05:50.972276] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:25.271 [2024-11-18 12:05:50.972302] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:25.271 [2024-11-18 12:05:50.972321] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:25.271 [2024-11-18 12:05:50.972340] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:25.271 [2024-11-18 12:05:50.985205] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:25.271 [2024-11-18 12:05:50.985657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:25.271 [2024-11-18 12:05:50.985694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:25.271 [2024-11-18 12:05:50.985718] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:25.271 [2024-11-18 12:05:50.985989] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:25.271 [2024-11-18 12:05:50.986242] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:25.271 [2024-11-18 12:05:50.986275] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:25.271 [2024-11-18 12:05:50.986295] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:25.271 [2024-11-18 12:05:50.986315] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:25.271 [2024-11-18 12:05:50.999220] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:25.271 [2024-11-18 12:05:50.999679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:25.271 [2024-11-18 12:05:50.999718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:25.271 [2024-11-18 12:05:50.999741] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:25.271 [2024-11-18 12:05:51.000017] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:25.271 [2024-11-18 12:05:51.000269] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:25.271 [2024-11-18 12:05:51.000297] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:25.271 [2024-11-18 12:05:51.000317] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:25.271 [2024-11-18 12:05:51.000336] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:25.271 [2024-11-18 12:05:51.013447] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:25.271 [2024-11-18 12:05:51.013973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:25.271 [2024-11-18 12:05:51.014014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:25.271 [2024-11-18 12:05:51.014040] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:25.271 [2024-11-18 12:05:51.014319] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:25.271 [2024-11-18 12:05:51.014607] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:25.271 [2024-11-18 12:05:51.014637] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:25.271 [2024-11-18 12:05:51.014660] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:25.271 [2024-11-18 12:05:51.014681] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:25.271 [2024-11-18 12:05:51.027681] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:25.271 [2024-11-18 12:05:51.028109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:25.271 [2024-11-18 12:05:51.028146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:25.271 [2024-11-18 12:05:51.028171] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:25.271 [2024-11-18 12:05:51.028447] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:25.271 [2024-11-18 12:05:51.028746] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:25.272 [2024-11-18 12:05:51.028787] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:25.272 [2024-11-18 12:05:51.028808] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:25.272 [2024-11-18 12:05:51.028834] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:25.272 [2024-11-18 12:05:51.041916] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:25.272 [2024-11-18 12:05:51.042319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:25.272 [2024-11-18 12:05:51.042357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:25.272 [2024-11-18 12:05:51.042381] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:25.272 [2024-11-18 12:05:51.042654] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:25.272 [2024-11-18 12:05:51.042931] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:25.272 [2024-11-18 12:05:51.042958] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:25.272 [2024-11-18 12:05:51.042978] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:25.272 [2024-11-18 12:05:51.042997] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:25.272 [2024-11-18 12:05:51.056046] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:25.272 [2024-11-18 12:05:51.056464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:25.272 [2024-11-18 12:05:51.056509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:25.272 [2024-11-18 12:05:51.056535] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:25.272 [2024-11-18 12:05:51.056794] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:25.272 [2024-11-18 12:05:51.057063] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:25.272 [2024-11-18 12:05:51.057090] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:25.272 [2024-11-18 12:05:51.057110] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:25.272 [2024-11-18 12:05:51.057129] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:25.272 [2024-11-18 12:05:51.070077] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:25.272 [2024-11-18 12:05:51.070512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:25.272 [2024-11-18 12:05:51.070550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:25.272 [2024-11-18 12:05:51.070574] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:25.272 [2024-11-18 12:05:51.070848] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:25.272 [2024-11-18 12:05:51.071100] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:25.272 [2024-11-18 12:05:51.071128] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:25.272 [2024-11-18 12:05:51.071147] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:25.272 [2024-11-18 12:05:51.071166] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:25.272 [2024-11-18 12:05:51.084216] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.272 [2024-11-18 12:05:51.084624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.272 [2024-11-18 12:05:51.084661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.272 [2024-11-18 12:05:51.084684] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.272 [2024-11-18 12:05:51.084956] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.272 [2024-11-18 12:05:51.085207] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.272 [2024-11-18 12:05:51.085233] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.272 [2024-11-18 12:05:51.085253] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.272 [2024-11-18 12:05:51.085271] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:25.272 2252.00 IOPS, 8.80 MiB/s [2024-11-18T11:05:51.157Z] [2024-11-18 12:05:51.098315] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.272 [2024-11-18 12:05:51.098756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.272 [2024-11-18 12:05:51.098793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.272 [2024-11-18 12:05:51.098818] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.272 [2024-11-18 12:05:51.099089] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.272 [2024-11-18 12:05:51.099341] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.272 [2024-11-18 12:05:51.099367] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.272 [2024-11-18 12:05:51.099387] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.272 [2024-11-18 12:05:51.099405] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:25.272 [2024-11-18 12:05:51.112282] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.272 [2024-11-18 12:05:51.112738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.272 [2024-11-18 12:05:51.112775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.272 [2024-11-18 12:05:51.112799] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.272 [2024-11-18 12:05:51.113071] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.272 [2024-11-18 12:05:51.113321] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.272 [2024-11-18 12:05:51.113349] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.272 [2024-11-18 12:05:51.113369] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.272 [2024-11-18 12:05:51.113388] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:25.272 [2024-11-18 12:05:51.126181] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.272 [2024-11-18 12:05:51.126590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.272 [2024-11-18 12:05:51.126627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.272 [2024-11-18 12:05:51.126668] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.272 [2024-11-18 12:05:51.126940] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.272 [2024-11-18 12:05:51.127192] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.272 [2024-11-18 12:05:51.127219] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.272 [2024-11-18 12:05:51.127238] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.272 [2024-11-18 12:05:51.127256] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:25.272 [2024-11-18 12:05:51.140167] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.272 [2024-11-18 12:05:51.140574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.272 [2024-11-18 12:05:51.140611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.272 [2024-11-18 12:05:51.140635] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.272 [2024-11-18 12:05:51.140906] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.272 [2024-11-18 12:05:51.141157] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.272 [2024-11-18 12:05:51.141184] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.272 [2024-11-18 12:05:51.141203] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.272 [2024-11-18 12:05:51.141222] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:25.272 [2024-11-18 12:05:51.154254] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.272 [2024-11-18 12:05:51.154704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.272 [2024-11-18 12:05:51.154744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.272 [2024-11-18 12:05:51.154770] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.272 [2024-11-18 12:05:51.155043] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.272 [2024-11-18 12:05:51.155319] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.272 [2024-11-18 12:05:51.155346] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.272 [2024-11-18 12:05:51.155367] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.272 [2024-11-18 12:05:51.155388] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:25.532 [2024-11-18 12:05:51.168238] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.532 [2024-11-18 12:05:51.168668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.532 [2024-11-18 12:05:51.168706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.532 [2024-11-18 12:05:51.168730] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.532 [2024-11-18 12:05:51.169012] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.532 [2024-11-18 12:05:51.169265] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.532 [2024-11-18 12:05:51.169292] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.532 [2024-11-18 12:05:51.169311] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.532 [2024-11-18 12:05:51.169330] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:25.532 [2024-11-18 12:05:51.182296] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.532 [2024-11-18 12:05:51.182723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.532 [2024-11-18 12:05:51.182761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.532 [2024-11-18 12:05:51.182785] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.532 [2024-11-18 12:05:51.183055] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.532 [2024-11-18 12:05:51.183328] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.532 [2024-11-18 12:05:51.183356] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.532 [2024-11-18 12:05:51.183377] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.532 [2024-11-18 12:05:51.183396] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:25.532 [2024-11-18 12:05:51.196285] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.532 [2024-11-18 12:05:51.196722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.532 [2024-11-18 12:05:51.196761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.532 [2024-11-18 12:05:51.196785] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.532 [2024-11-18 12:05:51.197059] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.532 [2024-11-18 12:05:51.197313] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.532 [2024-11-18 12:05:51.197340] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.532 [2024-11-18 12:05:51.197359] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.532 [2024-11-18 12:05:51.197379] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:25.532 [2024-11-18 12:05:51.210321] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.532 [2024-11-18 12:05:51.210751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.532 [2024-11-18 12:05:51.210789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.532 [2024-11-18 12:05:51.210813] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.532 [2024-11-18 12:05:51.211085] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.532 [2024-11-18 12:05:51.211336] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.532 [2024-11-18 12:05:51.211369] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.532 [2024-11-18 12:05:51.211390] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.532 [2024-11-18 12:05:51.211409] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:25.532 [2024-11-18 12:05:51.224351] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.532 [2024-11-18 12:05:51.224796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.532 [2024-11-18 12:05:51.224834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.532 [2024-11-18 12:05:51.224858] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.532 [2024-11-18 12:05:51.225130] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.532 [2024-11-18 12:05:51.225381] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.532 [2024-11-18 12:05:51.225409] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.533 [2024-11-18 12:05:51.225428] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.533 [2024-11-18 12:05:51.225461] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:25.533 [2024-11-18 12:05:51.238527] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.533 [2024-11-18 12:05:51.238966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.533 [2024-11-18 12:05:51.239003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.533 [2024-11-18 12:05:51.239026] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.533 [2024-11-18 12:05:51.239297] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.533 [2024-11-18 12:05:51.239580] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.533 [2024-11-18 12:05:51.239609] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.533 [2024-11-18 12:05:51.239629] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.533 [2024-11-18 12:05:51.239648] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:25.533 [2024-11-18 12:05:51.252528] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.533 [2024-11-18 12:05:51.252925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.533 [2024-11-18 12:05:51.252962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.533 [2024-11-18 12:05:51.252986] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.533 [2024-11-18 12:05:51.253257] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.533 [2024-11-18 12:05:51.253535] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.533 [2024-11-18 12:05:51.253564] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.533 [2024-11-18 12:05:51.253585] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.533 [2024-11-18 12:05:51.253609] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:25.533 [2024-11-18 12:05:51.266641] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.533 [2024-11-18 12:05:51.267131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.533 [2024-11-18 12:05:51.267168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.533 [2024-11-18 12:05:51.267192] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.533 [2024-11-18 12:05:51.267463] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.533 [2024-11-18 12:05:51.267748] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.533 [2024-11-18 12:05:51.267778] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.533 [2024-11-18 12:05:51.267799] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.533 [2024-11-18 12:05:51.267833] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:25.533 [2024-11-18 12:05:51.280943] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.533 [2024-11-18 12:05:51.281362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.533 [2024-11-18 12:05:51.281399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.533 [2024-11-18 12:05:51.281422] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.533 [2024-11-18 12:05:51.281691] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.533 [2024-11-18 12:05:51.281960] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.533 [2024-11-18 12:05:51.281988] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.533 [2024-11-18 12:05:51.282008] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.533 [2024-11-18 12:05:51.282027] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:25.533 [2024-11-18 12:05:51.295068] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.533 [2024-11-18 12:05:51.295520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.533 [2024-11-18 12:05:51.295559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.533 [2024-11-18 12:05:51.295584] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.533 [2024-11-18 12:05:51.295856] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.533 [2024-11-18 12:05:51.296107] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.533 [2024-11-18 12:05:51.296134] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.533 [2024-11-18 12:05:51.296154] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.533 [2024-11-18 12:05:51.296173] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:25.533 [2024-11-18 12:05:51.309166] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.533 [2024-11-18 12:05:51.309594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.533 [2024-11-18 12:05:51.309633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.533 [2024-11-18 12:05:51.309656] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.533 [2024-11-18 12:05:51.309927] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.533 [2024-11-18 12:05:51.310177] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.533 [2024-11-18 12:05:51.310204] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.533 [2024-11-18 12:05:51.310224] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.533 [2024-11-18 12:05:51.310243] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:25.533 [2024-11-18 12:05:51.323310] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.533 [2024-11-18 12:05:51.323723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.533 [2024-11-18 12:05:51.323760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.533 [2024-11-18 12:05:51.323784] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.533 [2024-11-18 12:05:51.324063] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.533 [2024-11-18 12:05:51.324314] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.533 [2024-11-18 12:05:51.324340] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.533 [2024-11-18 12:05:51.324360] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.533 [2024-11-18 12:05:51.324379] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:25.533 [2024-11-18 12:05:51.337391] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.533 [2024-11-18 12:05:51.337801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.533 [2024-11-18 12:05:51.337851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.533 [2024-11-18 12:05:51.337884] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.533 [2024-11-18 12:05:51.338142] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.533 [2024-11-18 12:05:51.338404] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.533 [2024-11-18 12:05:51.338441] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.533 [2024-11-18 12:05:51.338461] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.533 [2024-11-18 12:05:51.338502] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:25.533 [2024-11-18 12:05:51.351859] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.533 [2024-11-18 12:05:51.352288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.533 [2024-11-18 12:05:51.352336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.533 [2024-11-18 12:05:51.352366] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.533 [2024-11-18 12:05:51.352647] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.533 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:25.533 [2024-11-18 12:05:51.352924] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.533 [2024-11-18 12:05:51.352953] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.533 [2024-11-18 12:05:51.352974] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.533 [2024-11-18 12:05:51.352994] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:25.533 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:37:25.533 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:25.533 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:25.533 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:25.533 [2024-11-18 12:05:51.366081] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.534 [2024-11-18 12:05:51.366519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.534 [2024-11-18 12:05:51.366557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.534 [2024-11-18 12:05:51.366582] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.534 [2024-11-18 12:05:51.366861] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.534 [2024-11-18 12:05:51.367130] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.534 [2024-11-18 12:05:51.367157] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.534 [2024-11-18 12:05:51.367176] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.534 [2024-11-18 12:05:51.367196] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:25.534 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:25.534 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:25.534 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:25.534 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:25.534 [2024-11-18 12:05:51.375498] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:25.534 [2024-11-18 12:05:51.380282] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.534 [2024-11-18 12:05:51.380726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.534 [2024-11-18 12:05:51.380764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.534 [2024-11-18 12:05:51.380799] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.534 [2024-11-18 12:05:51.381074] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.534 [2024-11-18 12:05:51.381329] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.534 [2024-11-18 12:05:51.381357] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.534 [2024-11-18 12:05:51.381382] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.534 [2024-11-18 12:05:51.381402] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:25.534 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:25.534 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:25.534 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:25.534 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:25.534 [2024-11-18 12:05:51.394527] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.534 [2024-11-18 12:05:51.395018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.534 [2024-11-18 12:05:51.395058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.534 [2024-11-18 12:05:51.395083] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.534 [2024-11-18 12:05:51.395372] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.534 [2024-11-18 12:05:51.395658] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.534 [2024-11-18 12:05:51.395688] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.534 [2024-11-18 12:05:51.395709] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.534 [2024-11-18 12:05:51.395729] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:25.534 [2024-11-18 12:05:51.408792] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.534 [2024-11-18 12:05:51.409398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.534 [2024-11-18 12:05:51.409447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.534 [2024-11-18 12:05:51.409486] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.534 [2024-11-18 12:05:51.409783] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.534 [2024-11-18 12:05:51.410062] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.534 [2024-11-18 12:05:51.410091] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.534 [2024-11-18 12:05:51.410115] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.534 [2024-11-18 12:05:51.410137] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:25.793 [2024-11-18 12:05:51.423055] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.793 [2024-11-18 12:05:51.423533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.793 [2024-11-18 12:05:51.423573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.793 [2024-11-18 12:05:51.423599] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.793 [2024-11-18 12:05:51.423879] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.793 [2024-11-18 12:05:51.424136] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.793 [2024-11-18 12:05:51.424169] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.793 [2024-11-18 12:05:51.424190] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.793 [2024-11-18 12:05:51.424211] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:25.793 [2024-11-18 12:05:51.437257] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.793 [2024-11-18 12:05:51.437707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.793 [2024-11-18 12:05:51.437745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.793 [2024-11-18 12:05:51.437769] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.793 [2024-11-18 12:05:51.438051] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.793 [2024-11-18 12:05:51.438337] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.793 [2024-11-18 12:05:51.438366] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.793 [2024-11-18 12:05:51.438387] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.793 [2024-11-18 12:05:51.438407] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:25.793 [2024-11-18 12:05:51.451387] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.793 [2024-11-18 12:05:51.451824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.793 [2024-11-18 12:05:51.451862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.793 [2024-11-18 12:05:51.451886] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.793 [2024-11-18 12:05:51.452163] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.793 [2024-11-18 12:05:51.452417] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.793 [2024-11-18 12:05:51.452445] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.793 [2024-11-18 12:05:51.452464] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.793 [2024-11-18 12:05:51.452511] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:25.793 [2024-11-18 12:05:51.465595] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.793 [2024-11-18 12:05:51.466027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.793 [2024-11-18 12:05:51.466064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.793 [2024-11-18 12:05:51.466088] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.793 [2024-11-18 12:05:51.466361] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.793 [2024-11-18 12:05:51.466655] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.793 [2024-11-18 12:05:51.466684] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.793 [2024-11-18 12:05:51.466704] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.793 [2024-11-18 12:05:51.466729] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:25.793 Malloc0 00:37:25.793 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:25.793 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:25.793 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:25.793 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:25.793 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:25.793 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:25.793 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:25.793 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:25.793 [2024-11-18 12:05:51.479885] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.793 [2024-11-18 12:05:51.480335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.793 [2024-11-18 12:05:51.480372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.793 [2024-11-18 12:05:51.480396] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.793 [2024-11-18 12:05:51.480665] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.793 [2024-11-18 12:05:51.480938] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.793 [2024-11-18 12:05:51.480966] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: 
[nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.793 [2024-11-18 12:05:51.480985] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.793 [2024-11-18 12:05:51.481005] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:25.793 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:25.793 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:25.793 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:25.793 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:25.793 [2024-11-18 12:05:51.487551] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:25.793 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:25.793 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3137454 00:37:25.794 [2024-11-18 12:05:51.494194] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.794 [2024-11-18 12:05:51.609676] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
00:37:27.293 2345.71 IOPS, 9.16 MiB/s
[2024-11-18T11:05:54.112Z] 2816.25 IOPS, 11.00 MiB/s
[2024-11-18T11:05:55.487Z] 3199.00 IOPS, 12.50 MiB/s
[2024-11-18T11:05:56.421Z] 3503.90 IOPS, 13.69 MiB/s
[2024-11-18T11:05:57.355Z] 3750.45 IOPS, 14.65 MiB/s
[2024-11-18T11:05:58.291Z] 3962.67 IOPS, 15.48 MiB/s
[2024-11-18T11:05:59.225Z] 4134.92 IOPS, 16.15 MiB/s
[2024-11-18T11:06:00.158Z] 4281.93 IOPS, 16.73 MiB/s
[2024-11-18T11:06:00.158Z] 4401.53 IOPS, 17.19 MiB/s
00:37:34.273 Latency(us)
00:37:34.273 [2024-11-18T11:06:00.158Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:37:34.273 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:37:34.273 Verification LBA range: start 0x0 length 0x4000
00:37:34.273 Nvme1n1 : 15.02 4405.29 17.21 9311.04 0.00 9303.03 1104.40 42331.40
00:37:34.273 [2024-11-18T11:06:00.158Z] ===================================================================================================================
00:37:34.273 [2024-11-18T11:06:00.158Z] Total : 4405.29 17.21 9311.04 0.00 9303.03 1104.40 42331.40
00:37:35.207 12:06:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
00:37:35.207 12:06:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:37:35.207 12:06:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:35.207 12:06:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:37:35.207 12:06:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:35.207 12:06:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:37:35.207 12:06:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
00:37:35.207 12:06:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup
00:37:35.207 12:06:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@121 -- # sync 00:37:35.207 12:06:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:35.207 12:06:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:37:35.207 12:06:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:35.207 12:06:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:35.207 rmmod nvme_tcp 00:37:35.207 rmmod nvme_fabrics 00:37:35.207 rmmod nvme_keyring 00:37:35.207 12:06:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:35.207 12:06:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:37:35.207 12:06:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:37:35.207 12:06:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 3138234 ']' 00:37:35.207 12:06:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 3138234 00:37:35.207 12:06:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 3138234 ']' 00:37:35.207 12:06:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 3138234 00:37:35.207 12:06:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:37:35.207 12:06:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:35.207 12:06:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3138234 00:37:35.207 12:06:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:35.207 12:06:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:35.207 12:06:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3138234' 00:37:35.207 killing process with pid 3138234 00:37:35.207 
12:06:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 3138234 00:37:35.207 12:06:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 3138234 00:37:36.582 12:06:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:36.582 12:06:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:36.582 12:06:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:36.582 12:06:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:37:36.582 12:06:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:37:36.582 12:06:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:36.582 12:06:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:37:36.582 12:06:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:36.582 12:06:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:36.582 12:06:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:36.582 12:06:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:36.582 12:06:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:38.487 12:06:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:38.487 00:37:38.487 real 0m26.401s 00:37:38.487 user 1m12.597s 00:37:38.487 sys 0m4.554s 00:37:38.487 12:06:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:38.487 12:06:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:38.487 ************************************ 00:37:38.487 END TEST nvmf_bdevperf 00:37:38.487 
************************************ 00:37:38.487 12:06:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:37:38.487 12:06:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:37:38.487 12:06:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:38.487 12:06:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:37:38.746 ************************************ 00:37:38.746 START TEST nvmf_target_disconnect 00:37:38.746 ************************************ 00:37:38.746 12:06:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:37:38.746 * Looking for test storage... 00:37:38.746 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:37:38.746 12:06:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:38.746 12:06:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:37:38.746 12:06:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:38.746 12:06:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:38.746 12:06:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:38.746 12:06:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:38.746 12:06:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:38.746 12:06:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:37:38.746 12:06:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@336 -- # read -ra ver1 00:37:38.746 12:06:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:37:38.746 12:06:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:37:38.746 12:06:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:37:38.747 12:06:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:37:38.747 12:06:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:37:38.747 12:06:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:38.747 12:06:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:37:38.747 12:06:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:37:38.747 12:06:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:38.747 12:06:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:38.747 12:06:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:37:38.747 12:06:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:37:38.747 12:06:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:38.747 12:06:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:37:38.747 12:06:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:37:38.747 12:06:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:37:38.747 12:06:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:37:38.747 12:06:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:38.747 12:06:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:37:38.747 12:06:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:37:38.747 12:06:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:38.747 12:06:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:38.747 12:06:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:37:38.747 12:06:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:38.747 12:06:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:38.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:38.747 --rc genhtml_branch_coverage=1 00:37:38.747 --rc genhtml_function_coverage=1 00:37:38.747 --rc genhtml_legend=1 00:37:38.747 --rc geninfo_all_blocks=1 00:37:38.747 --rc geninfo_unexecuted_blocks=1 
00:37:38.747 00:37:38.747 ' 00:37:38.747 12:06:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:38.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:38.747 --rc genhtml_branch_coverage=1 00:37:38.747 --rc genhtml_function_coverage=1 00:37:38.747 --rc genhtml_legend=1 00:37:38.747 --rc geninfo_all_blocks=1 00:37:38.747 --rc geninfo_unexecuted_blocks=1 00:37:38.747 00:37:38.747 ' 00:37:38.747 12:06:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:38.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:38.747 --rc genhtml_branch_coverage=1 00:37:38.747 --rc genhtml_function_coverage=1 00:37:38.747 --rc genhtml_legend=1 00:37:38.747 --rc geninfo_all_blocks=1 00:37:38.747 --rc geninfo_unexecuted_blocks=1 00:37:38.747 00:37:38.747 ' 00:37:38.747 12:06:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:38.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:38.747 --rc genhtml_branch_coverage=1 00:37:38.747 --rc genhtml_function_coverage=1 00:37:38.747 --rc genhtml_legend=1 00:37:38.747 --rc geninfo_all_blocks=1 00:37:38.747 --rc geninfo_unexecuted_blocks=1 00:37:38.747 00:37:38.747 ' 00:37:38.747 12:06:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:38.747 12:06:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:37:38.747 12:06:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:38.747 12:06:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:38.747 12:06:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:38.747 12:06:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:37:38.747 12:06:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:38.747 12:06:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:38.747 12:06:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:38.747 12:06:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:38.747 12:06:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:38.747 12:06:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:38.747 12:06:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:38.747 12:06:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:38.747 12:06:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:38.747 12:06:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:38.747 12:06:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:38.747 12:06:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:38.747 12:06:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:38.747 12:06:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:37:38.747 12:06:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:38.747 12:06:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:38.747 12:06:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:38.747 12:06:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:38.747 12:06:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:38.747 12:06:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:38.747 12:06:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:37:38.747 12:06:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:38.747 12:06:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:37:38.747 12:06:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:38.747 12:06:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:38.747 12:06:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:38.747 12:06:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:38.747 12:06:04 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:38.747 12:06:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:38.747 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:38.747 12:06:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:38.747 12:06:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:38.747 12:06:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:38.747 12:06:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:37:38.747 12:06:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:37:38.747 12:06:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:37:38.747 12:06:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:37:38.747 12:06:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:38.747 12:06:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:38.747 12:06:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:38.747 12:06:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:38.747 12:06:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:38.747 12:06:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:38.748 12:06:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:37:38.748 12:06:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:38.748 12:06:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:38.748 12:06:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:38.748 12:06:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:37:38.748 12:06:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:37:41.280 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:41.280 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:37:41.280 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:41.280 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:41.280 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:41.280 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:41.280 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:41.280 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:37:41.280 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:41.280 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:37:41.280 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:37:41.280 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:37:41.280 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:37:41.280 
12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:37:41.280 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:37:41.280 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:41.280 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:41.280 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:41.280 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:41.280 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:41.280 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:41.280 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:41.280 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:41.280 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:41.280 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:41.280 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:41.280 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:41.280 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:41.280 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:41.280 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:41.280 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:41.280 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:41.280 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:41.280 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:41.280 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:37:41.280 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:41.280 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:41.280 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:41.280 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:41.280 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:41.280 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:41.280 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:41.280 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:37:41.280 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:37:41.280 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:41.280 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:41.280 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:37:41.280 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:41.280 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:41.280 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:41.280 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:41.280 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:41.280 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:41.280 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:41.280 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:41.280 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:41.280 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:41.280 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:41.280 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:41.280 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:37:41.280 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:41.280 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:41.280 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:41.280 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:37:41.280 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:41.280 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:41.280 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:41.280 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:41.280 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:41.280 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:41.280 Found net devices under 0000:0a:00.1: cvl_0_1 00:37:41.280 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:41.280 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:41.280 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:37:41.280 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:41.280 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:41.280 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:41.280 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:41.280 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:41.280 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:41.280 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:41.280 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:41.280 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:41.280 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:41.280 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:41.280 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:41.280 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:41.280 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:41.280 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:41.280 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:41.280 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:41.280 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:41.280 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:41.280 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:41.280 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:41.280 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:41.280 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:41.280 12:06:06 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:41.280 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:41.280 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:41.280 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:41.280 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:37:41.281 00:37:41.281 --- 10.0.0.2 ping statistics --- 00:37:41.281 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:41.281 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:37:41.281 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:41.281 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:41.281 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.071 ms 00:37:41.281 00:37:41.281 --- 10.0.0.1 ping statistics --- 00:37:41.281 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:41.281 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:37:41.281 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:41.281 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:37:41.281 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:41.281 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:41.281 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:41.281 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:41.281 12:06:06 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:41.281 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:41.281 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:41.281 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:37:41.281 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:41.281 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:41.281 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:37:41.281 ************************************ 00:37:41.281 START TEST nvmf_target_disconnect_tc1 00:37:41.281 ************************************ 00:37:41.281 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:37:41.281 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:41.281 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:37:41.281 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:41.281 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:37:41.281 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:41.281 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:37:41.281 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:41.281 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:37:41.281 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:41.281 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:37:41.281 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:37:41.281 12:06:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:41.281 [2024-11-18 12:06:07.015576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.281 [2024-11-18 12:06:07.015704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2280 
with addr=10.0.0.2, port=4420 00:37:41.281 [2024-11-18 12:06:07.015796] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:37:41.281 [2024-11-18 12:06:07.015828] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:37:41.281 [2024-11-18 12:06:07.015854] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:37:41.281 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:37:41.281 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:37:41.281 Initializing NVMe Controllers 00:37:41.281 12:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:37:41.281 12:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:41.281 12:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:41.281 12:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:41.281 00:37:41.281 real 0m0.234s 00:37:41.281 user 0m0.111s 00:37:41.281 sys 0m0.122s 00:37:41.281 12:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:41.281 12:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:37:41.281 ************************************ 00:37:41.281 END TEST nvmf_target_disconnect_tc1 00:37:41.281 ************************************ 00:37:41.281 12:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:37:41.281 12:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:41.281 12:06:07 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:41.281 12:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:37:41.281 ************************************ 00:37:41.281 START TEST nvmf_target_disconnect_tc2 00:37:41.281 ************************************ 00:37:41.281 12:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:37:41.281 12:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:37:41.281 12:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:37:41.281 12:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:41.281 12:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:41.281 12:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:41.281 12:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3141657 00:37:41.281 12:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:37:41.281 12:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3141657 00:37:41.281 12:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3141657 ']' 00:37:41.281 12:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:41.281 12:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:41.281 12:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:41.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:41.281 12:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:41.281 12:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:41.539 [2024-11-18 12:06:07.203516] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:37:41.539 [2024-11-18 12:06:07.203683] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:41.539 [2024-11-18 12:06:07.355692] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:41.797 [2024-11-18 12:06:07.487459] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:41.797 [2024-11-18 12:06:07.487562] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:41.797 [2024-11-18 12:06:07.487586] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:41.797 [2024-11-18 12:06:07.487608] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:41.797 [2024-11-18 12:06:07.487625] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:37:41.797 [2024-11-18 12:06:07.490200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:37:41.797 [2024-11-18 12:06:07.490304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:37:41.797 [2024-11-18 12:06:07.490369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:37:41.797 [2024-11-18 12:06:07.490373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:37:42.363 12:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:42.363 12:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:37:42.363 12:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:42.363 12:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:42.363 12:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:42.363 12:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:42.364 12:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:42.364 12:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:42.364 12:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:42.622 Malloc0 00:37:42.622 12:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:42.622 12:06:08 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:37:42.622 12:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:42.622 12:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:42.622 [2024-11-18 12:06:08.290822] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:42.622 12:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:42.622 12:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:42.622 12:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:42.622 12:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:42.622 12:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:42.622 12:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:42.622 12:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:42.622 12:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:42.622 12:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:42.622 12:06:08 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:42.622 12:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:42.622 12:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:42.622 [2024-11-18 12:06:08.321047] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:42.622 12:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:42.622 12:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:42.622 12:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:42.622 12:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:42.622 12:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:42.622 12:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3142266 00:37:42.622 12:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:42.622 12:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:37:44.527 12:06:10 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 3141657
00:37:44.527 12:06:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2
00:37:44.527 Read completed with error (sct=0, sc=8)
00:37:44.527 starting I/O failed
00:37:44.527 Write completed with error (sct=0, sc=8)
00:37:44.527 starting I/O failed
00:37:44.527 [2024-11-18 12:06:10.358509] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:37:44.528 [2024-11-18 12:06:10.359144] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:37:44.528 [2024-11-18 12:06:10.359814] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:37:44.528 [2024-11-18 12:06:10.360418] 
nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:44.528 [2024-11-18 12:06:10.360654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.528 [2024-11-18 12:06:10.360707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.528 qpair failed and we were unable to recover it.
00:37:44.528 [2024-11-18 12:06:10.361408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.528 [2024-11-18 12:06:10.361458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.528 qpair failed and we were unable to recover it.
00:37:44.528 [2024-11-18 12:06:10.362703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.528 [2024-11-18 12:06:10.362752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.528 qpair failed and we were unable to recover it.
00:37:44.528 [2024-11-18 12:06:10.362969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.528 [2024-11-18 12:06:10.363017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.528 qpair failed and we were unable to recover it.
00:37:44.530 [2024-11-18 12:06:10.375444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.530 [2024-11-18 12:06:10.375507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.530 qpair failed and we were unable to recover it. 00:37:44.530 [2024-11-18 12:06:10.375658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.530 [2024-11-18 12:06:10.375693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.530 qpair failed and we were unable to recover it. 00:37:44.530 [2024-11-18 12:06:10.375847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.530 [2024-11-18 12:06:10.375895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.530 qpair failed and we were unable to recover it. 00:37:44.530 [2024-11-18 12:06:10.376041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.530 [2024-11-18 12:06:10.376088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.530 qpair failed and we were unable to recover it. 00:37:44.530 [2024-11-18 12:06:10.376237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.530 [2024-11-18 12:06:10.376271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.530 qpair failed and we were unable to recover it. 
00:37:44.530 [2024-11-18 12:06:10.376377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.530 [2024-11-18 12:06:10.376411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.530 qpair failed and we were unable to recover it. 00:37:44.530 [2024-11-18 12:06:10.376564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.530 [2024-11-18 12:06:10.376599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.530 qpair failed and we were unable to recover it. 00:37:44.530 [2024-11-18 12:06:10.376697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.530 [2024-11-18 12:06:10.376731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.530 qpair failed and we were unable to recover it. 00:37:44.530 [2024-11-18 12:06:10.376854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.530 [2024-11-18 12:06:10.376888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.530 qpair failed and we were unable to recover it. 00:37:44.530 [2024-11-18 12:06:10.377030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.530 [2024-11-18 12:06:10.377097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.530 qpair failed and we were unable to recover it. 
00:37:44.530 [2024-11-18 12:06:10.377280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.530 [2024-11-18 12:06:10.377317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.530 qpair failed and we were unable to recover it. 00:37:44.530 [2024-11-18 12:06:10.377481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.530 [2024-11-18 12:06:10.377524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.530 qpair failed and we were unable to recover it. 00:37:44.530 [2024-11-18 12:06:10.377639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.530 [2024-11-18 12:06:10.377673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.530 qpair failed and we were unable to recover it. 00:37:44.530 [2024-11-18 12:06:10.377802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.530 [2024-11-18 12:06:10.377838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.530 qpair failed and we were unable to recover it. 00:37:44.530 [2024-11-18 12:06:10.377995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.530 [2024-11-18 12:06:10.378048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.530 qpair failed and we were unable to recover it. 
00:37:44.531 [2024-11-18 12:06:10.378197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.531 [2024-11-18 12:06:10.378232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.531 qpair failed and we were unable to recover it. 00:37:44.531 [2024-11-18 12:06:10.378343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.531 [2024-11-18 12:06:10.378377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.531 qpair failed and we were unable to recover it. 00:37:44.531 [2024-11-18 12:06:10.378512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.531 [2024-11-18 12:06:10.378546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.531 qpair failed and we were unable to recover it. 00:37:44.531 [2024-11-18 12:06:10.378703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.531 [2024-11-18 12:06:10.378759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.531 qpair failed and we were unable to recover it. 00:37:44.531 [2024-11-18 12:06:10.378919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.531 [2024-11-18 12:06:10.378969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.531 qpair failed and we were unable to recover it. 
00:37:44.531 [2024-11-18 12:06:10.379179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.531 [2024-11-18 12:06:10.379213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.531 qpair failed and we were unable to recover it. 00:37:44.531 [2024-11-18 12:06:10.379373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.531 [2024-11-18 12:06:10.379407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.531 qpair failed and we were unable to recover it. 00:37:44.531 [2024-11-18 12:06:10.379563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.531 [2024-11-18 12:06:10.379597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.531 qpair failed and we were unable to recover it. 00:37:44.531 [2024-11-18 12:06:10.379749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.531 [2024-11-18 12:06:10.379805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.531 qpair failed and we were unable to recover it. 00:37:44.531 [2024-11-18 12:06:10.380052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.531 [2024-11-18 12:06:10.380105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.531 qpair failed and we were unable to recover it. 
00:37:44.531 [2024-11-18 12:06:10.380246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.531 [2024-11-18 12:06:10.380279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.531 qpair failed and we were unable to recover it. 00:37:44.531 [2024-11-18 12:06:10.380408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.531 [2024-11-18 12:06:10.380441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.531 qpair failed and we were unable to recover it. 00:37:44.531 [2024-11-18 12:06:10.380580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.531 [2024-11-18 12:06:10.380614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.531 qpair failed and we were unable to recover it. 00:37:44.531 [2024-11-18 12:06:10.380718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.531 [2024-11-18 12:06:10.380752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.531 qpair failed and we were unable to recover it. 00:37:44.531 [2024-11-18 12:06:10.380869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.531 [2024-11-18 12:06:10.380902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.531 qpair failed and we were unable to recover it. 
00:37:44.531 [2024-11-18 12:06:10.381030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.531 [2024-11-18 12:06:10.381064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.531 qpair failed and we were unable to recover it. 00:37:44.531 [2024-11-18 12:06:10.381170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.531 [2024-11-18 12:06:10.381204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.531 qpair failed and we were unable to recover it. 00:37:44.531 [2024-11-18 12:06:10.381346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.531 [2024-11-18 12:06:10.381392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.531 qpair failed and we were unable to recover it. 00:37:44.531 [2024-11-18 12:06:10.381529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.531 [2024-11-18 12:06:10.381563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.531 qpair failed and we were unable to recover it. 00:37:44.531 [2024-11-18 12:06:10.381698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.531 [2024-11-18 12:06:10.381731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.531 qpair failed and we were unable to recover it. 
00:37:44.531 [2024-11-18 12:06:10.381834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.531 [2024-11-18 12:06:10.381869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.531 qpair failed and we were unable to recover it. 00:37:44.531 [2024-11-18 12:06:10.382000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.531 [2024-11-18 12:06:10.382035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.531 qpair failed and we were unable to recover it. 00:37:44.531 [2024-11-18 12:06:10.382175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.531 [2024-11-18 12:06:10.382209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.531 qpair failed and we were unable to recover it. 00:37:44.531 [2024-11-18 12:06:10.382353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.531 [2024-11-18 12:06:10.382401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.531 qpair failed and we were unable to recover it. 00:37:44.531 [2024-11-18 12:06:10.382563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.531 [2024-11-18 12:06:10.382600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.531 qpair failed and we were unable to recover it. 
00:37:44.531 [2024-11-18 12:06:10.382745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.531 [2024-11-18 12:06:10.382787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.531 qpair failed and we were unable to recover it. 00:37:44.531 [2024-11-18 12:06:10.382978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.531 [2024-11-18 12:06:10.383011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.531 qpair failed and we were unable to recover it. 00:37:44.531 [2024-11-18 12:06:10.383117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.531 [2024-11-18 12:06:10.383151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.531 qpair failed and we were unable to recover it. 00:37:44.531 [2024-11-18 12:06:10.383342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.531 [2024-11-18 12:06:10.383379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.531 qpair failed and we were unable to recover it. 00:37:44.531 [2024-11-18 12:06:10.383536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.531 [2024-11-18 12:06:10.383572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.531 qpair failed and we were unable to recover it. 
00:37:44.531 [2024-11-18 12:06:10.383700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.531 [2024-11-18 12:06:10.383754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.531 qpair failed and we were unable to recover it. 00:37:44.531 [2024-11-18 12:06:10.383923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.531 [2024-11-18 12:06:10.383976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.531 qpair failed and we were unable to recover it. 00:37:44.531 [2024-11-18 12:06:10.384240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.531 [2024-11-18 12:06:10.384296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.531 qpair failed and we were unable to recover it. 00:37:44.531 [2024-11-18 12:06:10.384428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.531 [2024-11-18 12:06:10.384463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.531 qpair failed and we were unable to recover it. 00:37:44.531 [2024-11-18 12:06:10.384622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.531 [2024-11-18 12:06:10.384674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.531 qpair failed and we were unable to recover it. 
00:37:44.531 [2024-11-18 12:06:10.384792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.531 [2024-11-18 12:06:10.384827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.531 qpair failed and we were unable to recover it. 00:37:44.532 [2024-11-18 12:06:10.384966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.532 [2024-11-18 12:06:10.384999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.532 qpair failed and we were unable to recover it. 00:37:44.532 [2024-11-18 12:06:10.385130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.532 [2024-11-18 12:06:10.385163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.532 qpair failed and we were unable to recover it. 00:37:44.532 [2024-11-18 12:06:10.385303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.532 [2024-11-18 12:06:10.385336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.532 qpair failed and we were unable to recover it. 00:37:44.532 [2024-11-18 12:06:10.385501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.532 [2024-11-18 12:06:10.385550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.532 qpair failed and we were unable to recover it. 
00:37:44.532 [2024-11-18 12:06:10.385716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.532 [2024-11-18 12:06:10.385780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.532 qpair failed and we were unable to recover it. 00:37:44.532 [2024-11-18 12:06:10.386098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.532 [2024-11-18 12:06:10.386157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.532 qpair failed and we were unable to recover it. 00:37:44.532 [2024-11-18 12:06:10.386351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.532 [2024-11-18 12:06:10.386414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.532 qpair failed and we were unable to recover it. 00:37:44.532 [2024-11-18 12:06:10.386570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.532 [2024-11-18 12:06:10.386610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.532 qpair failed and we were unable to recover it. 00:37:44.532 [2024-11-18 12:06:10.386780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.532 [2024-11-18 12:06:10.386814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.532 qpair failed and we were unable to recover it. 
00:37:44.532 [2024-11-18 12:06:10.386919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.532 [2024-11-18 12:06:10.386952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.532 qpair failed and we were unable to recover it. 00:37:44.532 [2024-11-18 12:06:10.387087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.532 [2024-11-18 12:06:10.387121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.532 qpair failed and we were unable to recover it. 00:37:44.532 [2024-11-18 12:06:10.387291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.532 [2024-11-18 12:06:10.387325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.532 qpair failed and we were unable to recover it. 00:37:44.532 [2024-11-18 12:06:10.387444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.532 [2024-11-18 12:06:10.387502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.532 qpair failed and we were unable to recover it. 00:37:44.532 [2024-11-18 12:06:10.387672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.532 [2024-11-18 12:06:10.387720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.532 qpair failed and we were unable to recover it. 
00:37:44.532 [2024-11-18 12:06:10.387876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.532 [2024-11-18 12:06:10.387923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.532 qpair failed and we were unable to recover it. 00:37:44.532 [2024-11-18 12:06:10.388055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.532 [2024-11-18 12:06:10.388093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.532 qpair failed and we were unable to recover it. 00:37:44.532 [2024-11-18 12:06:10.388192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.532 [2024-11-18 12:06:10.388228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.532 qpair failed and we were unable to recover it. 00:37:44.532 [2024-11-18 12:06:10.388369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.532 [2024-11-18 12:06:10.388404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.532 qpair failed and we were unable to recover it. 00:37:44.532 [2024-11-18 12:06:10.388529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.532 [2024-11-18 12:06:10.388564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.532 qpair failed and we were unable to recover it. 
00:37:44.532 [2024-11-18 12:06:10.388672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.532 [2024-11-18 12:06:10.388706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.532 qpair failed and we were unable to recover it. 00:37:44.532 [2024-11-18 12:06:10.388824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.532 [2024-11-18 12:06:10.388859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.532 qpair failed and we were unable to recover it. 00:37:44.532 [2024-11-18 12:06:10.389000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.532 [2024-11-18 12:06:10.389034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.532 qpair failed and we were unable to recover it. 00:37:44.532 [2024-11-18 12:06:10.389183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.532 [2024-11-18 12:06:10.389231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.532 qpair failed and we were unable to recover it. 00:37:44.532 [2024-11-18 12:06:10.389357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.532 [2024-11-18 12:06:10.389406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.532 qpair failed and we were unable to recover it. 
00:37:44.532 [2024-11-18 12:06:10.389535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.532 [2024-11-18 12:06:10.389583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.532 qpair failed and we were unable to recover it. 00:37:44.532 [2024-11-18 12:06:10.389698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.532 [2024-11-18 12:06:10.389735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.532 qpair failed and we were unable to recover it. 00:37:44.532 [2024-11-18 12:06:10.389840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.532 [2024-11-18 12:06:10.389874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.532 qpair failed and we were unable to recover it. 00:37:44.532 [2024-11-18 12:06:10.389986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.532 [2024-11-18 12:06:10.390020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.532 qpair failed and we were unable to recover it. 00:37:44.532 [2024-11-18 12:06:10.390221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.532 [2024-11-18 12:06:10.390285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.532 qpair failed and we were unable to recover it. 
00:37:44.532 [2024-11-18 12:06:10.390478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.532 [2024-11-18 12:06:10.390537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.532 qpair failed and we were unable to recover it. 00:37:44.532 [2024-11-18 12:06:10.390695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.532 [2024-11-18 12:06:10.390728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.533 qpair failed and we were unable to recover it. 00:37:44.533 [2024-11-18 12:06:10.390864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.533 [2024-11-18 12:06:10.390901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.533 qpair failed and we were unable to recover it. 00:37:44.533 [2024-11-18 12:06:10.391038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.533 [2024-11-18 12:06:10.391085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.533 qpair failed and we were unable to recover it. 00:37:44.533 [2024-11-18 12:06:10.391256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.533 [2024-11-18 12:06:10.391290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.533 qpair failed and we were unable to recover it. 
00:37:44.533 [2024-11-18 12:06:10.391431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.533 [2024-11-18 12:06:10.391465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.533 qpair failed and we were unable to recover it. 00:37:44.533 [2024-11-18 12:06:10.391625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.533 [2024-11-18 12:06:10.391678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.533 qpair failed and we were unable to recover it. 00:37:44.533 [2024-11-18 12:06:10.391824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.533 [2024-11-18 12:06:10.391878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.533 qpair failed and we were unable to recover it. 00:37:44.533 [2024-11-18 12:06:10.392116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.533 [2024-11-18 12:06:10.392176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.533 qpair failed and we were unable to recover it. 00:37:44.533 [2024-11-18 12:06:10.392326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.533 [2024-11-18 12:06:10.392360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.533 qpair failed and we were unable to recover it. 
00:37:44.533 [2024-11-18 12:06:10.392473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.533 [2024-11-18 12:06:10.392517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.533 qpair failed and we were unable to recover it. 00:37:44.533 [2024-11-18 12:06:10.392642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.533 [2024-11-18 12:06:10.392695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.533 qpair failed and we were unable to recover it. 00:37:44.533 [2024-11-18 12:06:10.392828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.533 [2024-11-18 12:06:10.392863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.533 qpair failed and we were unable to recover it. 00:37:44.533 [2024-11-18 12:06:10.393011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.533 [2024-11-18 12:06:10.393059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.533 qpair failed and we were unable to recover it. 00:37:44.533 [2024-11-18 12:06:10.393204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.533 [2024-11-18 12:06:10.393242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.533 qpair failed and we were unable to recover it. 
00:37:44.533 [2024-11-18 12:06:10.393397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.533 [2024-11-18 12:06:10.393433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.533 qpair failed and we were unable to recover it. 00:37:44.533 [2024-11-18 12:06:10.393550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.533 [2024-11-18 12:06:10.393586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.533 qpair failed and we were unable to recover it. 00:37:44.533 [2024-11-18 12:06:10.393698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.533 [2024-11-18 12:06:10.393732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.533 qpair failed and we were unable to recover it. 00:37:44.533 [2024-11-18 12:06:10.393874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.533 [2024-11-18 12:06:10.393915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.533 qpair failed and we were unable to recover it. 00:37:44.533 [2024-11-18 12:06:10.394108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.533 [2024-11-18 12:06:10.394147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.533 qpair failed and we were unable to recover it. 
00:37:44.533 [2024-11-18 12:06:10.394292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.533 [2024-11-18 12:06:10.394342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.533 qpair failed and we were unable to recover it. 00:37:44.533 [2024-11-18 12:06:10.394506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.533 [2024-11-18 12:06:10.394541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.533 qpair failed and we were unable to recover it. 00:37:44.533 [2024-11-18 12:06:10.394678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.533 [2024-11-18 12:06:10.394712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.533 qpair failed and we were unable to recover it. 00:37:44.533 [2024-11-18 12:06:10.394848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.533 [2024-11-18 12:06:10.394882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.533 qpair failed and we were unable to recover it. 00:37:44.533 [2024-11-18 12:06:10.395023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.533 [2024-11-18 12:06:10.395056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.533 qpair failed and we were unable to recover it. 
00:37:44.533 [2024-11-18 12:06:10.395231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.533 [2024-11-18 12:06:10.395286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.533 qpair failed and we were unable to recover it. 00:37:44.533 [2024-11-18 12:06:10.395459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.533 [2024-11-18 12:06:10.395520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.533 qpair failed and we were unable to recover it. 00:37:44.533 [2024-11-18 12:06:10.395654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.533 [2024-11-18 12:06:10.395702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.533 qpair failed and we were unable to recover it. 00:37:44.533 [2024-11-18 12:06:10.395843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.533 [2024-11-18 12:06:10.395878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.533 qpair failed and we were unable to recover it. 00:37:44.533 [2024-11-18 12:06:10.396012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.533 [2024-11-18 12:06:10.396046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.533 qpair failed and we were unable to recover it. 
00:37:44.533 [2024-11-18 12:06:10.396223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.533 [2024-11-18 12:06:10.396258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.533 qpair failed and we were unable to recover it. 00:37:44.533 [2024-11-18 12:06:10.396445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.533 [2024-11-18 12:06:10.396483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.533 qpair failed and we were unable to recover it. 00:37:44.533 [2024-11-18 12:06:10.396626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.533 [2024-11-18 12:06:10.396661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.533 qpair failed and we were unable to recover it. 00:37:44.533 [2024-11-18 12:06:10.396802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.533 [2024-11-18 12:06:10.396836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.533 qpair failed and we were unable to recover it. 00:37:44.533 [2024-11-18 12:06:10.396995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.533 [2024-11-18 12:06:10.397030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.533 qpair failed and we were unable to recover it. 
00:37:44.533 [2024-11-18 12:06:10.397127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.533 [2024-11-18 12:06:10.397173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.533 qpair failed and we were unable to recover it. 00:37:44.533 [2024-11-18 12:06:10.397290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.533 [2024-11-18 12:06:10.397324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.533 qpair failed and we were unable to recover it. 00:37:44.533 [2024-11-18 12:06:10.397428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.533 [2024-11-18 12:06:10.397462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.533 qpair failed and we were unable to recover it. 00:37:44.534 [2024-11-18 12:06:10.397607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.534 [2024-11-18 12:06:10.397642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.534 qpair failed and we were unable to recover it. 00:37:44.534 [2024-11-18 12:06:10.397777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.534 [2024-11-18 12:06:10.397812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.534 qpair failed and we were unable to recover it. 
00:37:44.534 [2024-11-18 12:06:10.397918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.534 [2024-11-18 12:06:10.397952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.534 qpair failed and we were unable to recover it. 00:37:44.534 [2024-11-18 12:06:10.398118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.534 [2024-11-18 12:06:10.398155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.534 qpair failed and we were unable to recover it. 00:37:44.534 [2024-11-18 12:06:10.398316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.534 [2024-11-18 12:06:10.398363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.534 qpair failed and we were unable to recover it. 00:37:44.534 [2024-11-18 12:06:10.398557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.534 [2024-11-18 12:06:10.398594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.534 qpair failed and we were unable to recover it. 00:37:44.534 [2024-11-18 12:06:10.398773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.534 [2024-11-18 12:06:10.398808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.534 qpair failed and we were unable to recover it. 
00:37:44.534 [2024-11-18 12:06:10.398955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.534 [2024-11-18 12:06:10.399008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.534 qpair failed and we were unable to recover it. 00:37:44.534 [2024-11-18 12:06:10.399239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.534 [2024-11-18 12:06:10.399307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.534 qpair failed and we were unable to recover it. 00:37:44.534 [2024-11-18 12:06:10.399444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.534 [2024-11-18 12:06:10.399482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.534 qpair failed and we were unable to recover it. 00:37:44.534 [2024-11-18 12:06:10.399615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.534 [2024-11-18 12:06:10.399663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.534 qpair failed and we were unable to recover it. 00:37:44.534 [2024-11-18 12:06:10.399848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.534 [2024-11-18 12:06:10.399885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.534 qpair failed and we were unable to recover it. 
00:37:44.534 [2024-11-18 12:06:10.399997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.534 [2024-11-18 12:06:10.400033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.534 qpair failed and we were unable to recover it. 00:37:44.534 [2024-11-18 12:06:10.400174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.534 [2024-11-18 12:06:10.400234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.534 qpair failed and we were unable to recover it. 00:37:44.534 [2024-11-18 12:06:10.400408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.534 [2024-11-18 12:06:10.400442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.534 qpair failed and we were unable to recover it. 00:37:44.534 [2024-11-18 12:06:10.400599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.534 [2024-11-18 12:06:10.400635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.534 qpair failed and we were unable to recover it. 00:37:44.534 [2024-11-18 12:06:10.400763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.534 [2024-11-18 12:06:10.400804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.534 qpair failed and we were unable to recover it. 
00:37:44.534 [2024-11-18 12:06:10.400927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.534 [2024-11-18 12:06:10.400966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.534 qpair failed and we were unable to recover it. 00:37:44.534 [2024-11-18 12:06:10.401107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.534 [2024-11-18 12:06:10.401146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.534 qpair failed and we were unable to recover it. 00:37:44.534 [2024-11-18 12:06:10.401300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.534 [2024-11-18 12:06:10.401338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.534 qpair failed and we were unable to recover it. 00:37:44.534 [2024-11-18 12:06:10.401515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.534 [2024-11-18 12:06:10.401557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.534 qpair failed and we were unable to recover it. 00:37:44.534 [2024-11-18 12:06:10.401693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.534 [2024-11-18 12:06:10.401728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.534 qpair failed and we were unable to recover it. 
00:37:44.534 [2024-11-18 12:06:10.401872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.534 [2024-11-18 12:06:10.401906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.534 qpair failed and we were unable to recover it. 00:37:44.534 [2024-11-18 12:06:10.402070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.534 [2024-11-18 12:06:10.402104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.534 qpair failed and we were unable to recover it. 00:37:44.534 [2024-11-18 12:06:10.402214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.534 [2024-11-18 12:06:10.402248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.534 qpair failed and we were unable to recover it. 00:37:44.534 [2024-11-18 12:06:10.402362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.534 [2024-11-18 12:06:10.402396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.534 qpair failed and we were unable to recover it. 00:37:44.534 [2024-11-18 12:06:10.402559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.534 [2024-11-18 12:06:10.402594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.534 qpair failed and we were unable to recover it. 
00:37:44.534 [2024-11-18 12:06:10.402710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.534 [2024-11-18 12:06:10.402758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.534 qpair failed and we were unable to recover it. 00:37:44.534 [2024-11-18 12:06:10.402929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.534 [2024-11-18 12:06:10.402977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.534 qpair failed and we were unable to recover it. 00:37:44.534 [2024-11-18 12:06:10.403099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.534 [2024-11-18 12:06:10.403136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.534 qpair failed and we were unable to recover it. 00:37:44.534 [2024-11-18 12:06:10.403299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.534 [2024-11-18 12:06:10.403333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.534 qpair failed and we were unable to recover it. 00:37:44.534 [2024-11-18 12:06:10.403458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.534 [2024-11-18 12:06:10.403509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.534 qpair failed and we were unable to recover it. 
00:37:44.534 [2024-11-18 12:06:10.403634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.534 [2024-11-18 12:06:10.403670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.534 qpair failed and we were unable to recover it. 00:37:44.534 [2024-11-18 12:06:10.403811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.534 [2024-11-18 12:06:10.403849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.534 qpair failed and we were unable to recover it. 00:37:44.534 [2024-11-18 12:06:10.404012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.534 [2024-11-18 12:06:10.404046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.534 qpair failed and we were unable to recover it. 00:37:44.534 [2024-11-18 12:06:10.404247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.534 [2024-11-18 12:06:10.404282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.534 qpair failed and we were unable to recover it. 00:37:44.534 [2024-11-18 12:06:10.404411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.535 [2024-11-18 12:06:10.404446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.535 qpair failed and we were unable to recover it. 
00:37:44.535 [2024-11-18 12:06:10.404563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.535 [2024-11-18 12:06:10.404601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.535 qpair failed and we were unable to recover it. 00:37:44.535 [2024-11-18 12:06:10.404746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.535 [2024-11-18 12:06:10.404781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.535 qpair failed and we were unable to recover it. 00:37:44.535 [2024-11-18 12:06:10.404896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.535 [2024-11-18 12:06:10.404930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.535 qpair failed and we were unable to recover it. 00:37:44.535 [2024-11-18 12:06:10.405036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.535 [2024-11-18 12:06:10.405070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.535 qpair failed and we were unable to recover it. 00:37:44.535 [2024-11-18 12:06:10.405206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.535 [2024-11-18 12:06:10.405253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.535 qpair failed and we were unable to recover it. 
00:37:44.535 [2024-11-18 12:06:10.405398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.535 [2024-11-18 12:06:10.405435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.535 qpair failed and we were unable to recover it. 00:37:44.535 [2024-11-18 12:06:10.405575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.535 [2024-11-18 12:06:10.405615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.535 qpair failed and we were unable to recover it. 00:37:44.535 [2024-11-18 12:06:10.405770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.535 [2024-11-18 12:06:10.405834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.535 qpair failed and we were unable to recover it. 00:37:44.535 [2024-11-18 12:06:10.405993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.535 [2024-11-18 12:06:10.406029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.535 qpair failed and we were unable to recover it. 00:37:44.535 [2024-11-18 12:06:10.406262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.535 [2024-11-18 12:06:10.406324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.535 qpair failed and we were unable to recover it. 
00:37:44.535 [2024-11-18 12:06:10.406458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.535 [2024-11-18 12:06:10.406500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.535 qpair failed and we were unable to recover it. 00:37:44.535 [2024-11-18 12:06:10.406610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.535 [2024-11-18 12:06:10.406644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.535 qpair failed and we were unable to recover it. 00:37:44.535 [2024-11-18 12:06:10.406841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.535 [2024-11-18 12:06:10.406889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.535 qpair failed and we were unable to recover it. 00:37:44.535 [2024-11-18 12:06:10.407022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.535 [2024-11-18 12:06:10.407062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.535 qpair failed and we were unable to recover it. 00:37:44.535 [2024-11-18 12:06:10.407203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.535 [2024-11-18 12:06:10.407242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.535 qpair failed and we were unable to recover it. 
00:37:44.535 [2024-11-18 12:06:10.407395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.535 [2024-11-18 12:06:10.407428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.535 qpair failed and we were unable to recover it. 00:37:44.535 [2024-11-18 12:06:10.407565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.535 [2024-11-18 12:06:10.407613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.535 qpair failed and we were unable to recover it. 00:37:44.535 [2024-11-18 12:06:10.407742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.535 [2024-11-18 12:06:10.407790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.535 qpair failed and we were unable to recover it. 00:37:44.535 [2024-11-18 12:06:10.407975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.535 [2024-11-18 12:06:10.408016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.535 qpair failed and we were unable to recover it. 00:37:44.535 [2024-11-18 12:06:10.408144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.535 [2024-11-18 12:06:10.408178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.535 qpair failed and we were unable to recover it. 
00:37:44.535 [2024-11-18 12:06:10.408324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.535 [2024-11-18 12:06:10.408363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.535 qpair failed and we were unable to recover it. 00:37:44.535 [2024-11-18 12:06:10.408486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.535 [2024-11-18 12:06:10.408547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.535 qpair failed and we were unable to recover it. 00:37:44.535 [2024-11-18 12:06:10.408691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.535 [2024-11-18 12:06:10.408726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.535 qpair failed and we were unable to recover it. 00:37:44.535 [2024-11-18 12:06:10.408857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.535 [2024-11-18 12:06:10.408911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.535 qpair failed and we were unable to recover it. 00:37:44.535 [2024-11-18 12:06:10.409069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.535 [2024-11-18 12:06:10.409105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.535 qpair failed and we were unable to recover it. 
00:37:44.815 [2024-11-18 12:06:10.409247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.815 [2024-11-18 12:06:10.409281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.815 qpair failed and we were unable to recover it.
00:37:44.815 [2024-11-18 12:06:10.409451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.815 [2024-11-18 12:06:10.409501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.815 qpair failed and we were unable to recover it.
00:37:44.815 [2024-11-18 12:06:10.409658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.815 [2024-11-18 12:06:10.409695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.815 qpair failed and we were unable to recover it.
00:37:44.815 [2024-11-18 12:06:10.409822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.815 [2024-11-18 12:06:10.409861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.815 qpair failed and we were unable to recover it.
00:37:44.815 [2024-11-18 12:06:10.409989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.815 [2024-11-18 12:06:10.410028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.815 qpair failed and we were unable to recover it.
00:37:44.815 [2024-11-18 12:06:10.410250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.815 [2024-11-18 12:06:10.410288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.815 qpair failed and we were unable to recover it.
00:37:44.815 [2024-11-18 12:06:10.410434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.815 [2024-11-18 12:06:10.410467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.815 qpair failed and we were unable to recover it.
00:37:44.815 [2024-11-18 12:06:10.410609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.815 [2024-11-18 12:06:10.410644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.815 qpair failed and we were unable to recover it.
00:37:44.815 [2024-11-18 12:06:10.410793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.815 [2024-11-18 12:06:10.410861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.815 qpair failed and we were unable to recover it.
00:37:44.815 [2024-11-18 12:06:10.411033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.815 [2024-11-18 12:06:10.411069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.815 qpair failed and we were unable to recover it.
00:37:44.815 [2024-11-18 12:06:10.411194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.815 [2024-11-18 12:06:10.411243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.815 qpair failed and we were unable to recover it.
00:37:44.815 [2024-11-18 12:06:10.411357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.815 [2024-11-18 12:06:10.411391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.815 qpair failed and we were unable to recover it.
00:37:44.815 [2024-11-18 12:06:10.411541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.815 [2024-11-18 12:06:10.411579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.815 qpair failed and we were unable to recover it.
00:37:44.815 [2024-11-18 12:06:10.411691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.815 [2024-11-18 12:06:10.411728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.815 qpair failed and we were unable to recover it.
00:37:44.815 [2024-11-18 12:06:10.411866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.815 [2024-11-18 12:06:10.411920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.815 qpair failed and we were unable to recover it.
00:37:44.815 [2024-11-18 12:06:10.412172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.815 [2024-11-18 12:06:10.412207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.815 qpair failed and we were unable to recover it.
00:37:44.815 [2024-11-18 12:06:10.412310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.816 [2024-11-18 12:06:10.412344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.816 qpair failed and we were unable to recover it.
00:37:44.816 [2024-11-18 12:06:10.412483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.816 [2024-11-18 12:06:10.412537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.816 qpair failed and we were unable to recover it.
00:37:44.816 [2024-11-18 12:06:10.412649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.816 [2024-11-18 12:06:10.412685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.816 qpair failed and we were unable to recover it.
00:37:44.816 [2024-11-18 12:06:10.412853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.816 [2024-11-18 12:06:10.412887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.816 qpair failed and we were unable to recover it.
00:37:44.816 [2024-11-18 12:06:10.413032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.816 [2024-11-18 12:06:10.413065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.816 qpair failed and we were unable to recover it.
00:37:44.816 [2024-11-18 12:06:10.413199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.816 [2024-11-18 12:06:10.413233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.816 qpair failed and we were unable to recover it.
00:37:44.816 [2024-11-18 12:06:10.413405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.816 [2024-11-18 12:06:10.413441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.816 qpair failed and we were unable to recover it.
00:37:44.816 [2024-11-18 12:06:10.413586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.816 [2024-11-18 12:06:10.413641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.816 qpair failed and we were unable to recover it.
00:37:44.816 [2024-11-18 12:06:10.413820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.816 [2024-11-18 12:06:10.413886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.816 qpair failed and we were unable to recover it.
00:37:44.816 [2024-11-18 12:06:10.414122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.816 [2024-11-18 12:06:10.414180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.816 qpair failed and we were unable to recover it.
00:37:44.816 [2024-11-18 12:06:10.414337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.816 [2024-11-18 12:06:10.414376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.816 qpair failed and we were unable to recover it.
00:37:44.816 [2024-11-18 12:06:10.414512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.816 [2024-11-18 12:06:10.414563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.816 qpair failed and we were unable to recover it.
00:37:44.816 [2024-11-18 12:06:10.414688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.816 [2024-11-18 12:06:10.414727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.816 qpair failed and we were unable to recover it.
00:37:44.816 [2024-11-18 12:06:10.415021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.816 [2024-11-18 12:06:10.415074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.816 qpair failed and we were unable to recover it.
00:37:44.816 [2024-11-18 12:06:10.415222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.816 [2024-11-18 12:06:10.415274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.816 qpair failed and we were unable to recover it.
00:37:44.816 [2024-11-18 12:06:10.415448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.816 [2024-11-18 12:06:10.415486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.816 qpair failed and we were unable to recover it.
00:37:44.816 [2024-11-18 12:06:10.415649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.816 [2024-11-18 12:06:10.415683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.816 qpair failed and we were unable to recover it.
00:37:44.816 [2024-11-18 12:06:10.415807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.816 [2024-11-18 12:06:10.415852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.816 qpair failed and we were unable to recover it.
00:37:44.816 [2024-11-18 12:06:10.415993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.816 [2024-11-18 12:06:10.416026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.816 qpair failed and we were unable to recover it.
00:37:44.816 [2024-11-18 12:06:10.416151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.816 [2024-11-18 12:06:10.416201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.816 qpair failed and we were unable to recover it.
00:37:44.816 [2024-11-18 12:06:10.416360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.816 [2024-11-18 12:06:10.416397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.816 qpair failed and we were unable to recover it.
00:37:44.816 [2024-11-18 12:06:10.416564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.816 [2024-11-18 12:06:10.416605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.816 qpair failed and we were unable to recover it.
00:37:44.816 [2024-11-18 12:06:10.416751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.816 [2024-11-18 12:06:10.416833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.816 qpair failed and we were unable to recover it.
00:37:44.816 [2024-11-18 12:06:10.416983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.816 [2024-11-18 12:06:10.417018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.816 qpair failed and we were unable to recover it.
00:37:44.816 [2024-11-18 12:06:10.417151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.816 [2024-11-18 12:06:10.417186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.816 qpair failed and we were unable to recover it.
00:37:44.816 [2024-11-18 12:06:10.417329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.816 [2024-11-18 12:06:10.417364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.816 qpair failed and we were unable to recover it.
00:37:44.816 [2024-11-18 12:06:10.417485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.816 [2024-11-18 12:06:10.417543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.816 qpair failed and we were unable to recover it.
00:37:44.816 [2024-11-18 12:06:10.417663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.816 [2024-11-18 12:06:10.417701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.816 qpair failed and we were unable to recover it.
00:37:44.816 [2024-11-18 12:06:10.417856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.816 [2024-11-18 12:06:10.417891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.816 qpair failed and we were unable to recover it.
00:37:44.816 [2024-11-18 12:06:10.418051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.816 [2024-11-18 12:06:10.418100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.816 qpair failed and we were unable to recover it.
00:37:44.816 [2024-11-18 12:06:10.418246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.816 [2024-11-18 12:06:10.418299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.816 qpair failed and we were unable to recover it.
00:37:44.816 [2024-11-18 12:06:10.418462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.816 [2024-11-18 12:06:10.418513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.816 qpair failed and we were unable to recover it.
00:37:44.816 [2024-11-18 12:06:10.418670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.816 [2024-11-18 12:06:10.418719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.816 qpair failed and we were unable to recover it.
00:37:44.816 [2024-11-18 12:06:10.418855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.816 [2024-11-18 12:06:10.418910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.816 qpair failed and we were unable to recover it.
00:37:44.816 [2024-11-18 12:06:10.419112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.816 [2024-11-18 12:06:10.419171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.816 qpair failed and we were unable to recover it.
00:37:44.816 [2024-11-18 12:06:10.419386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.816 [2024-11-18 12:06:10.419420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.816 qpair failed and we were unable to recover it.
00:37:44.816 [2024-11-18 12:06:10.419575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.816 [2024-11-18 12:06:10.419611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.816 qpair failed and we were unable to recover it.
00:37:44.816 [2024-11-18 12:06:10.419749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.817 [2024-11-18 12:06:10.419806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.817 qpair failed and we were unable to recover it.
00:37:44.817 [2024-11-18 12:06:10.419967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.817 [2024-11-18 12:06:10.420018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.817 qpair failed and we were unable to recover it.
00:37:44.817 [2024-11-18 12:06:10.420118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.817 [2024-11-18 12:06:10.420152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.817 qpair failed and we were unable to recover it.
00:37:44.817 [2024-11-18 12:06:10.420294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.817 [2024-11-18 12:06:10.420328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.817 qpair failed and we were unable to recover it.
00:37:44.817 [2024-11-18 12:06:10.420467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.817 [2024-11-18 12:06:10.420519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.817 qpair failed and we were unable to recover it.
00:37:44.817 [2024-11-18 12:06:10.420630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.817 [2024-11-18 12:06:10.420664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.817 qpair failed and we were unable to recover it.
00:37:44.817 [2024-11-18 12:06:10.420805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.817 [2024-11-18 12:06:10.420840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.817 qpair failed and we were unable to recover it.
00:37:44.817 [2024-11-18 12:06:10.420977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.817 [2024-11-18 12:06:10.421011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.817 qpair failed and we were unable to recover it.
00:37:44.817 [2024-11-18 12:06:10.421152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.817 [2024-11-18 12:06:10.421186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.817 qpair failed and we were unable to recover it.
00:37:44.817 [2024-11-18 12:06:10.421320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.817 [2024-11-18 12:06:10.421355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.817 qpair failed and we were unable to recover it.
00:37:44.817 [2024-11-18 12:06:10.421509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.817 [2024-11-18 12:06:10.421558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.817 qpair failed and we were unable to recover it.
00:37:44.817 [2024-11-18 12:06:10.421688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.817 [2024-11-18 12:06:10.421736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.817 qpair failed and we were unable to recover it.
00:37:44.817 [2024-11-18 12:06:10.421910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.817 [2024-11-18 12:06:10.421963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.817 qpair failed and we were unable to recover it.
00:37:44.817 [2024-11-18 12:06:10.422147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.817 [2024-11-18 12:06:10.422199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.817 qpair failed and we were unable to recover it.
00:37:44.817 [2024-11-18 12:06:10.422321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.817 [2024-11-18 12:06:10.422355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.817 qpair failed and we were unable to recover it.
00:37:44.817 [2024-11-18 12:06:10.422544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.817 [2024-11-18 12:06:10.422605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.817 qpair failed and we were unable to recover it.
00:37:44.817 [2024-11-18 12:06:10.422742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.817 [2024-11-18 12:06:10.422787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.817 qpair failed and we were unable to recover it.
00:37:44.817 [2024-11-18 12:06:10.422934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.817 [2024-11-18 12:06:10.422987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.817 qpair failed and we were unable to recover it.
00:37:44.817 [2024-11-18 12:06:10.423124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.817 [2024-11-18 12:06:10.423177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.817 qpair failed and we were unable to recover it.
00:37:44.817 [2024-11-18 12:06:10.423306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.817 [2024-11-18 12:06:10.423340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.817 qpair failed and we were unable to recover it.
00:37:44.817 [2024-11-18 12:06:10.423447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.817 [2024-11-18 12:06:10.423488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.817 qpair failed and we were unable to recover it.
00:37:44.817 [2024-11-18 12:06:10.423664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.817 [2024-11-18 12:06:10.423712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.817 qpair failed and we were unable to recover it.
00:37:44.817 [2024-11-18 12:06:10.423941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.817 [2024-11-18 12:06:10.423982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.817 qpair failed and we were unable to recover it.
00:37:44.817 [2024-11-18 12:06:10.424182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.817 [2024-11-18 12:06:10.424244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.817 qpair failed and we were unable to recover it.
00:37:44.817 [2024-11-18 12:06:10.424403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.817 [2024-11-18 12:06:10.424441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.817 qpair failed and we were unable to recover it.
00:37:44.817 [2024-11-18 12:06:10.424628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.817 [2024-11-18 12:06:10.424669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.817 qpair failed and we were unable to recover it.
00:37:44.817 [2024-11-18 12:06:10.424855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.817 [2024-11-18 12:06:10.424922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.817 qpair failed and we were unable to recover it.
00:37:44.817 [2024-11-18 12:06:10.425142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.817 [2024-11-18 12:06:10.425183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.817 qpair failed and we were unable to recover it.
00:37:44.817 [2024-11-18 12:06:10.425371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.817 [2024-11-18 12:06:10.425443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.817 qpair failed and we were unable to recover it.
00:37:44.817 [2024-11-18 12:06:10.425628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.817 [2024-11-18 12:06:10.425662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.817 qpair failed and we were unable to recover it.
00:37:44.817 [2024-11-18 12:06:10.425798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.817 [2024-11-18 12:06:10.425832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.817 qpair failed and we were unable to recover it.
00:37:44.817 [2024-11-18 12:06:10.426012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.817 [2024-11-18 12:06:10.426050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.817 qpair failed and we were unable to recover it.
00:37:44.817 [2024-11-18 12:06:10.426196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.817 [2024-11-18 12:06:10.426233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.817 qpair failed and we were unable to recover it.
00:37:44.817 [2024-11-18 12:06:10.426374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.817 [2024-11-18 12:06:10.426428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.817 qpair failed and we were unable to recover it.
00:37:44.817 [2024-11-18 12:06:10.426670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.817 [2024-11-18 12:06:10.426718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.817 qpair failed and we were unable to recover it.
00:37:44.817 [2024-11-18 12:06:10.426877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.817 [2024-11-18 12:06:10.426935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.817 qpair failed and we were unable to recover it.
00:37:44.817 [2024-11-18 12:06:10.427099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.817 [2024-11-18 12:06:10.427151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.817 qpair failed and we were unable to recover it.
00:37:44.817 [2024-11-18 12:06:10.427287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.817 [2024-11-18 12:06:10.427338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.818 qpair failed and we were unable to recover it.
00:37:44.818 [2024-11-18 12:06:10.427486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.818 [2024-11-18 12:06:10.427530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.818 qpair failed and we were unable to recover it.
00:37:44.818 [2024-11-18 12:06:10.427673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.818 [2024-11-18 12:06:10.427707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.818 qpair failed and we were unable to recover it.
00:37:44.818 [2024-11-18 12:06:10.427871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.818 [2024-11-18 12:06:10.427920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.818 qpair failed and we were unable to recover it.
00:37:44.818 [2024-11-18 12:06:10.428092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.818 [2024-11-18 12:06:10.428129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.818 qpair failed and we were unable to recover it.
00:37:44.818 [2024-11-18 12:06:10.428235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.818 [2024-11-18 12:06:10.428269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.818 qpair failed and we were unable to recover it. 00:37:44.818 [2024-11-18 12:06:10.428427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.818 [2024-11-18 12:06:10.428460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.818 qpair failed and we were unable to recover it. 00:37:44.818 [2024-11-18 12:06:10.428615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.818 [2024-11-18 12:06:10.428649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.818 qpair failed and we were unable to recover it. 00:37:44.818 [2024-11-18 12:06:10.428831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.818 [2024-11-18 12:06:10.428884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.818 qpair failed and we were unable to recover it. 00:37:44.818 [2024-11-18 12:06:10.429094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.818 [2024-11-18 12:06:10.429153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.818 qpair failed and we were unable to recover it. 
00:37:44.818 [2024-11-18 12:06:10.429350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.818 [2024-11-18 12:06:10.429407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.818 qpair failed and we were unable to recover it. 00:37:44.818 [2024-11-18 12:06:10.429558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.818 [2024-11-18 12:06:10.429593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.818 qpair failed and we were unable to recover it. 00:37:44.818 [2024-11-18 12:06:10.429715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.818 [2024-11-18 12:06:10.429780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.818 qpair failed and we were unable to recover it. 00:37:44.818 [2024-11-18 12:06:10.429899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.818 [2024-11-18 12:06:10.429952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.818 qpair failed and we were unable to recover it. 00:37:44.818 [2024-11-18 12:06:10.430151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.818 [2024-11-18 12:06:10.430203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.818 qpair failed and we were unable to recover it. 
00:37:44.818 [2024-11-18 12:06:10.430346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.818 [2024-11-18 12:06:10.430412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.818 qpair failed and we were unable to recover it. 00:37:44.818 [2024-11-18 12:06:10.430535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.818 [2024-11-18 12:06:10.430570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.818 qpair failed and we were unable to recover it. 00:37:44.818 [2024-11-18 12:06:10.430679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.818 [2024-11-18 12:06:10.430713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.818 qpair failed and we were unable to recover it. 00:37:44.818 [2024-11-18 12:06:10.430863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.818 [2024-11-18 12:06:10.430898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.818 qpair failed and we were unable to recover it. 00:37:44.818 [2024-11-18 12:06:10.431059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.818 [2024-11-18 12:06:10.431093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.818 qpair failed and we were unable to recover it. 
00:37:44.818 [2024-11-18 12:06:10.431224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.818 [2024-11-18 12:06:10.431258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.818 qpair failed and we were unable to recover it. 00:37:44.818 [2024-11-18 12:06:10.431400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.818 [2024-11-18 12:06:10.431439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.818 qpair failed and we were unable to recover it. 00:37:44.818 [2024-11-18 12:06:10.431609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.818 [2024-11-18 12:06:10.431665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.818 qpair failed and we were unable to recover it. 00:37:44.818 [2024-11-18 12:06:10.431820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.818 [2024-11-18 12:06:10.431856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.818 qpair failed and we were unable to recover it. 00:37:44.818 [2024-11-18 12:06:10.432027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.818 [2024-11-18 12:06:10.432063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.818 qpair failed and we were unable to recover it. 
00:37:44.818 [2024-11-18 12:06:10.432182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.818 [2024-11-18 12:06:10.432218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.818 qpair failed and we were unable to recover it. 00:37:44.818 [2024-11-18 12:06:10.432371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.818 [2024-11-18 12:06:10.432411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.818 qpair failed and we were unable to recover it. 00:37:44.818 [2024-11-18 12:06:10.432640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.818 [2024-11-18 12:06:10.432695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.818 qpair failed and we were unable to recover it. 00:37:44.818 [2024-11-18 12:06:10.432869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.818 [2024-11-18 12:06:10.432927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.818 qpair failed and we were unable to recover it. 00:37:44.818 [2024-11-18 12:06:10.433061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.818 [2024-11-18 12:06:10.433096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.818 qpair failed and we were unable to recover it. 
00:37:44.818 [2024-11-18 12:06:10.433252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.818 [2024-11-18 12:06:10.433287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.818 qpair failed and we were unable to recover it. 00:37:44.818 [2024-11-18 12:06:10.433436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.818 [2024-11-18 12:06:10.433485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.818 qpair failed and we were unable to recover it. 00:37:44.818 [2024-11-18 12:06:10.433656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.818 [2024-11-18 12:06:10.433704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.818 qpair failed and we were unable to recover it. 00:37:44.818 [2024-11-18 12:06:10.433844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.818 [2024-11-18 12:06:10.433879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.818 qpair failed and we were unable to recover it. 00:37:44.818 [2024-11-18 12:06:10.433976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.818 [2024-11-18 12:06:10.434010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.818 qpair failed and we were unable to recover it. 
00:37:44.818 [2024-11-18 12:06:10.434124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.818 [2024-11-18 12:06:10.434160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.818 qpair failed and we were unable to recover it. 00:37:44.818 [2024-11-18 12:06:10.434301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.818 [2024-11-18 12:06:10.434335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.818 qpair failed and we were unable to recover it. 00:37:44.818 [2024-11-18 12:06:10.434483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.818 [2024-11-18 12:06:10.434524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.818 qpair failed and we were unable to recover it. 00:37:44.818 [2024-11-18 12:06:10.434686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.819 [2024-11-18 12:06:10.434720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.819 qpair failed and we were unable to recover it. 00:37:44.819 [2024-11-18 12:06:10.434856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.819 [2024-11-18 12:06:10.434893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.819 qpair failed and we were unable to recover it. 
00:37:44.819 [2024-11-18 12:06:10.435023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.819 [2024-11-18 12:06:10.435057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.819 qpair failed and we were unable to recover it. 00:37:44.819 [2024-11-18 12:06:10.435248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.819 [2024-11-18 12:06:10.435303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.819 qpair failed and we were unable to recover it. 00:37:44.819 [2024-11-18 12:06:10.435444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.819 [2024-11-18 12:06:10.435480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.819 qpair failed and we were unable to recover it. 00:37:44.819 [2024-11-18 12:06:10.435593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.819 [2024-11-18 12:06:10.435628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.819 qpair failed and we were unable to recover it. 00:37:44.819 [2024-11-18 12:06:10.435732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.819 [2024-11-18 12:06:10.435766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.819 qpair failed and we were unable to recover it. 
00:37:44.819 [2024-11-18 12:06:10.435924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.819 [2024-11-18 12:06:10.435958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.819 qpair failed and we were unable to recover it. 00:37:44.819 [2024-11-18 12:06:10.436092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.819 [2024-11-18 12:06:10.436127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.819 qpair failed and we were unable to recover it. 00:37:44.819 [2024-11-18 12:06:10.436236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.819 [2024-11-18 12:06:10.436274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.819 qpair failed and we were unable to recover it. 00:37:44.819 [2024-11-18 12:06:10.436420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.819 [2024-11-18 12:06:10.436455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.819 qpair failed and we were unable to recover it. 00:37:44.819 [2024-11-18 12:06:10.436626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.819 [2024-11-18 12:06:10.436662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.819 qpair failed and we were unable to recover it. 
00:37:44.819 [2024-11-18 12:06:10.436801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.819 [2024-11-18 12:06:10.436835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.819 qpair failed and we were unable to recover it. 00:37:44.819 [2024-11-18 12:06:10.436982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.819 [2024-11-18 12:06:10.437034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.819 qpair failed and we were unable to recover it. 00:37:44.819 [2024-11-18 12:06:10.437227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.819 [2024-11-18 12:06:10.437278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.819 qpair failed and we were unable to recover it. 00:37:44.819 [2024-11-18 12:06:10.437418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.819 [2024-11-18 12:06:10.437455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.819 qpair failed and we were unable to recover it. 00:37:44.819 [2024-11-18 12:06:10.437586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.819 [2024-11-18 12:06:10.437622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.819 qpair failed and we were unable to recover it. 
00:37:44.819 [2024-11-18 12:06:10.437738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.819 [2024-11-18 12:06:10.437774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.819 qpair failed and we were unable to recover it. 00:37:44.819 [2024-11-18 12:06:10.437912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.819 [2024-11-18 12:06:10.437951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.819 qpair failed and we were unable to recover it. 00:37:44.819 [2024-11-18 12:06:10.438168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.819 [2024-11-18 12:06:10.438229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.819 qpair failed and we were unable to recover it. 00:37:44.819 [2024-11-18 12:06:10.438352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.819 [2024-11-18 12:06:10.438390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.819 qpair failed and we were unable to recover it. 00:37:44.819 [2024-11-18 12:06:10.438571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.819 [2024-11-18 12:06:10.438606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.819 qpair failed and we were unable to recover it. 
00:37:44.819 [2024-11-18 12:06:10.438743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.819 [2024-11-18 12:06:10.438794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.819 qpair failed and we were unable to recover it. 00:37:44.819 [2024-11-18 12:06:10.439001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.819 [2024-11-18 12:06:10.439067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.819 qpair failed and we were unable to recover it. 00:37:44.819 [2024-11-18 12:06:10.439282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.819 [2024-11-18 12:06:10.439339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.819 qpair failed and we were unable to recover it. 00:37:44.819 [2024-11-18 12:06:10.439471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.819 [2024-11-18 12:06:10.439516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.819 qpair failed and we were unable to recover it. 00:37:44.819 [2024-11-18 12:06:10.439627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.819 [2024-11-18 12:06:10.439663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.819 qpair failed and we were unable to recover it. 
00:37:44.819 [2024-11-18 12:06:10.439848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.819 [2024-11-18 12:06:10.439882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.819 qpair failed and we were unable to recover it. 00:37:44.819 [2024-11-18 12:06:10.440020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.819 [2024-11-18 12:06:10.440054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.819 qpair failed and we were unable to recover it. 00:37:44.819 [2024-11-18 12:06:10.440311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.819 [2024-11-18 12:06:10.440366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.819 qpair failed and we were unable to recover it. 00:37:44.819 [2024-11-18 12:06:10.440534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.819 [2024-11-18 12:06:10.440588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.819 qpair failed and we were unable to recover it. 00:37:44.819 [2024-11-18 12:06:10.440759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.819 [2024-11-18 12:06:10.440815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.819 qpair failed and we were unable to recover it. 
00:37:44.819 [2024-11-18 12:06:10.440974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.819 [2024-11-18 12:06:10.441027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.819 qpair failed and we were unable to recover it. 00:37:44.819 [2024-11-18 12:06:10.441243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.819 [2024-11-18 12:06:10.441310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.819 qpair failed and we were unable to recover it. 00:37:44.819 [2024-11-18 12:06:10.441443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.819 [2024-11-18 12:06:10.441483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.819 qpair failed and we were unable to recover it. 00:37:44.819 [2024-11-18 12:06:10.441621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.819 [2024-11-18 12:06:10.441684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.819 qpair failed and we were unable to recover it. 00:37:44.819 [2024-11-18 12:06:10.441912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.819 [2024-11-18 12:06:10.441947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.819 qpair failed and we were unable to recover it. 
00:37:44.819 [2024-11-18 12:06:10.442106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.819 [2024-11-18 12:06:10.442157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.819 qpair failed and we were unable to recover it. 00:37:44.820 [2024-11-18 12:06:10.442402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.820 [2024-11-18 12:06:10.442436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.820 qpair failed and we were unable to recover it. 00:37:44.820 [2024-11-18 12:06:10.442586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.820 [2024-11-18 12:06:10.442620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.820 qpair failed and we were unable to recover it. 00:37:44.820 [2024-11-18 12:06:10.442761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.820 [2024-11-18 12:06:10.442798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.820 qpair failed and we were unable to recover it. 00:37:44.820 [2024-11-18 12:06:10.442955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.820 [2024-11-18 12:06:10.442993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.820 qpair failed and we were unable to recover it. 
00:37:44.820 [2024-11-18 12:06:10.443195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.820 [2024-11-18 12:06:10.443232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.820 qpair failed and we were unable to recover it. 00:37:44.820 [2024-11-18 12:06:10.443408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.820 [2024-11-18 12:06:10.443444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.820 qpair failed and we were unable to recover it. 00:37:44.820 [2024-11-18 12:06:10.443631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.820 [2024-11-18 12:06:10.443665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.820 qpair failed and we were unable to recover it. 00:37:44.820 [2024-11-18 12:06:10.443806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.820 [2024-11-18 12:06:10.443854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.820 qpair failed and we were unable to recover it. 00:37:44.820 [2024-11-18 12:06:10.444022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.820 [2024-11-18 12:06:10.444076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.820 qpair failed and we were unable to recover it. 
00:37:44.820 [2024-11-18 12:06:10.444232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.820 [2024-11-18 12:06:10.444273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.820 qpair failed and we were unable to recover it. 00:37:44.820 [2024-11-18 12:06:10.444460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.820 [2024-11-18 12:06:10.444510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.820 qpair failed and we were unable to recover it. 00:37:44.820 [2024-11-18 12:06:10.444638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.820 [2024-11-18 12:06:10.444672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.820 qpair failed and we were unable to recover it. 00:37:44.820 [2024-11-18 12:06:10.444810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.820 [2024-11-18 12:06:10.444844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.820 qpair failed and we were unable to recover it. 00:37:44.820 [2024-11-18 12:06:10.444976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.820 [2024-11-18 12:06:10.445013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.820 qpair failed and we were unable to recover it. 
00:37:44.820 [2024-11-18 12:06:10.445160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.820 [2024-11-18 12:06:10.445198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.820 qpair failed and we were unable to recover it.
00:37:44.820 [2024-11-18 12:06:10.445335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.820 [2024-11-18 12:06:10.445372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.820 qpair failed and we were unable to recover it.
00:37:44.820 [2024-11-18 12:06:10.445558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.820 [2024-11-18 12:06:10.445617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.820 qpair failed and we were unable to recover it.
00:37:44.820 [2024-11-18 12:06:10.445787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.820 [2024-11-18 12:06:10.445824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.820 qpair failed and we were unable to recover it.
00:37:44.820 [2024-11-18 12:06:10.445961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.820 [2024-11-18 12:06:10.446000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.820 qpair failed and we were unable to recover it.
00:37:44.820 [2024-11-18 12:06:10.446149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.820 [2024-11-18 12:06:10.446189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.820 qpair failed and we were unable to recover it.
00:37:44.820 [2024-11-18 12:06:10.446339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.820 [2024-11-18 12:06:10.446376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.820 qpair failed and we were unable to recover it.
00:37:44.820 [2024-11-18 12:06:10.446532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.820 [2024-11-18 12:06:10.446566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.820 qpair failed and we were unable to recover it.
00:37:44.820 [2024-11-18 12:06:10.446700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.820 [2024-11-18 12:06:10.446734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.820 qpair failed and we were unable to recover it.
00:37:44.820 [2024-11-18 12:06:10.446860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.820 [2024-11-18 12:06:10.446894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.820 qpair failed and we were unable to recover it.
00:37:44.820 [2024-11-18 12:06:10.446995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.820 [2024-11-18 12:06:10.447028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.820 qpair failed and we were unable to recover it.
00:37:44.820 [2024-11-18 12:06:10.447210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.820 [2024-11-18 12:06:10.447249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.820 qpair failed and we were unable to recover it.
00:37:44.820 [2024-11-18 12:06:10.447418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.820 [2024-11-18 12:06:10.447485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.820 qpair failed and we were unable to recover it.
00:37:44.820 [2024-11-18 12:06:10.447603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.820 [2024-11-18 12:06:10.447639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.820 qpair failed and we were unable to recover it.
00:37:44.820 [2024-11-18 12:06:10.447781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.820 [2024-11-18 12:06:10.447815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.820 qpair failed and we were unable to recover it.
00:37:44.820 [2024-11-18 12:06:10.447983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.820 [2024-11-18 12:06:10.448017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.820 qpair failed and we were unable to recover it.
00:37:44.820 [2024-11-18 12:06:10.448123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.820 [2024-11-18 12:06:10.448157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.820 qpair failed and we were unable to recover it.
00:37:44.820 [2024-11-18 12:06:10.448309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.821 [2024-11-18 12:06:10.448347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.821 qpair failed and we were unable to recover it.
00:37:44.821 [2024-11-18 12:06:10.448456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.821 [2024-11-18 12:06:10.448511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.821 qpair failed and we were unable to recover it.
00:37:44.821 [2024-11-18 12:06:10.448660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.821 [2024-11-18 12:06:10.448694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.821 qpair failed and we were unable to recover it.
00:37:44.821 [2024-11-18 12:06:10.448854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.821 [2024-11-18 12:06:10.448891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.821 qpair failed and we were unable to recover it.
00:37:44.821 [2024-11-18 12:06:10.449025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.821 [2024-11-18 12:06:10.449088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.821 qpair failed and we were unable to recover it.
00:37:44.821 [2024-11-18 12:06:10.449254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.821 [2024-11-18 12:06:10.449288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.821 qpair failed and we were unable to recover it.
00:37:44.821 [2024-11-18 12:06:10.449392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.821 [2024-11-18 12:06:10.449427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.821 qpair failed and we were unable to recover it.
00:37:44.821 [2024-11-18 12:06:10.449595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.821 [2024-11-18 12:06:10.449648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.821 qpair failed and we were unable to recover it.
00:37:44.821 [2024-11-18 12:06:10.449784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.821 [2024-11-18 12:06:10.449848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.821 qpair failed and we were unable to recover it.
00:37:44.821 [2024-11-18 12:06:10.450048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.821 [2024-11-18 12:06:10.450107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.821 qpair failed and we were unable to recover it.
00:37:44.821 [2024-11-18 12:06:10.450267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.821 [2024-11-18 12:06:10.450321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.821 qpair failed and we were unable to recover it.
00:37:44.821 [2024-11-18 12:06:10.450479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.821 [2024-11-18 12:06:10.450527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.821 qpair failed and we were unable to recover it.
00:37:44.821 [2024-11-18 12:06:10.450650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.821 [2024-11-18 12:06:10.450701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.821 qpair failed and we were unable to recover it.
00:37:44.821 [2024-11-18 12:06:10.450822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.821 [2024-11-18 12:06:10.450856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.821 qpair failed and we were unable to recover it.
00:37:44.821 [2024-11-18 12:06:10.451000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.821 [2024-11-18 12:06:10.451037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.821 qpair failed and we were unable to recover it.
00:37:44.821 [2024-11-18 12:06:10.451247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.821 [2024-11-18 12:06:10.451282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.821 qpair failed and we were unable to recover it.
00:37:44.821 [2024-11-18 12:06:10.451409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.821 [2024-11-18 12:06:10.451442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.821 qpair failed and we were unable to recover it.
00:37:44.821 [2024-11-18 12:06:10.451593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.821 [2024-11-18 12:06:10.451627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.821 qpair failed and we were unable to recover it.
00:37:44.821 [2024-11-18 12:06:10.451797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.821 [2024-11-18 12:06:10.451830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.821 qpair failed and we were unable to recover it.
00:37:44.821 [2024-11-18 12:06:10.451970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.821 [2024-11-18 12:06:10.452003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.821 qpair failed and we were unable to recover it.
00:37:44.821 [2024-11-18 12:06:10.452183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.821 [2024-11-18 12:06:10.452220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.821 qpair failed and we were unable to recover it.
00:37:44.821 [2024-11-18 12:06:10.452381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.821 [2024-11-18 12:06:10.452417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.821 qpair failed and we were unable to recover it.
00:37:44.821 [2024-11-18 12:06:10.452539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.821 [2024-11-18 12:06:10.452574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.821 qpair failed and we were unable to recover it.
00:37:44.821 [2024-11-18 12:06:10.452678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.821 [2024-11-18 12:06:10.452713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.821 qpair failed and we were unable to recover it.
00:37:44.821 [2024-11-18 12:06:10.452836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.821 [2024-11-18 12:06:10.452890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.821 qpair failed and we were unable to recover it.
00:37:44.821 [2024-11-18 12:06:10.453065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.821 [2024-11-18 12:06:10.453108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.821 qpair failed and we were unable to recover it.
00:37:44.821 [2024-11-18 12:06:10.453216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.821 [2024-11-18 12:06:10.453250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.821 qpair failed and we were unable to recover it.
00:37:44.821 [2024-11-18 12:06:10.453425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.821 [2024-11-18 12:06:10.453460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.821 qpair failed and we were unable to recover it.
00:37:44.821 [2024-11-18 12:06:10.453601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.821 [2024-11-18 12:06:10.453649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.821 qpair failed and we were unable to recover it.
00:37:44.821 [2024-11-18 12:06:10.453774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.821 [2024-11-18 12:06:10.453811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.821 qpair failed and we were unable to recover it.
00:37:44.821 [2024-11-18 12:06:10.454014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.821 [2024-11-18 12:06:10.454048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.821 qpair failed and we were unable to recover it.
00:37:44.821 [2024-11-18 12:06:10.454206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.821 [2024-11-18 12:06:10.454258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.821 qpair failed and we were unable to recover it.
00:37:44.821 [2024-11-18 12:06:10.454376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.821 [2024-11-18 12:06:10.454414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.821 qpair failed and we were unable to recover it.
00:37:44.821 [2024-11-18 12:06:10.454574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.821 [2024-11-18 12:06:10.454610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.821 qpair failed and we were unable to recover it.
00:37:44.821 [2024-11-18 12:06:10.454761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.821 [2024-11-18 12:06:10.454796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.821 qpair failed and we were unable to recover it.
00:37:44.821 [2024-11-18 12:06:10.454935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.821 [2024-11-18 12:06:10.454972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.821 qpair failed and we were unable to recover it.
00:37:44.821 [2024-11-18 12:06:10.455122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.821 [2024-11-18 12:06:10.455159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.821 qpair failed and we were unable to recover it.
00:37:44.821 [2024-11-18 12:06:10.455319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.821 [2024-11-18 12:06:10.455357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.821 qpair failed and we were unable to recover it.
00:37:44.822 [2024-11-18 12:06:10.455517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.822 [2024-11-18 12:06:10.455551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.822 qpair failed and we were unable to recover it.
00:37:44.822 [2024-11-18 12:06:10.455703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.822 [2024-11-18 12:06:10.455772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.822 qpair failed and we were unable to recover it.
00:37:44.822 [2024-11-18 12:06:10.455985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.822 [2024-11-18 12:06:10.456039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.822 qpair failed and we were unable to recover it.
00:37:44.822 [2024-11-18 12:06:10.456258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.822 [2024-11-18 12:06:10.456297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.822 qpair failed and we were unable to recover it.
00:37:44.822 [2024-11-18 12:06:10.456437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.822 [2024-11-18 12:06:10.456472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.822 qpair failed and we were unable to recover it.
00:37:44.822 [2024-11-18 12:06:10.456617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.822 [2024-11-18 12:06:10.456660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.822 qpair failed and we were unable to recover it.
00:37:44.822 [2024-11-18 12:06:10.456779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.822 [2024-11-18 12:06:10.456817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.822 qpair failed and we were unable to recover it.
00:37:44.822 [2024-11-18 12:06:10.457013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.822 [2024-11-18 12:06:10.457058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.822 qpair failed and we were unable to recover it.
00:37:44.822 [2024-11-18 12:06:10.457196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.822 [2024-11-18 12:06:10.457235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.822 qpair failed and we were unable to recover it.
00:37:44.822 [2024-11-18 12:06:10.457405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.822 [2024-11-18 12:06:10.457439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.822 qpair failed and we were unable to recover it.
00:37:44.822 [2024-11-18 12:06:10.457583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.822 [2024-11-18 12:06:10.457619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.822 qpair failed and we were unable to recover it.
00:37:44.822 [2024-11-18 12:06:10.457773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.822 [2024-11-18 12:06:10.457826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.822 qpair failed and we were unable to recover it.
00:37:44.822 [2024-11-18 12:06:10.457939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.822 [2024-11-18 12:06:10.457972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.822 qpair failed and we were unable to recover it.
00:37:44.822 [2024-11-18 12:06:10.458135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.822 [2024-11-18 12:06:10.458169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.822 qpair failed and we were unable to recover it.
00:37:44.822 [2024-11-18 12:06:10.458273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.822 [2024-11-18 12:06:10.458306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.822 qpair failed and we were unable to recover it.
00:37:44.822 [2024-11-18 12:06:10.458415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.822 [2024-11-18 12:06:10.458449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.822 qpair failed and we were unable to recover it.
00:37:44.822 [2024-11-18 12:06:10.458597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.822 [2024-11-18 12:06:10.458637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.822 qpair failed and we were unable to recover it.
00:37:44.822 [2024-11-18 12:06:10.458815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.822 [2024-11-18 12:06:10.458868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.822 qpair failed and we were unable to recover it.
00:37:44.822 [2024-11-18 12:06:10.459009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.822 [2024-11-18 12:06:10.459049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.822 qpair failed and we were unable to recover it.
00:37:44.822 [2024-11-18 12:06:10.459193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.822 [2024-11-18 12:06:10.459231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.822 qpair failed and we were unable to recover it.
00:37:44.822 [2024-11-18 12:06:10.459393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.822 [2024-11-18 12:06:10.459428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.822 qpair failed and we were unable to recover it.
00:37:44.822 [2024-11-18 12:06:10.459564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.822 [2024-11-18 12:06:10.459608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.822 qpair failed and we were unable to recover it.
00:37:44.822 [2024-11-18 12:06:10.459750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.822 [2024-11-18 12:06:10.459803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.822 qpair failed and we were unable to recover it.
00:37:44.822 [2024-11-18 12:06:10.459944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.822 [2024-11-18 12:06:10.459982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.822 qpair failed and we were unable to recover it.
00:37:44.822 [2024-11-18 12:06:10.460103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.822 [2024-11-18 12:06:10.460140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.822 qpair failed and we were unable to recover it.
00:37:44.822 [2024-11-18 12:06:10.460316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.822 [2024-11-18 12:06:10.460370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.822 qpair failed and we were unable to recover it.
00:37:44.822 [2024-11-18 12:06:10.460549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.822 [2024-11-18 12:06:10.460584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.822 qpair failed and we were unable to recover it.
00:37:44.822 [2024-11-18 12:06:10.460693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.822 [2024-11-18 12:06:10.460727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.822 qpair failed and we were unable to recover it.
00:37:44.822 [2024-11-18 12:06:10.460876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.822 [2024-11-18 12:06:10.460909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.822 qpair failed and we were unable to recover it.
00:37:44.822 [2024-11-18 12:06:10.461047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.822 [2024-11-18 12:06:10.461080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.822 qpair failed and we were unable to recover it.
00:37:44.822 [2024-11-18 12:06:10.461202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.822 [2024-11-18 12:06:10.461251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.822 qpair failed and we were unable to recover it.
00:37:44.822 [2024-11-18 12:06:10.461407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.822 [2024-11-18 12:06:10.461442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.822 qpair failed and we were unable to recover it.
00:37:44.822 [2024-11-18 12:06:10.461569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.822 [2024-11-18 12:06:10.461617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.822 qpair failed and we were unable to recover it.
00:37:44.822 [2024-11-18 12:06:10.461767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.822 [2024-11-18 12:06:10.461807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.822 qpair failed and we were unable to recover it.
00:37:44.822 [2024-11-18 12:06:10.461922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.822 [2024-11-18 12:06:10.461957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.822 qpair failed and we were unable to recover it.
00:37:44.822 [2024-11-18 12:06:10.462082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.822 [2024-11-18 12:06:10.462123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.822 qpair failed and we were unable to recover it.
00:37:44.822 [2024-11-18 12:06:10.462283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.822 [2024-11-18 12:06:10.462359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.822 qpair failed and we were unable to recover it.
00:37:44.822 [2024-11-18 12:06:10.462506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.822 [2024-11-18 12:06:10.462540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.823 qpair failed and we were unable to recover it.
00:37:44.823 [2024-11-18 12:06:10.462671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.823 [2024-11-18 12:06:10.462723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.823 qpair failed and we were unable to recover it.
00:37:44.823 [2024-11-18 12:06:10.462896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.823 [2024-11-18 12:06:10.462946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.823 qpair failed and we were unable to recover it.
00:37:44.823 [2024-11-18 12:06:10.463056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.823 [2024-11-18 12:06:10.463090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.823 qpair failed and we were unable to recover it.
00:37:44.823 [2024-11-18 12:06:10.463221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.823 [2024-11-18 12:06:10.463254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.823 qpair failed and we were unable to recover it.
00:37:44.823 [2024-11-18 12:06:10.463394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.823 [2024-11-18 12:06:10.463427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.823 qpair failed and we were unable to recover it.
00:37:44.823 [2024-11-18 12:06:10.463566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.823 [2024-11-18 12:06:10.463621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.823 qpair failed and we were unable to recover it.
00:37:44.823 [2024-11-18 12:06:10.463768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.823 [2024-11-18 12:06:10.463806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.823 qpair failed and we were unable to recover it.
00:37:44.823 [2024-11-18 12:06:10.463937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.823 [2024-11-18 12:06:10.463971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.823 qpair failed and we were unable to recover it.
00:37:44.823 [2024-11-18 12:06:10.464124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.823 [2024-11-18 12:06:10.464160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.823 qpair failed and we were unable to recover it.
00:37:44.823 [2024-11-18 12:06:10.464328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.823 [2024-11-18 12:06:10.464362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.823 qpair failed and we were unable to recover it.
00:37:44.823 [2024-11-18 12:06:10.464507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.823 [2024-11-18 12:06:10.464543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.823 qpair failed and we were unable to recover it.
00:37:44.823 [2024-11-18 12:06:10.464730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.823 [2024-11-18 12:06:10.464784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.823 qpair failed and we were unable to recover it.
00:37:44.823 [2024-11-18 12:06:10.464930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.823 [2024-11-18 12:06:10.464983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.823 qpair failed and we were unable to recover it.
00:37:44.823 [2024-11-18 12:06:10.465179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.823 [2024-11-18 12:06:10.465224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.823 qpair failed and we were unable to recover it.
00:37:44.823 [2024-11-18 12:06:10.465330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.823 [2024-11-18 12:06:10.465364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.823 qpair failed and we were unable to recover it.
00:37:44.823 [2024-11-18 12:06:10.465532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.823 [2024-11-18 12:06:10.465598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.823 qpair failed and we were unable to recover it.
00:37:44.823 [2024-11-18 12:06:10.465751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.823 [2024-11-18 12:06:10.465805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.823 qpair failed and we were unable to recover it.
00:37:44.823 [2024-11-18 12:06:10.466036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.823 [2024-11-18 12:06:10.466092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.823 qpair failed and we were unable to recover it.
00:37:44.823 [2024-11-18 12:06:10.466256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.823 [2024-11-18 12:06:10.466314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.823 qpair failed and we were unable to recover it.
00:37:44.823 [2024-11-18 12:06:10.466467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.823 [2024-11-18 12:06:10.466516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.823 qpair failed and we were unable to recover it.
00:37:44.823 [2024-11-18 12:06:10.466653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.823 [2024-11-18 12:06:10.466700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.823 qpair failed and we were unable to recover it.
00:37:44.823 [2024-11-18 12:06:10.466840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.823 [2024-11-18 12:06:10.466877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.823 qpair failed and we were unable to recover it.
00:37:44.823 [2024-11-18 12:06:10.467020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.823 [2024-11-18 12:06:10.467058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.823 qpair failed and we were unable to recover it. 00:37:44.823 [2024-11-18 12:06:10.467213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.823 [2024-11-18 12:06:10.467251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.823 qpair failed and we were unable to recover it. 00:37:44.823 [2024-11-18 12:06:10.467436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.823 [2024-11-18 12:06:10.467470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.823 qpair failed and we were unable to recover it. 00:37:44.823 [2024-11-18 12:06:10.467586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.823 [2024-11-18 12:06:10.467620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.823 qpair failed and we were unable to recover it. 00:37:44.823 [2024-11-18 12:06:10.467754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.823 [2024-11-18 12:06:10.467790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.823 qpair failed and we were unable to recover it. 
00:37:44.823 [2024-11-18 12:06:10.467919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.823 [2024-11-18 12:06:10.467970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.823 qpair failed and we were unable to recover it. 00:37:44.823 [2024-11-18 12:06:10.468069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.823 [2024-11-18 12:06:10.468102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.823 qpair failed and we were unable to recover it. 00:37:44.823 [2024-11-18 12:06:10.468256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.823 [2024-11-18 12:06:10.468290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.823 qpair failed and we were unable to recover it. 00:37:44.823 [2024-11-18 12:06:10.468403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.823 [2024-11-18 12:06:10.468439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.823 qpair failed and we were unable to recover it. 00:37:44.823 [2024-11-18 12:06:10.468624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.823 [2024-11-18 12:06:10.468672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.823 qpair failed and we were unable to recover it. 
00:37:44.823 [2024-11-18 12:06:10.468836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.823 [2024-11-18 12:06:10.468875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.823 qpair failed and we were unable to recover it. 00:37:44.823 [2024-11-18 12:06:10.469054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.823 [2024-11-18 12:06:10.469092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.823 qpair failed and we were unable to recover it. 00:37:44.823 [2024-11-18 12:06:10.469329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.823 [2024-11-18 12:06:10.469367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.823 qpair failed and we were unable to recover it. 00:37:44.823 [2024-11-18 12:06:10.469532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.823 [2024-11-18 12:06:10.469577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.823 qpair failed and we were unable to recover it. 00:37:44.823 [2024-11-18 12:06:10.469687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.823 [2024-11-18 12:06:10.469722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.823 qpair failed and we were unable to recover it. 
00:37:44.824 [2024-11-18 12:06:10.469901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.824 [2024-11-18 12:06:10.469954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.824 qpair failed and we were unable to recover it. 00:37:44.824 [2024-11-18 12:06:10.470161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.824 [2024-11-18 12:06:10.470216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.824 qpair failed and we were unable to recover it. 00:37:44.824 [2024-11-18 12:06:10.470335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.824 [2024-11-18 12:06:10.470373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.824 qpair failed and we were unable to recover it. 00:37:44.824 [2024-11-18 12:06:10.470509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.824 [2024-11-18 12:06:10.470546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.824 qpair failed and we were unable to recover it. 00:37:44.824 [2024-11-18 12:06:10.470654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.824 [2024-11-18 12:06:10.470696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.824 qpair failed and we were unable to recover it. 
00:37:44.824 [2024-11-18 12:06:10.470837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.824 [2024-11-18 12:06:10.470872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.824 qpair failed and we were unable to recover it. 00:37:44.824 [2024-11-18 12:06:10.470990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.824 [2024-11-18 12:06:10.471024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.824 qpair failed and we were unable to recover it. 00:37:44.824 [2024-11-18 12:06:10.471153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.824 [2024-11-18 12:06:10.471191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.824 qpair failed and we were unable to recover it. 00:37:44.824 [2024-11-18 12:06:10.471348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.824 [2024-11-18 12:06:10.471391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.824 qpair failed and we were unable to recover it. 00:37:44.824 [2024-11-18 12:06:10.471573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.824 [2024-11-18 12:06:10.471608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.824 qpair failed and we were unable to recover it. 
00:37:44.824 [2024-11-18 12:06:10.471711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.824 [2024-11-18 12:06:10.471744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.824 qpair failed and we were unable to recover it. 00:37:44.824 [2024-11-18 12:06:10.471891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.824 [2024-11-18 12:06:10.471929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.824 qpair failed and we were unable to recover it. 00:37:44.824 [2024-11-18 12:06:10.472040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.824 [2024-11-18 12:06:10.472077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.824 qpair failed and we were unable to recover it. 00:37:44.824 [2024-11-18 12:06:10.472237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.824 [2024-11-18 12:06:10.472270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.824 qpair failed and we were unable to recover it. 00:37:44.824 [2024-11-18 12:06:10.472381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.824 [2024-11-18 12:06:10.472415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.824 qpair failed and we were unable to recover it. 
00:37:44.824 [2024-11-18 12:06:10.472568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.824 [2024-11-18 12:06:10.472602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.824 qpair failed and we were unable to recover it. 00:37:44.824 [2024-11-18 12:06:10.472748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.824 [2024-11-18 12:06:10.472810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.824 qpair failed and we were unable to recover it. 00:37:44.824 [2024-11-18 12:06:10.472938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.824 [2024-11-18 12:06:10.472979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.824 qpair failed and we were unable to recover it. 00:37:44.824 [2024-11-18 12:06:10.473111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.824 [2024-11-18 12:06:10.473149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.824 qpair failed and we were unable to recover it. 00:37:44.824 [2024-11-18 12:06:10.473295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.824 [2024-11-18 12:06:10.473332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.824 qpair failed and we were unable to recover it. 
00:37:44.824 [2024-11-18 12:06:10.473522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.824 [2024-11-18 12:06:10.473566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.824 qpair failed and we were unable to recover it. 00:37:44.824 [2024-11-18 12:06:10.473665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.824 [2024-11-18 12:06:10.473710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.824 qpair failed and we were unable to recover it. 00:37:44.824 [2024-11-18 12:06:10.473901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.824 [2024-11-18 12:06:10.473940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.824 qpair failed and we were unable to recover it. 00:37:44.824 [2024-11-18 12:06:10.474056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.824 [2024-11-18 12:06:10.474094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.824 qpair failed and we were unable to recover it. 00:37:44.824 [2024-11-18 12:06:10.474240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.824 [2024-11-18 12:06:10.474278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.824 qpair failed and we were unable to recover it. 
00:37:44.824 [2024-11-18 12:06:10.474426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.824 [2024-11-18 12:06:10.474463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.824 qpair failed and we were unable to recover it. 00:37:44.824 [2024-11-18 12:06:10.474679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.824 [2024-11-18 12:06:10.474717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.824 qpair failed and we were unable to recover it. 00:37:44.824 [2024-11-18 12:06:10.474924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.824 [2024-11-18 12:06:10.474980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.824 qpair failed and we were unable to recover it. 00:37:44.824 [2024-11-18 12:06:10.475141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.824 [2024-11-18 12:06:10.475199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.824 qpair failed and we were unable to recover it. 00:37:44.824 [2024-11-18 12:06:10.475340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.824 [2024-11-18 12:06:10.475377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.824 qpair failed and we were unable to recover it. 
00:37:44.824 [2024-11-18 12:06:10.475568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.824 [2024-11-18 12:06:10.475616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.824 qpair failed and we were unable to recover it. 00:37:44.824 [2024-11-18 12:06:10.475738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.824 [2024-11-18 12:06:10.475775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.824 qpair failed and we were unable to recover it. 00:37:44.824 [2024-11-18 12:06:10.475934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.824 [2024-11-18 12:06:10.475988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.824 qpair failed and we were unable to recover it. 00:37:44.825 [2024-11-18 12:06:10.476093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.825 [2024-11-18 12:06:10.476128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.825 qpair failed and we were unable to recover it. 00:37:44.825 [2024-11-18 12:06:10.476304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.825 [2024-11-18 12:06:10.476352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.825 qpair failed and we were unable to recover it. 
00:37:44.825 [2024-11-18 12:06:10.476486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.825 [2024-11-18 12:06:10.476542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.825 qpair failed and we were unable to recover it. 00:37:44.825 [2024-11-18 12:06:10.476688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.825 [2024-11-18 12:06:10.476723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.825 qpair failed and we were unable to recover it. 00:37:44.825 [2024-11-18 12:06:10.476898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.825 [2024-11-18 12:06:10.476932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.825 qpair failed and we were unable to recover it. 00:37:44.825 [2024-11-18 12:06:10.477062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.825 [2024-11-18 12:06:10.477095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.825 qpair failed and we were unable to recover it. 00:37:44.825 [2024-11-18 12:06:10.477198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.825 [2024-11-18 12:06:10.477231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.825 qpair failed and we were unable to recover it. 
00:37:44.825 [2024-11-18 12:06:10.477375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.825 [2024-11-18 12:06:10.477412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.825 qpair failed and we were unable to recover it. 00:37:44.825 [2024-11-18 12:06:10.477592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.825 [2024-11-18 12:06:10.477657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.825 qpair failed and we were unable to recover it. 00:37:44.825 [2024-11-18 12:06:10.477820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.825 [2024-11-18 12:06:10.477886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.825 qpair failed and we were unable to recover it. 00:37:44.825 [2024-11-18 12:06:10.478096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.825 [2024-11-18 12:06:10.478157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.825 qpair failed and we were unable to recover it. 00:37:44.825 [2024-11-18 12:06:10.478394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.825 [2024-11-18 12:06:10.478433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.825 qpair failed and we were unable to recover it. 
00:37:44.825 [2024-11-18 12:06:10.478631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.825 [2024-11-18 12:06:10.478666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.825 qpair failed and we were unable to recover it. 00:37:44.825 [2024-11-18 12:06:10.478835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.825 [2024-11-18 12:06:10.478883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.825 qpair failed and we were unable to recover it. 00:37:44.825 [2024-11-18 12:06:10.479151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.825 [2024-11-18 12:06:10.479214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.825 qpair failed and we were unable to recover it. 00:37:44.825 [2024-11-18 12:06:10.479356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.825 [2024-11-18 12:06:10.479389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.825 qpair failed and we were unable to recover it. 00:37:44.825 [2024-11-18 12:06:10.479535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.825 [2024-11-18 12:06:10.479569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.825 qpair failed and we were unable to recover it. 
00:37:44.825 [2024-11-18 12:06:10.479670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.825 [2024-11-18 12:06:10.479709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.825 qpair failed and we were unable to recover it. 00:37:44.825 [2024-11-18 12:06:10.479878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.825 [2024-11-18 12:06:10.479911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.825 qpair failed and we were unable to recover it. 00:37:44.825 [2024-11-18 12:06:10.480066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.825 [2024-11-18 12:06:10.480121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.825 qpair failed and we were unable to recover it. 00:37:44.825 [2024-11-18 12:06:10.480320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.825 [2024-11-18 12:06:10.480359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.825 qpair failed and we were unable to recover it. 00:37:44.825 [2024-11-18 12:06:10.480537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.825 [2024-11-18 12:06:10.480572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.825 qpair failed and we were unable to recover it. 
00:37:44.825 [2024-11-18 12:06:10.480710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.825 [2024-11-18 12:06:10.480743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.825 qpair failed and we were unable to recover it. 00:37:44.825 [2024-11-18 12:06:10.480895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.825 [2024-11-18 12:06:10.480949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.825 qpair failed and we were unable to recover it. 00:37:44.825 [2024-11-18 12:06:10.481102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.825 [2024-11-18 12:06:10.481139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.825 qpair failed and we were unable to recover it. 00:37:44.825 [2024-11-18 12:06:10.481280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.825 [2024-11-18 12:06:10.481331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.825 qpair failed and we were unable to recover it. 00:37:44.825 [2024-11-18 12:06:10.481483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.825 [2024-11-18 12:06:10.481522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.825 qpair failed and we were unable to recover it. 
00:37:44.825 [2024-11-18 12:06:10.481680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.825 [2024-11-18 12:06:10.481714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.825 qpair failed and we were unable to recover it.
00:37:44.825 [2024-11-18 12:06:10.481896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.825 [2024-11-18 12:06:10.481949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.825 qpair failed and we were unable to recover it.
00:37:44.825 [2024-11-18 12:06:10.482115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.825 [2024-11-18 12:06:10.482155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.825 qpair failed and we were unable to recover it.
00:37:44.825 [2024-11-18 12:06:10.482312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.825 [2024-11-18 12:06:10.482351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.825 qpair failed and we were unable to recover it.
00:37:44.825 [2024-11-18 12:06:10.482499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.825 [2024-11-18 12:06:10.482534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.825 qpair failed and we were unable to recover it.
00:37:44.825 [2024-11-18 12:06:10.482674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.825 [2024-11-18 12:06:10.482708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.825 qpair failed and we were unable to recover it.
00:37:44.825 [2024-11-18 12:06:10.482860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.825 [2024-11-18 12:06:10.482895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.825 qpair failed and we were unable to recover it.
00:37:44.825 [2024-11-18 12:06:10.483014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.825 [2024-11-18 12:06:10.483048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.825 qpair failed and we were unable to recover it.
00:37:44.825 [2024-11-18 12:06:10.483293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.825 [2024-11-18 12:06:10.483330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.825 qpair failed and we were unable to recover it.
00:37:44.825 [2024-11-18 12:06:10.483450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.825 [2024-11-18 12:06:10.483504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.825 qpair failed and we were unable to recover it.
00:37:44.825 [2024-11-18 12:06:10.483680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.826 [2024-11-18 12:06:10.483713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.826 qpair failed and we were unable to recover it.
00:37:44.826 [2024-11-18 12:06:10.483850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.826 [2024-11-18 12:06:10.483883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.826 qpair failed and we were unable to recover it.
00:37:44.826 [2024-11-18 12:06:10.484066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.826 [2024-11-18 12:06:10.484122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.826 qpair failed and we were unable to recover it.
00:37:44.826 [2024-11-18 12:06:10.484237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.826 [2024-11-18 12:06:10.484275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.826 qpair failed and we were unable to recover it.
00:37:44.826 [2024-11-18 12:06:10.484459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.826 [2024-11-18 12:06:10.484509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.826 qpair failed and we were unable to recover it.
00:37:44.826 [2024-11-18 12:06:10.484622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.826 [2024-11-18 12:06:10.484660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.826 qpair failed and we were unable to recover it.
00:37:44.826 [2024-11-18 12:06:10.484780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.826 [2024-11-18 12:06:10.484816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.826 qpair failed and we were unable to recover it.
00:37:44.826 [2024-11-18 12:06:10.484987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.826 [2024-11-18 12:06:10.485047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.826 qpair failed and we were unable to recover it.
00:37:44.826 [2024-11-18 12:06:10.485216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.826 [2024-11-18 12:06:10.485272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.826 qpair failed and we were unable to recover it.
00:37:44.826 [2024-11-18 12:06:10.485434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.826 [2024-11-18 12:06:10.485469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.826 qpair failed and we were unable to recover it.
00:37:44.826 [2024-11-18 12:06:10.485588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.826 [2024-11-18 12:06:10.485623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.826 qpair failed and we were unable to recover it.
00:37:44.826 [2024-11-18 12:06:10.485738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.826 [2024-11-18 12:06:10.485780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.826 qpair failed and we were unable to recover it.
00:37:44.826 [2024-11-18 12:06:10.485923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.826 [2024-11-18 12:06:10.485962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.826 qpair failed and we were unable to recover it.
00:37:44.826 [2024-11-18 12:06:10.486128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.826 [2024-11-18 12:06:10.486184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.826 qpair failed and we were unable to recover it.
00:37:44.826 [2024-11-18 12:06:10.486315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.826 [2024-11-18 12:06:10.486352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.826 qpair failed and we were unable to recover it.
00:37:44.826 [2024-11-18 12:06:10.486520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.826 [2024-11-18 12:06:10.486585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.826 qpair failed and we were unable to recover it.
00:37:44.826 [2024-11-18 12:06:10.486734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.826 [2024-11-18 12:06:10.486780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.826 qpair failed and we were unable to recover it.
00:37:44.826 [2024-11-18 12:06:10.486925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.826 [2024-11-18 12:06:10.486959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.826 qpair failed and we were unable to recover it.
00:37:44.826 [2024-11-18 12:06:10.487128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.826 [2024-11-18 12:06:10.487178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.826 qpair failed and we were unable to recover it.
00:37:44.826 [2024-11-18 12:06:10.487410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.826 [2024-11-18 12:06:10.487448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.826 qpair failed and we were unable to recover it.
00:37:44.826 [2024-11-18 12:06:10.487650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.826 [2024-11-18 12:06:10.487684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.826 qpair failed and we were unable to recover it.
00:37:44.826 [2024-11-18 12:06:10.487868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.826 [2024-11-18 12:06:10.487907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.826 qpair failed and we were unable to recover it.
00:37:44.826 [2024-11-18 12:06:10.488125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.826 [2024-11-18 12:06:10.488183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.826 qpair failed and we were unable to recover it.
00:37:44.826 [2024-11-18 12:06:10.488339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.826 [2024-11-18 12:06:10.488377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.826 qpair failed and we were unable to recover it.
00:37:44.826 [2024-11-18 12:06:10.488534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.826 [2024-11-18 12:06:10.488585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.826 qpair failed and we were unable to recover it.
00:37:44.826 [2024-11-18 12:06:10.488724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.826 [2024-11-18 12:06:10.488757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.826 qpair failed and we were unable to recover it.
00:37:44.826 [2024-11-18 12:06:10.488879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.826 [2024-11-18 12:06:10.488912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.826 qpair failed and we were unable to recover it.
00:37:44.826 [2024-11-18 12:06:10.489108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.826 [2024-11-18 12:06:10.489145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.826 qpair failed and we were unable to recover it.
00:37:44.826 [2024-11-18 12:06:10.489322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.826 [2024-11-18 12:06:10.489359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.826 qpair failed and we were unable to recover it.
00:37:44.826 [2024-11-18 12:06:10.489484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.826 [2024-11-18 12:06:10.489546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.826 qpair failed and we were unable to recover it.
00:37:44.826 [2024-11-18 12:06:10.489649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.826 [2024-11-18 12:06:10.489682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.826 qpair failed and we were unable to recover it.
00:37:44.826 [2024-11-18 12:06:10.489852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.826 [2024-11-18 12:06:10.489890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.826 qpair failed and we were unable to recover it.
00:37:44.826 [2024-11-18 12:06:10.490061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.826 [2024-11-18 12:06:10.490119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.826 qpair failed and we were unable to recover it.
00:37:44.826 [2024-11-18 12:06:10.490315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.826 [2024-11-18 12:06:10.490355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.826 qpair failed and we were unable to recover it.
00:37:44.826 [2024-11-18 12:06:10.490507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.826 [2024-11-18 12:06:10.490562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.826 qpair failed and we were unable to recover it.
00:37:44.826 [2024-11-18 12:06:10.490725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.826 [2024-11-18 12:06:10.490791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.826 qpair failed and we were unable to recover it.
00:37:44.826 [2024-11-18 12:06:10.490944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.826 [2024-11-18 12:06:10.491001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.826 qpair failed and we were unable to recover it.
00:37:44.826 [2024-11-18 12:06:10.491215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.826 [2024-11-18 12:06:10.491270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.826 qpair failed and we were unable to recover it.
00:37:44.827 [2024-11-18 12:06:10.491398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.827 [2024-11-18 12:06:10.491448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.827 qpair failed and we were unable to recover it.
00:37:44.827 [2024-11-18 12:06:10.491570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.827 [2024-11-18 12:06:10.491604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.827 qpair failed and we were unable to recover it.
00:37:44.827 [2024-11-18 12:06:10.491774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.827 [2024-11-18 12:06:10.491807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.827 qpair failed and we were unable to recover it.
00:37:44.827 [2024-11-18 12:06:10.491918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.827 [2024-11-18 12:06:10.491951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.827 qpair failed and we were unable to recover it.
00:37:44.827 [2024-11-18 12:06:10.492159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.827 [2024-11-18 12:06:10.492193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.827 qpair failed and we were unable to recover it.
00:37:44.827 [2024-11-18 12:06:10.492323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.827 [2024-11-18 12:06:10.492356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.827 qpair failed and we were unable to recover it.
00:37:44.827 [2024-11-18 12:06:10.492469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.827 [2024-11-18 12:06:10.492509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.827 qpair failed and we were unable to recover it.
00:37:44.827 [2024-11-18 12:06:10.492658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.827 [2024-11-18 12:06:10.492716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.827 qpair failed and we were unable to recover it.
00:37:44.827 [2024-11-18 12:06:10.492873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.827 [2024-11-18 12:06:10.492910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.827 qpair failed and we were unable to recover it.
00:37:44.827 [2024-11-18 12:06:10.493024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.827 [2024-11-18 12:06:10.493059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.827 qpair failed and we were unable to recover it.
00:37:44.827 [2024-11-18 12:06:10.493246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.827 [2024-11-18 12:06:10.493281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.827 qpair failed and we were unable to recover it.
00:37:44.827 [2024-11-18 12:06:10.493382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.827 [2024-11-18 12:06:10.493415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.827 qpair failed and we were unable to recover it.
00:37:44.827 [2024-11-18 12:06:10.493632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.827 [2024-11-18 12:06:10.493679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.827 qpair failed and we were unable to recover it.
00:37:44.827 [2024-11-18 12:06:10.493792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.827 [2024-11-18 12:06:10.493827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.827 qpair failed and we were unable to recover it.
00:37:44.827 [2024-11-18 12:06:10.494001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.827 [2024-11-18 12:06:10.494034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.827 qpair failed and we were unable to recover it.
00:37:44.827 [2024-11-18 12:06:10.494191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.827 [2024-11-18 12:06:10.494224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.827 qpair failed and we were unable to recover it.
00:37:44.827 [2024-11-18 12:06:10.494356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.827 [2024-11-18 12:06:10.494390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.827 qpair failed and we were unable to recover it.
00:37:44.827 [2024-11-18 12:06:10.494566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.827 [2024-11-18 12:06:10.494600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.827 qpair failed and we were unable to recover it.
00:37:44.827 [2024-11-18 12:06:10.494730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.827 [2024-11-18 12:06:10.494783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.827 qpair failed and we were unable to recover it.
00:37:44.827 [2024-11-18 12:06:10.494957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.827 [2024-11-18 12:06:10.495014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.827 qpair failed and we were unable to recover it.
00:37:44.827 [2024-11-18 12:06:10.495150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.827 [2024-11-18 12:06:10.495184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.827 qpair failed and we were unable to recover it.
00:37:44.827 [2024-11-18 12:06:10.495366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.827 [2024-11-18 12:06:10.495404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.827 qpair failed and we were unable to recover it.
00:37:44.827 [2024-11-18 12:06:10.495612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.827 [2024-11-18 12:06:10.495661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.827 qpair failed and we were unable to recover it.
00:37:44.827 [2024-11-18 12:06:10.495778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.827 [2024-11-18 12:06:10.495814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.827 qpair failed and we were unable to recover it.
00:37:44.827 [2024-11-18 12:06:10.495950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.827 [2024-11-18 12:06:10.495984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.827 qpair failed and we were unable to recover it.
00:37:44.827 [2024-11-18 12:06:10.496117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.827 [2024-11-18 12:06:10.496151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.827 qpair failed and we were unable to recover it.
00:37:44.827 [2024-11-18 12:06:10.496295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.827 [2024-11-18 12:06:10.496329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.827 qpair failed and we were unable to recover it.
00:37:44.827 [2024-11-18 12:06:10.496486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.827 [2024-11-18 12:06:10.496576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.827 qpair failed and we were unable to recover it.
00:37:44.827 [2024-11-18 12:06:10.496720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.827 [2024-11-18 12:06:10.496754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.827 qpair failed and we were unable to recover it.
00:37:44.827 [2024-11-18 12:06:10.496870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.827 [2024-11-18 12:06:10.496904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.827 qpair failed and we were unable to recover it.
00:37:44.827 [2024-11-18 12:06:10.497018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.827 [2024-11-18 12:06:10.497052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.827 qpair failed and we were unable to recover it.
00:37:44.827 [2024-11-18 12:06:10.497252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.827 [2024-11-18 12:06:10.497285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.827 qpair failed and we were unable to recover it.
00:37:44.827 [2024-11-18 12:06:10.497418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.827 [2024-11-18 12:06:10.497451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.827 qpair failed and we were unable to recover it.
00:37:44.827 [2024-11-18 12:06:10.497621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.827 [2024-11-18 12:06:10.497657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.827 qpair failed and we were unable to recover it.
00:37:44.827 [2024-11-18 12:06:10.497777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.827 [2024-11-18 12:06:10.497834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.827 qpair failed and we were unable to recover it.
00:37:44.827 [2024-11-18 12:06:10.497997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.827 [2024-11-18 12:06:10.498033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.827 qpair failed and we were unable to recover it.
00:37:44.828 [2024-11-18 12:06:10.498171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.828 [2024-11-18 12:06:10.498205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.828 qpair failed and we were unable to recover it.
00:37:44.828 [2024-11-18 12:06:10.498347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.828 [2024-11-18 12:06:10.498380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.828 qpair failed and we were unable to recover it.
00:37:44.828 [2024-11-18 12:06:10.498488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.828 [2024-11-18 12:06:10.498530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.828 qpair failed and we were unable to recover it.
00:37:44.828 [2024-11-18 12:06:10.498665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.828 [2024-11-18 12:06:10.498698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.828 qpair failed and we were unable to recover it.
00:37:44.828 [2024-11-18 12:06:10.498817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.828 [2024-11-18 12:06:10.498853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.828 qpair failed and we were unable to recover it.
00:37:44.828 [2024-11-18 12:06:10.498987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.828 [2024-11-18 12:06:10.499021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.828 qpair failed and we were unable to recover it.
00:37:44.828 [2024-11-18 12:06:10.499145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.828 [2024-11-18 12:06:10.499179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.828 qpair failed and we were unable to recover it.
00:37:44.828 [2024-11-18 12:06:10.499311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.828 [2024-11-18 12:06:10.499346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.828 qpair failed and we were unable to recover it.
00:37:44.828 [2024-11-18 12:06:10.499509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.828 [2024-11-18 12:06:10.499544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.828 qpair failed and we were unable to recover it.
00:37:44.828 [2024-11-18 12:06:10.499675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.828 [2024-11-18 12:06:10.499709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.828 qpair failed and we were unable to recover it.
00:37:44.828 [2024-11-18 12:06:10.499824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.828 [2024-11-18 12:06:10.499859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.828 qpair failed and we were unable to recover it.
00:37:44.828 [2024-11-18 12:06:10.500025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.828 [2024-11-18 12:06:10.500064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.828 qpair failed and we were unable to recover it.
00:37:44.828 [2024-11-18 12:06:10.500219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.828 [2024-11-18 12:06:10.500270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.828 qpair failed and we were unable to recover it.
00:37:44.828 [2024-11-18 12:06:10.500378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.828 [2024-11-18 12:06:10.500412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.828 qpair failed and we were unable to recover it.
00:37:44.828 [2024-11-18 12:06:10.500546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.828 [2024-11-18 12:06:10.500580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.828 qpair failed and we were unable to recover it.
00:37:44.828 [2024-11-18 12:06:10.500748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.828 [2024-11-18 12:06:10.500781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.828 qpair failed and we were unable to recover it.
00:37:44.828 [2024-11-18 12:06:10.500890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.828 [2024-11-18 12:06:10.500924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.828 qpair failed and we were unable to recover it.
00:37:44.828 [2024-11-18 12:06:10.501064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.828 [2024-11-18 12:06:10.501098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.828 qpair failed and we were unable to recover it.
00:37:44.828 [2024-11-18 12:06:10.501208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.828 [2024-11-18 12:06:10.501242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.828 qpair failed and we were unable to recover it.
00:37:44.828 [2024-11-18 12:06:10.501401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.828 [2024-11-18 12:06:10.501438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.828 qpair failed and we were unable to recover it.
00:37:44.828 [2024-11-18 12:06:10.501578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.828 [2024-11-18 12:06:10.501612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.828 qpair failed and we were unable to recover it.
00:37:44.828 [2024-11-18 12:06:10.501776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.828 [2024-11-18 12:06:10.501811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.828 qpair failed and we were unable to recover it.
00:37:44.828 [2024-11-18 12:06:10.501966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.828 [2024-11-18 12:06:10.502003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.828 qpair failed and we were unable to recover it.
00:37:44.828 [2024-11-18 12:06:10.502162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.828 [2024-11-18 12:06:10.502196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.828 qpair failed and we were unable to recover it.
00:37:44.828 [2024-11-18 12:06:10.502311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.828 [2024-11-18 12:06:10.502362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.828 qpair failed and we were unable to recover it.
00:37:44.828 [2024-11-18 12:06:10.502484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.828 [2024-11-18 12:06:10.502527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.828 qpair failed and we were unable to recover it.
00:37:44.828 [2024-11-18 12:06:10.502653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.828 [2024-11-18 12:06:10.502687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.828 qpair failed and we were unable to recover it.
00:37:44.828 [2024-11-18 12:06:10.502795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.828 [2024-11-18 12:06:10.502829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.828 qpair failed and we were unable to recover it.
00:37:44.828 [2024-11-18 12:06:10.502983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.828 [2024-11-18 12:06:10.503021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.828 qpair failed and we were unable to recover it.
00:37:44.828 [2024-11-18 12:06:10.503171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.828 [2024-11-18 12:06:10.503204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.828 qpair failed and we were unable to recover it.
00:37:44.828 [2024-11-18 12:06:10.503313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.828 [2024-11-18 12:06:10.503347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.828 qpair failed and we were unable to recover it.
00:37:44.828 [2024-11-18 12:06:10.503487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.828 [2024-11-18 12:06:10.503537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.828 qpair failed and we were unable to recover it. 00:37:44.828 [2024-11-18 12:06:10.503674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.828 [2024-11-18 12:06:10.503708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.828 qpair failed and we were unable to recover it. 00:37:44.828 [2024-11-18 12:06:10.503835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.828 [2024-11-18 12:06:10.503868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.828 qpair failed and we were unable to recover it. 00:37:44.828 [2024-11-18 12:06:10.503977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.828 [2024-11-18 12:06:10.504011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.828 qpair failed and we were unable to recover it. 00:37:44.828 [2024-11-18 12:06:10.504175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.828 [2024-11-18 12:06:10.504209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.828 qpair failed and we were unable to recover it. 
00:37:44.828 [2024-11-18 12:06:10.504317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.828 [2024-11-18 12:06:10.504368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.828 qpair failed and we were unable to recover it. 00:37:44.828 [2024-11-18 12:06:10.504533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.829 [2024-11-18 12:06:10.504567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.829 qpair failed and we were unable to recover it. 00:37:44.829 [2024-11-18 12:06:10.504679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.829 [2024-11-18 12:06:10.504713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.829 qpair failed and we were unable to recover it. 00:37:44.829 [2024-11-18 12:06:10.504891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.829 [2024-11-18 12:06:10.504929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.829 qpair failed and we were unable to recover it. 00:37:44.829 [2024-11-18 12:06:10.505099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.829 [2024-11-18 12:06:10.505136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.829 qpair failed and we were unable to recover it. 
00:37:44.829 [2024-11-18 12:06:10.505271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.829 [2024-11-18 12:06:10.505305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.829 qpair failed and we were unable to recover it. 00:37:44.829 [2024-11-18 12:06:10.505413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.829 [2024-11-18 12:06:10.505452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.829 qpair failed and we were unable to recover it. 00:37:44.829 [2024-11-18 12:06:10.505630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.829 [2024-11-18 12:06:10.505664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.829 qpair failed and we were unable to recover it. 00:37:44.829 [2024-11-18 12:06:10.505789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.829 [2024-11-18 12:06:10.505823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.829 qpair failed and we were unable to recover it. 00:37:44.829 [2024-11-18 12:06:10.505951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.829 [2024-11-18 12:06:10.505984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.829 qpair failed and we were unable to recover it. 
00:37:44.829 [2024-11-18 12:06:10.506090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.829 [2024-11-18 12:06:10.506123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.829 qpair failed and we were unable to recover it. 00:37:44.829 [2024-11-18 12:06:10.506262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.829 [2024-11-18 12:06:10.506296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.829 qpair failed and we were unable to recover it. 00:37:44.829 [2024-11-18 12:06:10.506439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.829 [2024-11-18 12:06:10.506487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.829 qpair failed and we were unable to recover it. 00:37:44.829 [2024-11-18 12:06:10.506636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.829 [2024-11-18 12:06:10.506675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.829 qpair failed and we were unable to recover it. 00:37:44.829 [2024-11-18 12:06:10.506806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.829 [2024-11-18 12:06:10.506842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.829 qpair failed and we were unable to recover it. 
00:37:44.829 [2024-11-18 12:06:10.506974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.829 [2024-11-18 12:06:10.507017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.829 qpair failed and we were unable to recover it. 00:37:44.829 [2024-11-18 12:06:10.507236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.829 [2024-11-18 12:06:10.507292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.829 qpair failed and we were unable to recover it. 00:37:44.829 [2024-11-18 12:06:10.507453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.829 [2024-11-18 12:06:10.507487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.829 qpair failed and we were unable to recover it. 00:37:44.829 [2024-11-18 12:06:10.507653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.829 [2024-11-18 12:06:10.507689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.829 qpair failed and we were unable to recover it. 00:37:44.829 [2024-11-18 12:06:10.507793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.829 [2024-11-18 12:06:10.507827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.829 qpair failed and we were unable to recover it. 
00:37:44.829 [2024-11-18 12:06:10.507984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.829 [2024-11-18 12:06:10.508017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.829 qpair failed and we were unable to recover it. 00:37:44.829 [2024-11-18 12:06:10.508150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.829 [2024-11-18 12:06:10.508211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.829 qpair failed and we were unable to recover it. 00:37:44.829 [2024-11-18 12:06:10.508366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.829 [2024-11-18 12:06:10.508416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.829 qpair failed and we were unable to recover it. 00:37:44.829 [2024-11-18 12:06:10.508581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.829 [2024-11-18 12:06:10.508615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.829 qpair failed and we were unable to recover it. 00:37:44.829 [2024-11-18 12:06:10.508730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.829 [2024-11-18 12:06:10.508764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.829 qpair failed and we were unable to recover it. 
00:37:44.829 [2024-11-18 12:06:10.508923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.829 [2024-11-18 12:06:10.508960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.829 qpair failed and we were unable to recover it. 00:37:44.829 [2024-11-18 12:06:10.509084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.829 [2024-11-18 12:06:10.509117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.829 qpair failed and we were unable to recover it. 00:37:44.829 [2024-11-18 12:06:10.509249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.829 [2024-11-18 12:06:10.509282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.829 qpair failed and we were unable to recover it. 00:37:44.829 [2024-11-18 12:06:10.509420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.829 [2024-11-18 12:06:10.509471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.829 qpair failed and we were unable to recover it. 00:37:44.829 [2024-11-18 12:06:10.509634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.829 [2024-11-18 12:06:10.509668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.829 qpair failed and we were unable to recover it. 
00:37:44.829 [2024-11-18 12:06:10.509779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.829 [2024-11-18 12:06:10.509829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.829 qpair failed and we were unable to recover it. 00:37:44.829 [2024-11-18 12:06:10.509980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.829 [2024-11-18 12:06:10.510017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.829 qpair failed and we were unable to recover it. 00:37:44.829 [2024-11-18 12:06:10.510177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.829 [2024-11-18 12:06:10.510210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.829 qpair failed and we were unable to recover it. 00:37:44.830 [2024-11-18 12:06:10.510350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.830 [2024-11-18 12:06:10.510385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.830 qpair failed and we were unable to recover it. 00:37:44.830 [2024-11-18 12:06:10.510577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.830 [2024-11-18 12:06:10.510612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.830 qpair failed and we were unable to recover it. 
00:37:44.830 [2024-11-18 12:06:10.510769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.830 [2024-11-18 12:06:10.510810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.830 qpair failed and we were unable to recover it. 00:37:44.830 [2024-11-18 12:06:10.510946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.830 [2024-11-18 12:06:10.510979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.830 qpair failed and we were unable to recover it. 00:37:44.830 [2024-11-18 12:06:10.511096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.830 [2024-11-18 12:06:10.511130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.830 qpair failed and we were unable to recover it. 00:37:44.830 [2024-11-18 12:06:10.511298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.830 [2024-11-18 12:06:10.511332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.830 qpair failed and we were unable to recover it. 00:37:44.830 [2024-11-18 12:06:10.511498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.830 [2024-11-18 12:06:10.511532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.830 qpair failed and we were unable to recover it. 
00:37:44.830 [2024-11-18 12:06:10.511630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.830 [2024-11-18 12:06:10.511663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.830 qpair failed and we were unable to recover it. 00:37:44.830 [2024-11-18 12:06:10.511769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.830 [2024-11-18 12:06:10.511803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.830 qpair failed and we were unable to recover it. 00:37:44.830 [2024-11-18 12:06:10.511950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.830 [2024-11-18 12:06:10.511984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.830 qpair failed and we were unable to recover it. 00:37:44.830 [2024-11-18 12:06:10.512115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.830 [2024-11-18 12:06:10.512152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.830 qpair failed and we were unable to recover it. 00:37:44.830 [2024-11-18 12:06:10.512306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.830 [2024-11-18 12:06:10.512340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.830 qpair failed and we were unable to recover it. 
00:37:44.830 [2024-11-18 12:06:10.512448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.830 [2024-11-18 12:06:10.512481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.830 qpair failed and we were unable to recover it. 00:37:44.830 [2024-11-18 12:06:10.512622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.830 [2024-11-18 12:06:10.512656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.830 qpair failed and we were unable to recover it. 00:37:44.830 [2024-11-18 12:06:10.512755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.830 [2024-11-18 12:06:10.512789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.830 qpair failed and we were unable to recover it. 00:37:44.830 [2024-11-18 12:06:10.512919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.830 [2024-11-18 12:06:10.512952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.830 qpair failed and we were unable to recover it. 00:37:44.830 [2024-11-18 12:06:10.513125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.830 [2024-11-18 12:06:10.513158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.830 qpair failed and we were unable to recover it. 
00:37:44.830 [2024-11-18 12:06:10.513290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.830 [2024-11-18 12:06:10.513324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.830 qpair failed and we were unable to recover it. 00:37:44.830 [2024-11-18 12:06:10.513426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.830 [2024-11-18 12:06:10.513459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.830 qpair failed and we were unable to recover it. 00:37:44.830 [2024-11-18 12:06:10.513564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.830 [2024-11-18 12:06:10.513598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.830 qpair failed and we were unable to recover it. 00:37:44.830 [2024-11-18 12:06:10.513719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.830 [2024-11-18 12:06:10.513753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.830 qpair failed and we were unable to recover it. 00:37:44.830 [2024-11-18 12:06:10.513856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.830 [2024-11-18 12:06:10.513889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.830 qpair failed and we were unable to recover it. 
00:37:44.830 [2024-11-18 12:06:10.514057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.830 [2024-11-18 12:06:10.514112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.830 qpair failed and we were unable to recover it. 00:37:44.830 [2024-11-18 12:06:10.514298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.830 [2024-11-18 12:06:10.514331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.830 qpair failed and we were unable to recover it. 00:37:44.830 [2024-11-18 12:06:10.514442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.830 [2024-11-18 12:06:10.514482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.830 qpair failed and we were unable to recover it. 00:37:44.830 [2024-11-18 12:06:10.514631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.830 [2024-11-18 12:06:10.514664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.830 qpair failed and we were unable to recover it. 00:37:44.830 [2024-11-18 12:06:10.514793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.830 [2024-11-18 12:06:10.514827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.830 qpair failed and we were unable to recover it. 
00:37:44.830 [2024-11-18 12:06:10.514936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.830 [2024-11-18 12:06:10.514969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.830 qpair failed and we were unable to recover it. 00:37:44.830 [2024-11-18 12:06:10.515081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.830 [2024-11-18 12:06:10.515114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.830 qpair failed and we were unable to recover it. 00:37:44.830 [2024-11-18 12:06:10.515269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.830 [2024-11-18 12:06:10.515316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.830 qpair failed and we were unable to recover it. 00:37:44.830 [2024-11-18 12:06:10.515480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.830 [2024-11-18 12:06:10.515548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.830 qpair failed and we were unable to recover it. 00:37:44.830 [2024-11-18 12:06:10.515683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.830 [2024-11-18 12:06:10.515719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.830 qpair failed and we were unable to recover it. 
00:37:44.830 [2024-11-18 12:06:10.515859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.830 [2024-11-18 12:06:10.515894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.830 qpair failed and we were unable to recover it. 00:37:44.830 [2024-11-18 12:06:10.516099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.830 [2024-11-18 12:06:10.516152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.830 qpair failed and we were unable to recover it. 00:37:44.830 [2024-11-18 12:06:10.516323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.830 [2024-11-18 12:06:10.516363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.830 qpair failed and we were unable to recover it. 00:37:44.830 [2024-11-18 12:06:10.516512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.830 [2024-11-18 12:06:10.516567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.830 qpair failed and we were unable to recover it. 00:37:44.830 [2024-11-18 12:06:10.516704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.830 [2024-11-18 12:06:10.516737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.830 qpair failed and we were unable to recover it. 
00:37:44.830 [2024-11-18 12:06:10.516895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.831 [2024-11-18 12:06:10.516932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.831 qpair failed and we were unable to recover it. 00:37:44.831 [2024-11-18 12:06:10.517063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.831 [2024-11-18 12:06:10.517113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.831 qpair failed and we were unable to recover it. 00:37:44.831 [2024-11-18 12:06:10.517257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.831 [2024-11-18 12:06:10.517294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.831 qpair failed and we were unable to recover it. 00:37:44.831 [2024-11-18 12:06:10.517409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.831 [2024-11-18 12:06:10.517446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.831 qpair failed and we were unable to recover it. 00:37:44.831 [2024-11-18 12:06:10.517633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.831 [2024-11-18 12:06:10.517667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.831 qpair failed and we were unable to recover it. 
00:37:44.831 [2024-11-18 12:06:10.517806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.831 [2024-11-18 12:06:10.517842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.831 qpair failed and we were unable to recover it. 00:37:44.831 [2024-11-18 12:06:10.518057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.831 [2024-11-18 12:06:10.518091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.831 qpair failed and we were unable to recover it. 00:37:44.831 [2024-11-18 12:06:10.518240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.831 [2024-11-18 12:06:10.518310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.831 qpair failed and we were unable to recover it. 00:37:44.831 [2024-11-18 12:06:10.518480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.831 [2024-11-18 12:06:10.518525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.831 qpair failed and we were unable to recover it. 00:37:44.831 [2024-11-18 12:06:10.518648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.831 [2024-11-18 12:06:10.518700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.831 qpair failed and we were unable to recover it. 
00:37:44.831 [2024-11-18 12:06:10.518889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.831 [2024-11-18 12:06:10.518946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.831 qpair failed and we were unable to recover it.
00:37:44.831 [2024-11-18 12:06:10.519109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.831 [2024-11-18 12:06:10.519164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.831 qpair failed and we were unable to recover it.
00:37:44.831 [2024-11-18 12:06:10.519307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.831 [2024-11-18 12:06:10.519344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.831 qpair failed and we were unable to recover it.
00:37:44.831 [2024-11-18 12:06:10.519464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.831 [2024-11-18 12:06:10.519506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.831 qpair failed and we were unable to recover it.
00:37:44.831 [2024-11-18 12:06:10.519622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.831 [2024-11-18 12:06:10.519657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.831 qpair failed and we were unable to recover it.
00:37:44.831 [2024-11-18 12:06:10.519803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.831 [2024-11-18 12:06:10.519853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.831 qpair failed and we were unable to recover it.
00:37:44.831 [2024-11-18 12:06:10.519993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.831 [2024-11-18 12:06:10.520026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.831 qpair failed and we were unable to recover it.
00:37:44.831 [2024-11-18 12:06:10.520215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.831 [2024-11-18 12:06:10.520282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.831 qpair failed and we were unable to recover it.
00:37:44.831 [2024-11-18 12:06:10.520408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.831 [2024-11-18 12:06:10.520448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.831 qpair failed and we were unable to recover it.
00:37:44.831 [2024-11-18 12:06:10.520590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.831 [2024-11-18 12:06:10.520625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.831 qpair failed and we were unable to recover it.
00:37:44.831 [2024-11-18 12:06:10.520749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.831 [2024-11-18 12:06:10.520788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.831 qpair failed and we were unable to recover it.
00:37:44.831 [2024-11-18 12:06:10.520966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.831 [2024-11-18 12:06:10.521004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.831 qpair failed and we were unable to recover it.
00:37:44.831 [2024-11-18 12:06:10.521179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.831 [2024-11-18 12:06:10.521232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.831 qpair failed and we were unable to recover it.
00:37:44.831 [2024-11-18 12:06:10.521396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.831 [2024-11-18 12:06:10.521431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.831 qpair failed and we were unable to recover it.
00:37:44.831 [2024-11-18 12:06:10.521583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.831 [2024-11-18 12:06:10.521623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.831 qpair failed and we were unable to recover it.
00:37:44.831 [2024-11-18 12:06:10.521765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.831 [2024-11-18 12:06:10.521823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.831 qpair failed and we were unable to recover it.
00:37:44.831 [2024-11-18 12:06:10.521954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.831 [2024-11-18 12:06:10.522010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.831 qpair failed and we were unable to recover it.
00:37:44.831 [2024-11-18 12:06:10.522142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.831 [2024-11-18 12:06:10.522176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.831 qpair failed and we were unable to recover it.
00:37:44.831 [2024-11-18 12:06:10.522313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.831 [2024-11-18 12:06:10.522348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.831 qpair failed and we were unable to recover it.
00:37:44.831 [2024-11-18 12:06:10.522507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.831 [2024-11-18 12:06:10.522556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.831 qpair failed and we were unable to recover it.
00:37:44.831 [2024-11-18 12:06:10.522682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.831 [2024-11-18 12:06:10.522719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.831 qpair failed and we were unable to recover it.
00:37:44.831 [2024-11-18 12:06:10.522891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.831 [2024-11-18 12:06:10.522925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.831 qpair failed and we were unable to recover it.
00:37:44.831 [2024-11-18 12:06:10.523065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.831 [2024-11-18 12:06:10.523099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.831 qpair failed and we were unable to recover it.
00:37:44.831 [2024-11-18 12:06:10.523225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.831 [2024-11-18 12:06:10.523259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.831 qpair failed and we were unable to recover it.
00:37:44.831 [2024-11-18 12:06:10.523416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.831 [2024-11-18 12:06:10.523466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.831 qpair failed and we were unable to recover it.
00:37:44.831 [2024-11-18 12:06:10.523601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.831 [2024-11-18 12:06:10.523648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.831 qpair failed and we were unable to recover it.
00:37:44.831 [2024-11-18 12:06:10.523790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.831 [2024-11-18 12:06:10.523845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.831 qpair failed and we were unable to recover it.
00:37:44.831 [2024-11-18 12:06:10.524006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.831 [2024-11-18 12:06:10.524045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.832 qpair failed and we were unable to recover it.
00:37:44.832 [2024-11-18 12:06:10.524166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.832 [2024-11-18 12:06:10.524205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.832 qpair failed and we were unable to recover it.
00:37:44.832 [2024-11-18 12:06:10.524366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.832 [2024-11-18 12:06:10.524400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.832 qpair failed and we were unable to recover it.
00:37:44.832 [2024-11-18 12:06:10.524544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.832 [2024-11-18 12:06:10.524580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.832 qpair failed and we were unable to recover it.
00:37:44.832 [2024-11-18 12:06:10.524687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.832 [2024-11-18 12:06:10.524740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.832 qpair failed and we were unable to recover it.
00:37:44.832 [2024-11-18 12:06:10.524861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.832 [2024-11-18 12:06:10.524898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.832 qpair failed and we were unable to recover it.
00:37:44.832 [2024-11-18 12:06:10.525018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.832 [2024-11-18 12:06:10.525055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.832 qpair failed and we were unable to recover it.
00:37:44.832 [2024-11-18 12:06:10.525266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.832 [2024-11-18 12:06:10.525299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.832 qpair failed and we were unable to recover it.
00:37:44.832 [2024-11-18 12:06:10.525465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.832 [2024-11-18 12:06:10.525524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.832 qpair failed and we were unable to recover it.
00:37:44.832 [2024-11-18 12:06:10.525649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.832 [2024-11-18 12:06:10.525683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.832 qpair failed and we were unable to recover it.
00:37:44.832 [2024-11-18 12:06:10.525872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.832 [2024-11-18 12:06:10.525933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.832 qpair failed and we were unable to recover it.
00:37:44.832 [2024-11-18 12:06:10.526093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.832 [2024-11-18 12:06:10.526146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.832 qpair failed and we were unable to recover it.
00:37:44.832 [2024-11-18 12:06:10.526277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.832 [2024-11-18 12:06:10.526311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.832 qpair failed and we were unable to recover it.
00:37:44.832 [2024-11-18 12:06:10.526422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.832 [2024-11-18 12:06:10.526456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.832 qpair failed and we were unable to recover it.
00:37:44.832 [2024-11-18 12:06:10.526618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.832 [2024-11-18 12:06:10.526669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.832 qpair failed and we were unable to recover it.
00:37:44.832 [2024-11-18 12:06:10.526847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.832 [2024-11-18 12:06:10.526882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.832 qpair failed and we were unable to recover it.
00:37:44.832 [2024-11-18 12:06:10.527028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.832 [2024-11-18 12:06:10.527061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.832 qpair failed and we were unable to recover it.
00:37:44.832 [2024-11-18 12:06:10.527166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.832 [2024-11-18 12:06:10.527217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.832 qpair failed and we were unable to recover it.
00:37:44.832 [2024-11-18 12:06:10.527351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.832 [2024-11-18 12:06:10.527385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.832 qpair failed and we were unable to recover it.
00:37:44.832 [2024-11-18 12:06:10.527518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.832 [2024-11-18 12:06:10.527566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.832 qpair failed and we were unable to recover it.
00:37:44.832 [2024-11-18 12:06:10.527733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.832 [2024-11-18 12:06:10.527785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.832 qpair failed and we were unable to recover it.
00:37:44.832 [2024-11-18 12:06:10.527958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.832 [2024-11-18 12:06:10.528018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.832 qpair failed and we were unable to recover it.
00:37:44.832 [2024-11-18 12:06:10.528161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.832 [2024-11-18 12:06:10.528199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.832 qpair failed and we were unable to recover it.
00:37:44.832 [2024-11-18 12:06:10.528377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.832 [2024-11-18 12:06:10.528415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.832 qpair failed and we were unable to recover it.
00:37:44.832 [2024-11-18 12:06:10.528576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.832 [2024-11-18 12:06:10.528610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.832 qpair failed and we were unable to recover it.
00:37:44.832 [2024-11-18 12:06:10.528788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.832 [2024-11-18 12:06:10.528827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.832 qpair failed and we were unable to recover it.
00:37:44.832 [2024-11-18 12:06:10.528967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.832 [2024-11-18 12:06:10.529004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.832 qpair failed and we were unable to recover it.
00:37:44.832 [2024-11-18 12:06:10.529205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.832 [2024-11-18 12:06:10.529270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.832 qpair failed and we were unable to recover it.
00:37:44.832 [2024-11-18 12:06:10.529412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.832 [2024-11-18 12:06:10.529454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.832 qpair failed and we were unable to recover it.
00:37:44.832 [2024-11-18 12:06:10.529580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.832 [2024-11-18 12:06:10.529616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.832 qpair failed and we were unable to recover it.
00:37:44.832 [2024-11-18 12:06:10.529766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.832 [2024-11-18 12:06:10.529821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.832 qpair failed and we were unable to recover it.
00:37:44.832 [2024-11-18 12:06:10.530003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.832 [2024-11-18 12:06:10.530054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.832 qpair failed and we were unable to recover it.
00:37:44.832 [2024-11-18 12:06:10.530206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.832 [2024-11-18 12:06:10.530243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.832 qpair failed and we were unable to recover it.
00:37:44.832 [2024-11-18 12:06:10.530381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.832 [2024-11-18 12:06:10.530420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.832 qpair failed and we were unable to recover it.
00:37:44.832 [2024-11-18 12:06:10.530589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.832 [2024-11-18 12:06:10.530624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.832 qpair failed and we were unable to recover it.
00:37:44.832 [2024-11-18 12:06:10.530758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.832 [2024-11-18 12:06:10.530809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.832 qpair failed and we were unable to recover it.
00:37:44.832 [2024-11-18 12:06:10.530943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.832 [2024-11-18 12:06:10.530987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.832 qpair failed and we were unable to recover it.
00:37:44.832 [2024-11-18 12:06:10.531187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.833 [2024-11-18 12:06:10.531245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.833 qpair failed and we were unable to recover it.
00:37:44.833 [2024-11-18 12:06:10.531373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.833 [2024-11-18 12:06:10.531411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.833 qpair failed and we were unable to recover it.
00:37:44.833 [2024-11-18 12:06:10.531540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.833 [2024-11-18 12:06:10.531574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.833 qpair failed and we were unable to recover it.
00:37:44.833 [2024-11-18 12:06:10.531727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.833 [2024-11-18 12:06:10.531778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.833 qpair failed and we were unable to recover it.
00:37:44.833 [2024-11-18 12:06:10.531906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.833 [2024-11-18 12:06:10.531940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.833 qpair failed and we were unable to recover it.
00:37:44.833 [2024-11-18 12:06:10.532100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.833 [2024-11-18 12:06:10.532137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.833 qpair failed and we were unable to recover it.
00:37:44.833 [2024-11-18 12:06:10.532313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.833 [2024-11-18 12:06:10.532349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.833 qpair failed and we were unable to recover it.
00:37:44.833 [2024-11-18 12:06:10.532500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.833 [2024-11-18 12:06:10.532557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.833 qpair failed and we were unable to recover it.
00:37:44.833 [2024-11-18 12:06:10.532698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.833 [2024-11-18 12:06:10.532731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.833 qpair failed and we were unable to recover it.
00:37:44.833 [2024-11-18 12:06:10.532894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.833 [2024-11-18 12:06:10.532931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.833 qpair failed and we were unable to recover it.
00:37:44.833 [2024-11-18 12:06:10.533095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.833 [2024-11-18 12:06:10.533148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.833 qpair failed and we were unable to recover it.
00:37:44.833 [2024-11-18 12:06:10.533318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.833 [2024-11-18 12:06:10.533355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.833 qpair failed and we were unable to recover it.
00:37:44.833 [2024-11-18 12:06:10.533505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.833 [2024-11-18 12:06:10.533555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.833 qpair failed and we were unable to recover it.
00:37:44.833 [2024-11-18 12:06:10.533719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.833 [2024-11-18 12:06:10.533753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.833 qpair failed and we were unable to recover it.
00:37:44.833 [2024-11-18 12:06:10.533885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.833 [2024-11-18 12:06:10.533917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.833 qpair failed and we were unable to recover it.
00:37:44.833 [2024-11-18 12:06:10.534071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.833 [2024-11-18 12:06:10.534108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.833 qpair failed and we were unable to recover it.
00:37:44.833 [2024-11-18 12:06:10.534280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.833 [2024-11-18 12:06:10.534317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.833 qpair failed and we were unable to recover it.
00:37:44.833 [2024-11-18 12:06:10.534505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.833 [2024-11-18 12:06:10.534553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.833 qpair failed and we were unable to recover it.
00:37:44.833 [2024-11-18 12:06:10.534685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.833 [2024-11-18 12:06:10.534732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.833 qpair failed and we were unable to recover it.
00:37:44.833 [2024-11-18 12:06:10.534882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.833 [2024-11-18 12:06:10.534937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.833 qpair failed and we were unable to recover it.
00:37:44.833 [2024-11-18 12:06:10.535147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.833 [2024-11-18 12:06:10.535182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.833 qpair failed and we were unable to recover it.
00:37:44.833 [2024-11-18 12:06:10.535320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.833 [2024-11-18 12:06:10.535355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.833 qpair failed and we were unable to recover it.
00:37:44.833 [2024-11-18 12:06:10.535460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.833 [2024-11-18 12:06:10.535503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.833 qpair failed and we were unable to recover it.
00:37:44.833 [2024-11-18 12:06:10.535648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.833 [2024-11-18 12:06:10.535684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.833 qpair failed and we were unable to recover it.
00:37:44.833 [2024-11-18 12:06:10.535821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.833 [2024-11-18 12:06:10.535855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.833 qpair failed and we were unable to recover it.
00:37:44.833 [2024-11-18 12:06:10.535990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.833 [2024-11-18 12:06:10.536023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.833 qpair failed and we were unable to recover it.
00:37:44.833 [2024-11-18 12:06:10.536124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.833 [2024-11-18 12:06:10.536158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.833 qpair failed and we were unable to recover it.
00:37:44.833 [2024-11-18 12:06:10.536336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.833 [2024-11-18 12:06:10.536369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.833 qpair failed and we were unable to recover it.
00:37:44.833 [2024-11-18 12:06:10.536503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.833 [2024-11-18 12:06:10.536537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.833 qpair failed and we were unable to recover it.
00:37:44.833 [2024-11-18 12:06:10.536672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.833 [2024-11-18 12:06:10.536710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.833 qpair failed and we were unable to recover it.
00:37:44.833 [2024-11-18 12:06:10.536853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.833 [2024-11-18 12:06:10.536890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.833 qpair failed and we were unable to recover it.
00:37:44.833 [2024-11-18 12:06:10.537007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.833 [2024-11-18 12:06:10.537049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.833 qpair failed and we were unable to recover it. 00:37:44.833 [2024-11-18 12:06:10.537202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.833 [2024-11-18 12:06:10.537239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.833 qpair failed and we were unable to recover it. 00:37:44.833 [2024-11-18 12:06:10.537412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.833 [2024-11-18 12:06:10.537450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.833 qpair failed and we were unable to recover it. 00:37:44.833 [2024-11-18 12:06:10.537609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.833 [2024-11-18 12:06:10.537643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.833 qpair failed and we were unable to recover it. 00:37:44.833 [2024-11-18 12:06:10.537760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.833 [2024-11-18 12:06:10.537793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.833 qpair failed and we were unable to recover it. 
00:37:44.833 [2024-11-18 12:06:10.537921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.833 [2024-11-18 12:06:10.537958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.833 qpair failed and we were unable to recover it. 00:37:44.833 [2024-11-18 12:06:10.538115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.833 [2024-11-18 12:06:10.538149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.833 qpair failed and we were unable to recover it. 00:37:44.834 [2024-11-18 12:06:10.538291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.834 [2024-11-18 12:06:10.538328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.834 qpair failed and we were unable to recover it. 00:37:44.834 [2024-11-18 12:06:10.538508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.834 [2024-11-18 12:06:10.538543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.834 qpair failed and we were unable to recover it. 00:37:44.834 [2024-11-18 12:06:10.538688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.834 [2024-11-18 12:06:10.538740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.834 qpair failed and we were unable to recover it. 
00:37:44.834 [2024-11-18 12:06:10.538921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.834 [2024-11-18 12:06:10.538975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.834 qpair failed and we were unable to recover it. 00:37:44.834 [2024-11-18 12:06:10.539079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.834 [2024-11-18 12:06:10.539113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.834 qpair failed and we were unable to recover it. 00:37:44.834 [2024-11-18 12:06:10.539255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.834 [2024-11-18 12:06:10.539290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.834 qpair failed and we were unable to recover it. 00:37:44.834 [2024-11-18 12:06:10.539421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.834 [2024-11-18 12:06:10.539455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.834 qpair failed and we were unable to recover it. 00:37:44.834 [2024-11-18 12:06:10.539653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.834 [2024-11-18 12:06:10.539706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.834 qpair failed and we were unable to recover it. 
00:37:44.834 [2024-11-18 12:06:10.539875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.834 [2024-11-18 12:06:10.539914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.834 qpair failed and we were unable to recover it. 00:37:44.834 [2024-11-18 12:06:10.540075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.834 [2024-11-18 12:06:10.540110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.834 qpair failed and we were unable to recover it. 00:37:44.834 [2024-11-18 12:06:10.540221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.834 [2024-11-18 12:06:10.540255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.834 qpair failed and we were unable to recover it. 00:37:44.834 [2024-11-18 12:06:10.540378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.834 [2024-11-18 12:06:10.540415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.834 qpair failed and we were unable to recover it. 00:37:44.834 [2024-11-18 12:06:10.540607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.834 [2024-11-18 12:06:10.540641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.834 qpair failed and we were unable to recover it. 
00:37:44.834 [2024-11-18 12:06:10.540757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.834 [2024-11-18 12:06:10.540793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.834 qpair failed and we were unable to recover it. 00:37:44.834 [2024-11-18 12:06:10.540927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.834 [2024-11-18 12:06:10.540961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.834 qpair failed and we were unable to recover it. 00:37:44.834 [2024-11-18 12:06:10.541084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.834 [2024-11-18 12:06:10.541121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.834 qpair failed and we were unable to recover it. 00:37:44.834 [2024-11-18 12:06:10.541365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.834 [2024-11-18 12:06:10.541398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.834 qpair failed and we were unable to recover it. 00:37:44.834 [2024-11-18 12:06:10.541546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.834 [2024-11-18 12:06:10.541581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.834 qpair failed and we were unable to recover it. 
00:37:44.834 [2024-11-18 12:06:10.541692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.834 [2024-11-18 12:06:10.541729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.834 qpair failed and we were unable to recover it. 00:37:44.834 [2024-11-18 12:06:10.541839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.834 [2024-11-18 12:06:10.541875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.834 qpair failed and we were unable to recover it. 00:37:44.834 [2024-11-18 12:06:10.542073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.834 [2024-11-18 12:06:10.542126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.834 qpair failed and we were unable to recover it. 00:37:44.834 [2024-11-18 12:06:10.542313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.834 [2024-11-18 12:06:10.542378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.834 qpair failed and we were unable to recover it. 00:37:44.834 [2024-11-18 12:06:10.542520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.834 [2024-11-18 12:06:10.542556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.834 qpair failed and we were unable to recover it. 
00:37:44.834 [2024-11-18 12:06:10.542673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.834 [2024-11-18 12:06:10.542724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.834 qpair failed and we were unable to recover it. 00:37:44.834 [2024-11-18 12:06:10.542911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.834 [2024-11-18 12:06:10.542949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.834 qpair failed and we were unable to recover it. 00:37:44.834 [2024-11-18 12:06:10.543101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.834 [2024-11-18 12:06:10.543164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.834 qpair failed and we were unable to recover it. 00:37:44.834 [2024-11-18 12:06:10.543283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.834 [2024-11-18 12:06:10.543320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.834 qpair failed and we were unable to recover it. 00:37:44.834 [2024-11-18 12:06:10.543484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.834 [2024-11-18 12:06:10.543549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.834 qpair failed and we were unable to recover it. 
00:37:44.834 [2024-11-18 12:06:10.543678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.834 [2024-11-18 12:06:10.543712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.834 qpair failed and we were unable to recover it. 00:37:44.834 [2024-11-18 12:06:10.543853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.834 [2024-11-18 12:06:10.543886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.834 qpair failed and we were unable to recover it. 00:37:44.834 [2024-11-18 12:06:10.544015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.834 [2024-11-18 12:06:10.544052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.834 qpair failed and we were unable to recover it. 00:37:44.834 [2024-11-18 12:06:10.544214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.834 [2024-11-18 12:06:10.544254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.834 qpair failed and we were unable to recover it. 00:37:44.834 [2024-11-18 12:06:10.544384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.835 [2024-11-18 12:06:10.544417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.835 qpair failed and we were unable to recover it. 
00:37:44.835 [2024-11-18 12:06:10.544590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.835 [2024-11-18 12:06:10.544630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.835 qpair failed and we were unable to recover it. 00:37:44.835 [2024-11-18 12:06:10.544764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.835 [2024-11-18 12:06:10.544827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.835 qpair failed and we were unable to recover it. 00:37:44.835 [2024-11-18 12:06:10.544960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.835 [2024-11-18 12:06:10.545014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.835 qpair failed and we were unable to recover it. 00:37:44.835 [2024-11-18 12:06:10.545138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.835 [2024-11-18 12:06:10.545175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.835 qpair failed and we were unable to recover it. 00:37:44.835 [2024-11-18 12:06:10.545362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.835 [2024-11-18 12:06:10.545419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.835 qpair failed and we were unable to recover it. 
00:37:44.835 [2024-11-18 12:06:10.545550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.835 [2024-11-18 12:06:10.545595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.835 qpair failed and we were unable to recover it. 00:37:44.835 [2024-11-18 12:06:10.545756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.835 [2024-11-18 12:06:10.545790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.835 qpair failed and we were unable to recover it. 00:37:44.835 [2024-11-18 12:06:10.545948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.835 [2024-11-18 12:06:10.545981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.835 qpair failed and we were unable to recover it. 00:37:44.835 [2024-11-18 12:06:10.546222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.835 [2024-11-18 12:06:10.546260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.835 qpair failed and we were unable to recover it. 00:37:44.835 [2024-11-18 12:06:10.546372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.835 [2024-11-18 12:06:10.546409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.835 qpair failed and we were unable to recover it. 
00:37:44.835 [2024-11-18 12:06:10.546581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.835 [2024-11-18 12:06:10.546617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.835 qpair failed and we were unable to recover it. 00:37:44.835 [2024-11-18 12:06:10.546749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.835 [2024-11-18 12:06:10.546782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.835 qpair failed and we were unable to recover it. 00:37:44.835 [2024-11-18 12:06:10.546924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.835 [2024-11-18 12:06:10.546957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.835 qpair failed and we were unable to recover it. 00:37:44.835 [2024-11-18 12:06:10.547177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.835 [2024-11-18 12:06:10.547215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.835 qpair failed and we were unable to recover it. 00:37:44.835 [2024-11-18 12:06:10.547363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.835 [2024-11-18 12:06:10.547414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.835 qpair failed and we were unable to recover it. 
00:37:44.835 [2024-11-18 12:06:10.547577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.835 [2024-11-18 12:06:10.547625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.835 qpair failed and we were unable to recover it. 00:37:44.835 [2024-11-18 12:06:10.547737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.835 [2024-11-18 12:06:10.547771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.835 qpair failed and we were unable to recover it. 00:37:44.835 [2024-11-18 12:06:10.547980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.835 [2024-11-18 12:06:10.548014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.835 qpair failed and we were unable to recover it. 00:37:44.835 [2024-11-18 12:06:10.548271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.835 [2024-11-18 12:06:10.548305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.835 qpair failed and we were unable to recover it. 00:37:44.835 [2024-11-18 12:06:10.548440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.835 [2024-11-18 12:06:10.548518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.835 qpair failed and we were unable to recover it. 
00:37:44.835 [2024-11-18 12:06:10.548667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.835 [2024-11-18 12:06:10.548701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.835 qpair failed and we were unable to recover it. 00:37:44.835 [2024-11-18 12:06:10.548892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.835 [2024-11-18 12:06:10.548929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.835 qpair failed and we were unable to recover it. 00:37:44.835 [2024-11-18 12:06:10.549104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.835 [2024-11-18 12:06:10.549137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.835 qpair failed and we were unable to recover it. 00:37:44.835 [2024-11-18 12:06:10.549275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.835 [2024-11-18 12:06:10.549309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.835 qpair failed and we were unable to recover it. 00:37:44.835 [2024-11-18 12:06:10.549500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.835 [2024-11-18 12:06:10.549548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.835 qpair failed and we were unable to recover it. 
00:37:44.835 [2024-11-18 12:06:10.549714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.835 [2024-11-18 12:06:10.549780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.835 qpair failed and we were unable to recover it. 00:37:44.835 [2024-11-18 12:06:10.549973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.835 [2024-11-18 12:06:10.550024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.835 qpair failed and we were unable to recover it. 00:37:44.835 [2024-11-18 12:06:10.550169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.835 [2024-11-18 12:06:10.550245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.835 qpair failed and we were unable to recover it. 00:37:44.835 [2024-11-18 12:06:10.550394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.835 [2024-11-18 12:06:10.550431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.835 qpair failed and we were unable to recover it. 00:37:44.835 [2024-11-18 12:06:10.550609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.835 [2024-11-18 12:06:10.550643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.835 qpair failed and we were unable to recover it. 
00:37:44.835 [2024-11-18 12:06:10.550777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.835 [2024-11-18 12:06:10.550814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.835 qpair failed and we were unable to recover it. 00:37:44.835 [2024-11-18 12:06:10.550945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.835 [2024-11-18 12:06:10.550982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.835 qpair failed and we were unable to recover it. 00:37:44.835 [2024-11-18 12:06:10.551123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.835 [2024-11-18 12:06:10.551176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.835 qpair failed and we were unable to recover it. 00:37:44.835 [2024-11-18 12:06:10.551343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.835 [2024-11-18 12:06:10.551378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.835 qpair failed and we were unable to recover it. 00:37:44.835 [2024-11-18 12:06:10.551526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.835 [2024-11-18 12:06:10.551575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.835 qpair failed and we were unable to recover it. 
00:37:44.835 [2024-11-18 12:06:10.551722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.835 [2024-11-18 12:06:10.551760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.835 qpair failed and we were unable to recover it. 00:37:44.835 [2024-11-18 12:06:10.551900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.835 [2024-11-18 12:06:10.551944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.836 qpair failed and we were unable to recover it. 00:37:44.836 [2024-11-18 12:06:10.552105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.836 [2024-11-18 12:06:10.552155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.836 qpair failed and we were unable to recover it. 00:37:44.836 [2024-11-18 12:06:10.552340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.836 [2024-11-18 12:06:10.552392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.836 qpair failed and we were unable to recover it. 00:37:44.836 [2024-11-18 12:06:10.552533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.836 [2024-11-18 12:06:10.552584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.836 qpair failed and we were unable to recover it. 
00:37:44.836 [2024-11-18 12:06:10.552733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.836 [2024-11-18 12:06:10.552772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.836 qpair failed and we were unable to recover it. 00:37:44.836 [2024-11-18 12:06:10.552882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.836 [2024-11-18 12:06:10.552916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.836 qpair failed and we were unable to recover it. 00:37:44.836 [2024-11-18 12:06:10.553164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.836 [2024-11-18 12:06:10.553219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.836 qpair failed and we were unable to recover it. 00:37:44.836 [2024-11-18 12:06:10.553383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.836 [2024-11-18 12:06:10.553423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.836 qpair failed and we were unable to recover it. 00:37:44.836 [2024-11-18 12:06:10.553591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.836 [2024-11-18 12:06:10.553640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.836 qpair failed and we were unable to recover it. 
00:37:44.836 [2024-11-18 12:06:10.553764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.836 [2024-11-18 12:06:10.553819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.836 qpair failed and we were unable to recover it. 00:37:44.836 [2024-11-18 12:06:10.553981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.836 [2024-11-18 12:06:10.554045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.836 qpair failed and we were unable to recover it. 00:37:44.836 [2024-11-18 12:06:10.554240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.836 [2024-11-18 12:06:10.554300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.836 qpair failed and we were unable to recover it. 00:37:44.836 [2024-11-18 12:06:10.554503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.836 [2024-11-18 12:06:10.554537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.836 qpair failed and we were unable to recover it. 00:37:44.836 [2024-11-18 12:06:10.554698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.836 [2024-11-18 12:06:10.554751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.836 qpair failed and we were unable to recover it. 
00:37:44.836 [2024-11-18 12:06:10.554896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.836 [2024-11-18 12:06:10.554935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.836 qpair failed and we were unable to recover it. 00:37:44.836 [2024-11-18 12:06:10.555051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.836 [2024-11-18 12:06:10.555086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.836 qpair failed and we were unable to recover it. 00:37:44.836 [2024-11-18 12:06:10.555251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.836 [2024-11-18 12:06:10.555286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.836 qpair failed and we were unable to recover it. 00:37:44.836 [2024-11-18 12:06:10.555419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.836 [2024-11-18 12:06:10.555452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.836 qpair failed and we were unable to recover it. 00:37:44.836 [2024-11-18 12:06:10.555609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.836 [2024-11-18 12:06:10.555646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.836 qpair failed and we were unable to recover it. 
00:37:44.836 [2024-11-18 12:06:10.555762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.836 [2024-11-18 12:06:10.555805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.836 qpair failed and we were unable to recover it. 00:37:44.836 [2024-11-18 12:06:10.555951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.836 [2024-11-18 12:06:10.555987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.836 qpair failed and we were unable to recover it. 00:37:44.836 [2024-11-18 12:06:10.556116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.836 [2024-11-18 12:06:10.556163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.836 qpair failed and we were unable to recover it. 00:37:44.836 [2024-11-18 12:06:10.556288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.836 [2024-11-18 12:06:10.556322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.836 qpair failed and we were unable to recover it. 00:37:44.836 [2024-11-18 12:06:10.556430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.836 [2024-11-18 12:06:10.556464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.836 qpair failed and we were unable to recover it. 
00:37:44.836 [2024-11-18 12:06:10.556605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.836 [2024-11-18 12:06:10.556654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.836 qpair failed and we were unable to recover it. 00:37:44.836 [2024-11-18 12:06:10.556770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.836 [2024-11-18 12:06:10.556808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.836 qpair failed and we were unable to recover it. 00:37:44.836 [2024-11-18 12:06:10.556917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.836 [2024-11-18 12:06:10.556951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.836 qpair failed and we were unable to recover it. 00:37:44.836 [2024-11-18 12:06:10.557116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.836 [2024-11-18 12:06:10.557151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.836 qpair failed and we were unable to recover it. 00:37:44.836 [2024-11-18 12:06:10.557255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.836 [2024-11-18 12:06:10.557290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.836 qpair failed and we were unable to recover it. 
00:37:44.836 [2024-11-18 12:06:10.557423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.836 [2024-11-18 12:06:10.557457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.836 qpair failed and we were unable to recover it. 00:37:44.836 [2024-11-18 12:06:10.557584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.836 [2024-11-18 12:06:10.557620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.836 qpair failed and we were unable to recover it. 00:37:44.836 [2024-11-18 12:06:10.557738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.836 [2024-11-18 12:06:10.557783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.836 qpair failed and we were unable to recover it. 00:37:44.836 [2024-11-18 12:06:10.557915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.836 [2024-11-18 12:06:10.557963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.836 qpair failed and we were unable to recover it. 00:37:44.836 [2024-11-18 12:06:10.558095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.836 [2024-11-18 12:06:10.558131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.836 qpair failed and we were unable to recover it. 
00:37:44.836 [2024-11-18 12:06:10.558291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.836 [2024-11-18 12:06:10.558325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.836 qpair failed and we were unable to recover it. 00:37:44.836 [2024-11-18 12:06:10.558461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.836 [2024-11-18 12:06:10.558512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.836 qpair failed and we were unable to recover it. 00:37:44.836 [2024-11-18 12:06:10.558650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.836 [2024-11-18 12:06:10.558686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.836 qpair failed and we were unable to recover it. 00:37:44.836 [2024-11-18 12:06:10.558806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.836 [2024-11-18 12:06:10.558852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.836 qpair failed and we were unable to recover it. 00:37:44.836 [2024-11-18 12:06:10.558987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.837 [2024-11-18 12:06:10.559022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.837 qpair failed and we were unable to recover it. 
00:37:44.837 [2024-11-18 12:06:10.559150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.837 [2024-11-18 12:06:10.559189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.837 qpair failed and we were unable to recover it. 00:37:44.837 [2024-11-18 12:06:10.559376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.837 [2024-11-18 12:06:10.559410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.837 qpair failed and we were unable to recover it. 00:37:44.837 [2024-11-18 12:06:10.559542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.837 [2024-11-18 12:06:10.559579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.837 qpair failed and we were unable to recover it. 00:37:44.837 [2024-11-18 12:06:10.559682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.837 [2024-11-18 12:06:10.559716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.837 qpair failed and we were unable to recover it. 00:37:44.837 [2024-11-18 12:06:10.559855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.837 [2024-11-18 12:06:10.559909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.837 qpair failed and we were unable to recover it. 
00:37:44.837 [2024-11-18 12:06:10.560086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.837 [2024-11-18 12:06:10.560143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.837 qpair failed and we were unable to recover it. 00:37:44.837 [2024-11-18 12:06:10.560324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.837 [2024-11-18 12:06:10.560377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.837 qpair failed and we were unable to recover it. 00:37:44.837 [2024-11-18 12:06:10.560574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.837 [2024-11-18 12:06:10.560611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.837 qpair failed and we were unable to recover it. 00:37:44.837 [2024-11-18 12:06:10.560720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.837 [2024-11-18 12:06:10.560755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.837 qpair failed and we were unable to recover it. 00:37:44.837 [2024-11-18 12:06:10.560895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.837 [2024-11-18 12:06:10.560946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.837 qpair failed and we were unable to recover it. 
00:37:44.837 [2024-11-18 12:06:10.561112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.837 [2024-11-18 12:06:10.561168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.837 qpair failed and we were unable to recover it. 00:37:44.837 [2024-11-18 12:06:10.561347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.837 [2024-11-18 12:06:10.561395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.837 qpair failed and we were unable to recover it. 00:37:44.837 [2024-11-18 12:06:10.561529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.837 [2024-11-18 12:06:10.561563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.837 qpair failed and we were unable to recover it. 00:37:44.837 [2024-11-18 12:06:10.561687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.837 [2024-11-18 12:06:10.561735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.837 qpair failed and we were unable to recover it. 00:37:44.837 [2024-11-18 12:06:10.561913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.837 [2024-11-18 12:06:10.561968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.837 qpair failed and we were unable to recover it. 
00:37:44.837 [2024-11-18 12:06:10.562148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.837 [2024-11-18 12:06:10.562186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.837 qpair failed and we were unable to recover it. 00:37:44.837 [2024-11-18 12:06:10.562363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.837 [2024-11-18 12:06:10.562402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.837 qpair failed and we were unable to recover it. 00:37:44.837 [2024-11-18 12:06:10.562545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.837 [2024-11-18 12:06:10.562580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.837 qpair failed and we were unable to recover it. 00:37:44.837 [2024-11-18 12:06:10.562692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.837 [2024-11-18 12:06:10.562726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.837 qpair failed and we were unable to recover it. 00:37:44.837 [2024-11-18 12:06:10.562932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.837 [2024-11-18 12:06:10.562977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.837 qpair failed and we were unable to recover it. 
00:37:44.837 [2024-11-18 12:06:10.563206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.837 [2024-11-18 12:06:10.563240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.837 qpair failed and we were unable to recover it. 00:37:44.837 [2024-11-18 12:06:10.563373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.837 [2024-11-18 12:06:10.563407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.837 qpair failed and we were unable to recover it. 00:37:44.837 [2024-11-18 12:06:10.563559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.837 [2024-11-18 12:06:10.563599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.837 qpair failed and we were unable to recover it. 00:37:44.837 [2024-11-18 12:06:10.563736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.837 [2024-11-18 12:06:10.563784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.837 qpair failed and we were unable to recover it. 00:37:44.837 [2024-11-18 12:06:10.563925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.837 [2024-11-18 12:06:10.563961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.837 qpair failed and we were unable to recover it. 
00:37:44.837 [2024-11-18 12:06:10.564134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.837 [2024-11-18 12:06:10.564173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.837 qpair failed and we were unable to recover it. 00:37:44.837 [2024-11-18 12:06:10.564330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.837 [2024-11-18 12:06:10.564369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.837 qpair failed and we were unable to recover it. 00:37:44.837 [2024-11-18 12:06:10.564543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.837 [2024-11-18 12:06:10.564578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.837 qpair failed and we were unable to recover it. 00:37:44.837 [2024-11-18 12:06:10.564693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.837 [2024-11-18 12:06:10.564729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.837 qpair failed and we were unable to recover it. 00:37:44.837 [2024-11-18 12:06:10.564886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.837 [2024-11-18 12:06:10.564925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.837 qpair failed and we were unable to recover it. 
00:37:44.837 [2024-11-18 12:06:10.565106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.837 [2024-11-18 12:06:10.565144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.837 qpair failed and we were unable to recover it. 00:37:44.837 [2024-11-18 12:06:10.565289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.837 [2024-11-18 12:06:10.565326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.837 qpair failed and we were unable to recover it. 00:37:44.837 [2024-11-18 12:06:10.565524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.837 [2024-11-18 12:06:10.565560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.837 qpair failed and we were unable to recover it. 00:37:44.837 [2024-11-18 12:06:10.565664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.837 [2024-11-18 12:06:10.565698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.837 qpair failed and we were unable to recover it. 00:37:44.837 [2024-11-18 12:06:10.565838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.837 [2024-11-18 12:06:10.565874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.837 qpair failed and we were unable to recover it. 
00:37:44.837 [2024-11-18 12:06:10.566010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.837 [2024-11-18 12:06:10.566044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.837 qpair failed and we were unable to recover it. 00:37:44.837 [2024-11-18 12:06:10.566248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.837 [2024-11-18 12:06:10.566314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.837 qpair failed and we were unable to recover it. 00:37:44.838 [2024-11-18 12:06:10.566439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.838 [2024-11-18 12:06:10.566495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.838 qpair failed and we were unable to recover it. 00:37:44.838 [2024-11-18 12:06:10.566667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.838 [2024-11-18 12:06:10.566716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.838 qpair failed and we were unable to recover it. 00:37:44.838 [2024-11-18 12:06:10.566840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.838 [2024-11-18 12:06:10.566876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.838 qpair failed and we were unable to recover it. 
00:37:44.838 [2024-11-18 12:06:10.567045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.838 [2024-11-18 12:06:10.567086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.838 qpair failed and we were unable to recover it. 00:37:44.838 [2024-11-18 12:06:10.567218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.838 [2024-11-18 12:06:10.567253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.838 qpair failed and we were unable to recover it. 00:37:44.838 [2024-11-18 12:06:10.567399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.838 [2024-11-18 12:06:10.567433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.838 qpair failed and we were unable to recover it. 00:37:44.838 [2024-11-18 12:06:10.567564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.838 [2024-11-18 12:06:10.567599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.838 qpair failed and we were unable to recover it. 00:37:44.838 [2024-11-18 12:06:10.567707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.838 [2024-11-18 12:06:10.567740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.838 qpair failed and we were unable to recover it. 
00:37:44.838 [2024-11-18 12:06:10.567919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.838 [2024-11-18 12:06:10.567958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.838 qpair failed and we were unable to recover it. 00:37:44.838 [2024-11-18 12:06:10.568055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.838 [2024-11-18 12:06:10.568089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.838 qpair failed and we were unable to recover it. 00:37:44.838 [2024-11-18 12:06:10.568219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.838 [2024-11-18 12:06:10.568256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.838 qpair failed and we were unable to recover it. 00:37:44.838 [2024-11-18 12:06:10.568394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.838 [2024-11-18 12:06:10.568441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.838 qpair failed and we were unable to recover it. 00:37:44.838 [2024-11-18 12:06:10.568592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.838 [2024-11-18 12:06:10.568641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.838 qpair failed and we were unable to recover it. 
00:37:44.838 [2024-11-18 12:06:10.568818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.838 [2024-11-18 12:06:10.568858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.838 qpair failed and we were unable to recover it. 00:37:44.838 [2024-11-18 12:06:10.569014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.838 [2024-11-18 12:06:10.569083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.838 qpair failed and we were unable to recover it. 00:37:44.838 [2024-11-18 12:06:10.569293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.838 [2024-11-18 12:06:10.569330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.838 qpair failed and we were unable to recover it. 00:37:44.838 [2024-11-18 12:06:10.569459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.838 [2024-11-18 12:06:10.569500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.838 qpair failed and we were unable to recover it. 00:37:44.838 [2024-11-18 12:06:10.569614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.838 [2024-11-18 12:06:10.569648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.838 qpair failed and we were unable to recover it. 
00:37:44.838 [2024-11-18 12:06:10.569767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.838 [2024-11-18 12:06:10.569812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.838 qpair failed and we were unable to recover it. 00:37:44.838 [2024-11-18 12:06:10.569935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.838 [2024-11-18 12:06:10.569969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.838 qpair failed and we were unable to recover it. 00:37:44.838 [2024-11-18 12:06:10.570112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.838 [2024-11-18 12:06:10.570146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.838 qpair failed and we were unable to recover it. 00:37:44.838 [2024-11-18 12:06:10.570282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.838 [2024-11-18 12:06:10.570329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.838 qpair failed and we were unable to recover it. 00:37:44.838 [2024-11-18 12:06:10.570510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.838 [2024-11-18 12:06:10.570546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.838 qpair failed and we were unable to recover it. 
00:37:44.838 [2024-11-18 12:06:10.570662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.838 [2024-11-18 12:06:10.570697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.838 qpair failed and we were unable to recover it. 00:37:44.838 [2024-11-18 12:06:10.570905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.838 [2024-11-18 12:06:10.570946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.838 qpair failed and we were unable to recover it. 00:37:44.838 [2024-11-18 12:06:10.571072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.838 [2024-11-18 12:06:10.571136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.838 qpair failed and we were unable to recover it. 00:37:44.838 [2024-11-18 12:06:10.571302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.838 [2024-11-18 12:06:10.571337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.838 qpair failed and we were unable to recover it. 00:37:44.838 [2024-11-18 12:06:10.571436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.838 [2024-11-18 12:06:10.571471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.838 qpair failed and we were unable to recover it. 
00:37:44.838 [2024-11-18 12:06:10.571632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.838 [2024-11-18 12:06:10.571666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.838 qpair failed and we were unable to recover it. 00:37:44.838 [2024-11-18 12:06:10.571774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.838 [2024-11-18 12:06:10.571808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.838 qpair failed and we were unable to recover it. 00:37:44.838 [2024-11-18 12:06:10.571949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.838 [2024-11-18 12:06:10.571984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.838 qpair failed and we were unable to recover it. 00:37:44.838 [2024-11-18 12:06:10.572081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.838 [2024-11-18 12:06:10.572128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.838 qpair failed and we were unable to recover it. 00:37:44.838 [2024-11-18 12:06:10.572273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.839 [2024-11-18 12:06:10.572308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.839 qpair failed and we were unable to recover it. 
00:37:44.839 [2024-11-18 12:06:10.572455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.839 [2024-11-18 12:06:10.572506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.839 qpair failed and we were unable to recover it. 00:37:44.839 [2024-11-18 12:06:10.572613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.839 [2024-11-18 12:06:10.572647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.839 qpair failed and we were unable to recover it. 00:37:44.839 [2024-11-18 12:06:10.572769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.839 [2024-11-18 12:06:10.572817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.839 qpair failed and we were unable to recover it. 00:37:44.839 [2024-11-18 12:06:10.572967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.839 [2024-11-18 12:06:10.573003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.839 qpair failed and we were unable to recover it. 00:37:44.839 [2024-11-18 12:06:10.573137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.839 [2024-11-18 12:06:10.573172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.839 qpair failed and we were unable to recover it. 
00:37:44.839 [2024-11-18 12:06:10.573310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.839 [2024-11-18 12:06:10.573344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.839 qpair failed and we were unable to recover it. 00:37:44.839 [2024-11-18 12:06:10.573454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.839 [2024-11-18 12:06:10.573498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.839 qpair failed and we were unable to recover it. 00:37:44.839 [2024-11-18 12:06:10.573662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.839 [2024-11-18 12:06:10.573696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.839 qpair failed and we were unable to recover it. 00:37:44.839 [2024-11-18 12:06:10.573847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.839 [2024-11-18 12:06:10.573886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.839 qpair failed and we were unable to recover it. 00:37:44.839 [2024-11-18 12:06:10.574032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.839 [2024-11-18 12:06:10.574071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.839 qpair failed and we were unable to recover it. 
00:37:44.839 [2024-11-18 12:06:10.574204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.839 [2024-11-18 12:06:10.574252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.839 qpair failed and we were unable to recover it. 00:37:44.839 [2024-11-18 12:06:10.574404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.839 [2024-11-18 12:06:10.574441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.839 qpair failed and we were unable to recover it. 00:37:44.839 [2024-11-18 12:06:10.574585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.839 [2024-11-18 12:06:10.574621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.839 qpair failed and we were unable to recover it. 00:37:44.839 [2024-11-18 12:06:10.574738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.839 [2024-11-18 12:06:10.574772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.839 qpair failed and we were unable to recover it. 00:37:44.839 [2024-11-18 12:06:10.574936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.839 [2024-11-18 12:06:10.574983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.839 qpair failed and we were unable to recover it. 
00:37:44.839 [2024-11-18 12:06:10.575122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.839 [2024-11-18 12:06:10.575160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.839 qpair failed and we were unable to recover it. 00:37:44.839 [2024-11-18 12:06:10.575300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.839 [2024-11-18 12:06:10.575335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.839 qpair failed and we were unable to recover it. 00:37:44.839 [2024-11-18 12:06:10.575439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.839 [2024-11-18 12:06:10.575473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.839 qpair failed and we were unable to recover it. 00:37:44.839 [2024-11-18 12:06:10.575617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.839 [2024-11-18 12:06:10.575652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.839 qpair failed and we were unable to recover it. 00:37:44.839 [2024-11-18 12:06:10.575779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.839 [2024-11-18 12:06:10.575816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.839 qpair failed and we were unable to recover it. 
00:37:44.839 [2024-11-18 12:06:10.575942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.839 [2024-11-18 12:06:10.575980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.839 qpair failed and we were unable to recover it. 00:37:44.839 [2024-11-18 12:06:10.576115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.839 [2024-11-18 12:06:10.576151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.839 qpair failed and we were unable to recover it. 00:37:44.839 [2024-11-18 12:06:10.576289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.839 [2024-11-18 12:06:10.576325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.839 qpair failed and we were unable to recover it. 00:37:44.839 [2024-11-18 12:06:10.576460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.839 [2024-11-18 12:06:10.576504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.839 qpair failed and we were unable to recover it. 00:37:44.839 [2024-11-18 12:06:10.576646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.839 [2024-11-18 12:06:10.576680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.839 qpair failed and we were unable to recover it. 
00:37:44.839 [2024-11-18 12:06:10.576857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.839 [2024-11-18 12:06:10.576894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.839 qpair failed and we were unable to recover it. 00:37:44.839 [2024-11-18 12:06:10.577039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.839 [2024-11-18 12:06:10.577076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.839 qpair failed and we were unable to recover it. 00:37:44.839 [2024-11-18 12:06:10.577259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.839 [2024-11-18 12:06:10.577297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.839 qpair failed and we were unable to recover it. 00:37:44.839 [2024-11-18 12:06:10.577442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.839 [2024-11-18 12:06:10.577487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.839 qpair failed and we were unable to recover it. 00:37:44.839 [2024-11-18 12:06:10.577671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.839 [2024-11-18 12:06:10.577719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.839 qpair failed and we were unable to recover it. 
00:37:44.839 [2024-11-18 12:06:10.577868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.839 [2024-11-18 12:06:10.577903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.839 qpair failed and we were unable to recover it. 00:37:44.839 [2024-11-18 12:06:10.578104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.839 [2024-11-18 12:06:10.578168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.839 qpair failed and we were unable to recover it. 00:37:44.839 [2024-11-18 12:06:10.578298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.839 [2024-11-18 12:06:10.578336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.839 qpair failed and we were unable to recover it. 00:37:44.839 [2024-11-18 12:06:10.578457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.839 [2024-11-18 12:06:10.578500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.839 qpair failed and we were unable to recover it. 00:37:44.839 [2024-11-18 12:06:10.578676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.839 [2024-11-18 12:06:10.578712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.839 qpair failed and we were unable to recover it. 
00:37:44.839 [2024-11-18 12:06:10.578852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.839 [2024-11-18 12:06:10.578889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.839 qpair failed and we were unable to recover it. 00:37:44.839 [2024-11-18 12:06:10.579023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.839 [2024-11-18 12:06:10.579059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.839 qpair failed and we were unable to recover it. 00:37:44.840 [2024-11-18 12:06:10.579239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.840 [2024-11-18 12:06:10.579274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.840 qpair failed and we were unable to recover it. 00:37:44.840 [2024-11-18 12:06:10.579409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.840 [2024-11-18 12:06:10.579443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.840 qpair failed and we were unable to recover it. 00:37:44.840 [2024-11-18 12:06:10.579565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.840 [2024-11-18 12:06:10.579600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.840 qpair failed and we were unable to recover it. 
00:37:44.840 [2024-11-18 12:06:10.579736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.840 [2024-11-18 12:06:10.579771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.840 qpair failed and we were unable to recover it. 00:37:44.840 [2024-11-18 12:06:10.579952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.840 [2024-11-18 12:06:10.580058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.840 qpair failed and we were unable to recover it. 00:37:44.840 [2024-11-18 12:06:10.580194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.840 [2024-11-18 12:06:10.580269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.840 qpair failed and we were unable to recover it. 00:37:44.840 [2024-11-18 12:06:10.580400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.840 [2024-11-18 12:06:10.580434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.840 qpair failed and we were unable to recover it. 00:37:44.840 [2024-11-18 12:06:10.580619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.840 [2024-11-18 12:06:10.580658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.840 qpair failed and we were unable to recover it. 
00:37:44.840 [2024-11-18 12:06:10.580861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.840 [2024-11-18 12:06:10.580914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.840 qpair failed and we were unable to recover it. 00:37:44.840 [2024-11-18 12:06:10.581142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.840 [2024-11-18 12:06:10.581178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.840 qpair failed and we were unable to recover it. 00:37:44.840 [2024-11-18 12:06:10.581365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.840 [2024-11-18 12:06:10.581403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.840 qpair failed and we were unable to recover it. 00:37:44.840 [2024-11-18 12:06:10.581564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.840 [2024-11-18 12:06:10.581599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.840 qpair failed and we were unable to recover it. 00:37:44.840 [2024-11-18 12:06:10.581715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.840 [2024-11-18 12:06:10.581749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.840 qpair failed and we were unable to recover it. 
00:37:44.840 [2024-11-18 12:06:10.581886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.840 [2024-11-18 12:06:10.581925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.840 qpair failed and we were unable to recover it. 00:37:44.840 [2024-11-18 12:06:10.582040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.840 [2024-11-18 12:06:10.582074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.840 qpair failed and we were unable to recover it. 00:37:44.840 [2024-11-18 12:06:10.582272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.840 [2024-11-18 12:06:10.582306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.840 qpair failed and we were unable to recover it. 00:37:44.840 [2024-11-18 12:06:10.582425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.840 [2024-11-18 12:06:10.582459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.840 qpair failed and we were unable to recover it. 00:37:44.840 [2024-11-18 12:06:10.582627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.840 [2024-11-18 12:06:10.582664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.840 qpair failed and we were unable to recover it. 
00:37:44.840 [2024-11-18 12:06:10.582814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.840 [2024-11-18 12:06:10.582882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.840 qpair failed and we were unable to recover it. 00:37:44.840 [2024-11-18 12:06:10.583045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.840 [2024-11-18 12:06:10.583087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.840 qpair failed and we were unable to recover it. 00:37:44.840 [2024-11-18 12:06:10.583346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.840 [2024-11-18 12:06:10.583411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.840 qpair failed and we were unable to recover it. 00:37:44.840 [2024-11-18 12:06:10.583591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.840 [2024-11-18 12:06:10.583626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.840 qpair failed and we were unable to recover it. 00:37:44.840 [2024-11-18 12:06:10.583736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.840 [2024-11-18 12:06:10.583770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.840 qpair failed and we were unable to recover it. 
00:37:44.840 [2024-11-18 12:06:10.583890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.840 [2024-11-18 12:06:10.583924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.840 qpair failed and we were unable to recover it. 00:37:44.840 [2024-11-18 12:06:10.584074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.840 [2024-11-18 12:06:10.584113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.840 qpair failed and we were unable to recover it. 00:37:44.840 [2024-11-18 12:06:10.584295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.840 [2024-11-18 12:06:10.584350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.840 qpair failed and we were unable to recover it. 00:37:44.840 [2024-11-18 12:06:10.584525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.840 [2024-11-18 12:06:10.584590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.840 qpair failed and we were unable to recover it. 00:37:44.840 [2024-11-18 12:06:10.584737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.840 [2024-11-18 12:06:10.584779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.840 qpair failed and we were unable to recover it. 
00:37:44.840 [2024-11-18 12:06:10.584968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.840 [2024-11-18 12:06:10.585025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.840 qpair failed and we were unable to recover it. 00:37:44.840 [2024-11-18 12:06:10.585233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.840 [2024-11-18 12:06:10.585297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.840 qpair failed and we were unable to recover it. 00:37:44.840 [2024-11-18 12:06:10.585431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.840 [2024-11-18 12:06:10.585466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.840 qpair failed and we were unable to recover it. 00:37:44.840 [2024-11-18 12:06:10.585615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.840 [2024-11-18 12:06:10.585651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.840 qpair failed and we were unable to recover it. 00:37:44.840 [2024-11-18 12:06:10.585763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.840 [2024-11-18 12:06:10.585798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.840 qpair failed and we were unable to recover it. 
00:37:44.840 [2024-11-18 12:06:10.585966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.840 [2024-11-18 12:06:10.586001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.840 qpair failed and we were unable to recover it. 00:37:44.840 [2024-11-18 12:06:10.586176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.840 [2024-11-18 12:06:10.586232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.840 qpair failed and we were unable to recover it. 00:37:44.840 [2024-11-18 12:06:10.586349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.840 [2024-11-18 12:06:10.586400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.840 qpair failed and we were unable to recover it. 00:37:44.840 [2024-11-18 12:06:10.586561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.840 [2024-11-18 12:06:10.586609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.840 qpair failed and we were unable to recover it. 00:37:44.840 [2024-11-18 12:06:10.586753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.841 [2024-11-18 12:06:10.586795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.841 qpair failed and we were unable to recover it. 
00:37:44.841 [2024-11-18 12:06:10.586936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.841 [2024-11-18 12:06:10.586970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.841 qpair failed and we were unable to recover it. 00:37:44.841 [2024-11-18 12:06:10.587127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.841 [2024-11-18 12:06:10.587164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.841 qpair failed and we were unable to recover it. 00:37:44.841 [2024-11-18 12:06:10.587317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.841 [2024-11-18 12:06:10.587354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.841 qpair failed and we were unable to recover it. 00:37:44.841 [2024-11-18 12:06:10.587470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.841 [2024-11-18 12:06:10.587547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.841 qpair failed and we were unable to recover it. 00:37:44.841 [2024-11-18 12:06:10.587684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.841 [2024-11-18 12:06:10.587717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.841 qpair failed and we were unable to recover it. 
00:37:44.841 [2024-11-18 12:06:10.587858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.841 [2024-11-18 12:06:10.587892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.841 qpair failed and we were unable to recover it. 00:37:44.841 [2024-11-18 12:06:10.588064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.841 [2024-11-18 12:06:10.588102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.841 qpair failed and we were unable to recover it. 00:37:44.841 [2024-11-18 12:06:10.588245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.841 [2024-11-18 12:06:10.588300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.841 qpair failed and we were unable to recover it. 00:37:44.841 [2024-11-18 12:06:10.588469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.841 [2024-11-18 12:06:10.588514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.841 qpair failed and we were unable to recover it. 00:37:44.841 [2024-11-18 12:06:10.588631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.841 [2024-11-18 12:06:10.588666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.841 qpair failed and we were unable to recover it. 
00:37:44.841 [2024-11-18 12:06:10.588818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.841 [2024-11-18 12:06:10.588851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.841 qpair failed and we were unable to recover it. 00:37:44.841 [2024-11-18 12:06:10.588996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.841 [2024-11-18 12:06:10.589033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.841 qpair failed and we were unable to recover it. 00:37:44.841 [2024-11-18 12:06:10.589189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.841 [2024-11-18 12:06:10.589223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.841 qpair failed and we were unable to recover it. 00:37:44.841 [2024-11-18 12:06:10.589395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.841 [2024-11-18 12:06:10.589428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.841 qpair failed and we were unable to recover it. 00:37:44.841 [2024-11-18 12:06:10.589553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.841 [2024-11-18 12:06:10.589588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.841 qpair failed and we were unable to recover it. 
00:37:44.841 [2024-11-18 12:06:10.589721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.841 [2024-11-18 12:06:10.589755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.841 qpair failed and we were unable to recover it. 00:37:44.841 [2024-11-18 12:06:10.589956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.841 [2024-11-18 12:06:10.589996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.841 qpair failed and we were unable to recover it. 00:37:44.841 [2024-11-18 12:06:10.590174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.841 [2024-11-18 12:06:10.590231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.841 qpair failed and we were unable to recover it. 00:37:44.841 [2024-11-18 12:06:10.590393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.841 [2024-11-18 12:06:10.590444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.841 qpair failed and we were unable to recover it. 00:37:44.841 [2024-11-18 12:06:10.590580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.841 [2024-11-18 12:06:10.590615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.841 qpair failed and we were unable to recover it. 
00:37:44.841 [2024-11-18 12:06:10.590765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.841 [2024-11-18 12:06:10.590816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.841 qpair failed and we were unable to recover it. 00:37:44.841 [2024-11-18 12:06:10.590949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.841 [2024-11-18 12:06:10.590986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.841 qpair failed and we were unable to recover it. 00:37:44.841 [2024-11-18 12:06:10.591141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.841 [2024-11-18 12:06:10.591195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.841 qpair failed and we were unable to recover it. 00:37:44.841 [2024-11-18 12:06:10.591331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.841 [2024-11-18 12:06:10.591367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.841 qpair failed and we were unable to recover it. 00:37:44.841 [2024-11-18 12:06:10.591485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.841 [2024-11-18 12:06:10.591546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.841 qpair failed and we were unable to recover it. 
00:37:44.841 [2024-11-18 12:06:10.591663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.841 [2024-11-18 12:06:10.591698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.841 qpair failed and we were unable to recover it.
00:37:44.841 [2024-11-18 12:06:10.591806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.841 [2024-11-18 12:06:10.591850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.841 qpair failed and we were unable to recover it.
00:37:44.841 [2024-11-18 12:06:10.591996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.841 [2024-11-18 12:06:10.592041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.841 qpair failed and we were unable to recover it.
00:37:44.841 [2024-11-18 12:06:10.592139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.841 [2024-11-18 12:06:10.592173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.841 qpair failed and we were unable to recover it.
00:37:44.841 [2024-11-18 12:06:10.592286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.841 [2024-11-18 12:06:10.592320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.841 qpair failed and we were unable to recover it.
00:37:44.841 [2024-11-18 12:06:10.592468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.841 [2024-11-18 12:06:10.592533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.841 qpair failed and we were unable to recover it.
00:37:44.841 [2024-11-18 12:06:10.592674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.841 [2024-11-18 12:06:10.592709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.841 qpair failed and we were unable to recover it.
00:37:44.841 [2024-11-18 12:06:10.592839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.841 [2024-11-18 12:06:10.592876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.841 qpair failed and we were unable to recover it.
00:37:44.841 [2024-11-18 12:06:10.593078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.841 [2024-11-18 12:06:10.593146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.841 qpair failed and we were unable to recover it.
00:37:44.841 [2024-11-18 12:06:10.593323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.841 [2024-11-18 12:06:10.593358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.841 qpair failed and we were unable to recover it.
00:37:44.841 [2024-11-18 12:06:10.593498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.841 [2024-11-18 12:06:10.593531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.841 qpair failed and we were unable to recover it.
00:37:44.841 [2024-11-18 12:06:10.593650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.841 [2024-11-18 12:06:10.593683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.841 qpair failed and we were unable to recover it.
00:37:44.842 [2024-11-18 12:06:10.593870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.842 [2024-11-18 12:06:10.593905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.842 qpair failed and we were unable to recover it.
00:37:44.842 [2024-11-18 12:06:10.594080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.842 [2024-11-18 12:06:10.594129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.842 qpair failed and we were unable to recover it.
00:37:44.842 [2024-11-18 12:06:10.594325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.842 [2024-11-18 12:06:10.594361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.842 qpair failed and we were unable to recover it.
00:37:44.842 [2024-11-18 12:06:10.594537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.842 [2024-11-18 12:06:10.594572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.842 qpair failed and we were unable to recover it.
00:37:44.842 [2024-11-18 12:06:10.594704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.842 [2024-11-18 12:06:10.594738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.842 qpair failed and we were unable to recover it.
00:37:44.842 [2024-11-18 12:06:10.594865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.842 [2024-11-18 12:06:10.594903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.842 qpair failed and we were unable to recover it.
00:37:44.842 [2024-11-18 12:06:10.595034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.842 [2024-11-18 12:06:10.595083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.842 qpair failed and we were unable to recover it.
00:37:44.842 [2024-11-18 12:06:10.595318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.842 [2024-11-18 12:06:10.595356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.842 qpair failed and we were unable to recover it.
00:37:44.842 [2024-11-18 12:06:10.595507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.842 [2024-11-18 12:06:10.595542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.842 qpair failed and we were unable to recover it.
00:37:44.842 [2024-11-18 12:06:10.595650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.842 [2024-11-18 12:06:10.595684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.842 qpair failed and we were unable to recover it.
00:37:44.842 [2024-11-18 12:06:10.595820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.842 [2024-11-18 12:06:10.595862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.842 qpair failed and we were unable to recover it.
00:37:44.842 [2024-11-18 12:06:10.595972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.842 [2024-11-18 12:06:10.596012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.842 qpair failed and we were unable to recover it.
00:37:44.842 [2024-11-18 12:06:10.596145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.842 [2024-11-18 12:06:10.596179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.842 qpair failed and we were unable to recover it.
00:37:44.842 [2024-11-18 12:06:10.596344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.842 [2024-11-18 12:06:10.596394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.842 qpair failed and we were unable to recover it.
00:37:44.842 [2024-11-18 12:06:10.596543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.842 [2024-11-18 12:06:10.596577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.842 qpair failed and we were unable to recover it.
00:37:44.842 [2024-11-18 12:06:10.596718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.842 [2024-11-18 12:06:10.596752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.842 qpair failed and we were unable to recover it.
00:37:44.842 [2024-11-18 12:06:10.596864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.842 [2024-11-18 12:06:10.596898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.842 qpair failed and we were unable to recover it.
00:37:44.842 [2024-11-18 12:06:10.597051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.842 [2024-11-18 12:06:10.597088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.842 qpair failed and we were unable to recover it.
00:37:44.842 [2024-11-18 12:06:10.597233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.842 [2024-11-18 12:06:10.597280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.842 qpair failed and we were unable to recover it.
00:37:44.842 [2024-11-18 12:06:10.597498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.842 [2024-11-18 12:06:10.597552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.842 qpair failed and we were unable to recover it.
00:37:44.842 [2024-11-18 12:06:10.597657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.842 [2024-11-18 12:06:10.597691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.842 qpair failed and we were unable to recover it.
00:37:44.842 [2024-11-18 12:06:10.597808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.842 [2024-11-18 12:06:10.597842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.842 qpair failed and we were unable to recover it.
00:37:44.842 [2024-11-18 12:06:10.598048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.842 [2024-11-18 12:06:10.598106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.842 qpair failed and we were unable to recover it.
00:37:44.842 [2024-11-18 12:06:10.598239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.842 [2024-11-18 12:06:10.598292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.842 qpair failed and we were unable to recover it.
00:37:44.842 [2024-11-18 12:06:10.598475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.842 [2024-11-18 12:06:10.598526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.842 qpair failed and we were unable to recover it.
00:37:44.842 [2024-11-18 12:06:10.598634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.842 [2024-11-18 12:06:10.598668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.842 qpair failed and we were unable to recover it.
00:37:44.842 [2024-11-18 12:06:10.598800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.842 [2024-11-18 12:06:10.598834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.842 qpair failed and we were unable to recover it.
00:37:44.842 [2024-11-18 12:06:10.598971] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2780 is same with the state(6) to be set
00:37:44.842 [2024-11-18 12:06:10.599189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.842 [2024-11-18 12:06:10.599255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.842 qpair failed and we were unable to recover it.
00:37:44.842 [2024-11-18 12:06:10.599410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.842 [2024-11-18 12:06:10.599446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.842 qpair failed and we were unable to recover it.
00:37:44.842 [2024-11-18 12:06:10.599569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.842 [2024-11-18 12:06:10.599604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.842 qpair failed and we were unable to recover it.
00:37:44.842 [2024-11-18 12:06:10.599731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.842 [2024-11-18 12:06:10.599766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.842 qpair failed and we were unable to recover it.
00:37:44.842 [2024-11-18 12:06:10.599900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.842 [2024-11-18 12:06:10.599934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.842 qpair failed and we were unable to recover it.
00:37:44.842 [2024-11-18 12:06:10.600053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.842 [2024-11-18 12:06:10.600088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.842 qpair failed and we were unable to recover it.
00:37:44.842 [2024-11-18 12:06:10.600239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.842 [2024-11-18 12:06:10.600275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.842 qpair failed and we were unable to recover it.
00:37:44.842 [2024-11-18 12:06:10.600436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.842 [2024-11-18 12:06:10.600469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.842 qpair failed and we were unable to recover it.
00:37:44.842 [2024-11-18 12:06:10.600611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.842 [2024-11-18 12:06:10.600645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.842 qpair failed and we were unable to recover it.
00:37:44.842 [2024-11-18 12:06:10.600751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.842 [2024-11-18 12:06:10.600789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.842 qpair failed and we were unable to recover it.
00:37:44.843 [2024-11-18 12:06:10.600910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.843 [2024-11-18 12:06:10.600944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.843 qpair failed and we were unable to recover it.
00:37:44.843 [2024-11-18 12:06:10.601039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.843 [2024-11-18 12:06:10.601073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.843 qpair failed and we were unable to recover it.
00:37:44.843 [2024-11-18 12:06:10.601260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.843 [2024-11-18 12:06:10.601298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.843 qpair failed and we were unable to recover it.
00:37:44.843 [2024-11-18 12:06:10.601439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.843 [2024-11-18 12:06:10.601476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.843 qpair failed and we were unable to recover it.
00:37:44.843 [2024-11-18 12:06:10.601645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.843 [2024-11-18 12:06:10.601679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.843 qpair failed and we were unable to recover it.
00:37:44.843 [2024-11-18 12:06:10.601801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.843 [2024-11-18 12:06:10.601839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.843 qpair failed and we were unable to recover it.
00:37:44.843 [2024-11-18 12:06:10.601955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.843 [2024-11-18 12:06:10.601993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.843 qpair failed and we were unable to recover it.
00:37:44.843 [2024-11-18 12:06:10.602156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.843 [2024-11-18 12:06:10.602208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.843 qpair failed and we were unable to recover it.
00:37:44.843 [2024-11-18 12:06:10.602373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.843 [2024-11-18 12:06:10.602406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.843 qpair failed and we were unable to recover it.
00:37:44.843 [2024-11-18 12:06:10.602531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.843 [2024-11-18 12:06:10.602580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.843 qpair failed and we were unable to recover it.
00:37:44.843 [2024-11-18 12:06:10.602712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.843 [2024-11-18 12:06:10.602750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.843 qpair failed and we were unable to recover it.
00:37:44.843 [2024-11-18 12:06:10.602894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.843 [2024-11-18 12:06:10.602929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.843 qpair failed and we were unable to recover it.
00:37:44.843 [2024-11-18 12:06:10.603029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.843 [2024-11-18 12:06:10.603064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.843 qpair failed and we were unable to recover it.
00:37:44.843 [2024-11-18 12:06:10.603240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.843 [2024-11-18 12:06:10.603278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.843 qpair failed and we were unable to recover it.
00:37:44.843 [2024-11-18 12:06:10.603416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.843 [2024-11-18 12:06:10.603458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.843 qpair failed and we were unable to recover it.
00:37:44.843 [2024-11-18 12:06:10.603587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.843 [2024-11-18 12:06:10.603621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.843 qpair failed and we were unable to recover it.
00:37:44.843 [2024-11-18 12:06:10.603725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.843 [2024-11-18 12:06:10.603759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.843 qpair failed and we were unable to recover it.
00:37:44.843 [2024-11-18 12:06:10.603899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.843 [2024-11-18 12:06:10.603943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.843 qpair failed and we were unable to recover it.
00:37:44.843 [2024-11-18 12:06:10.604121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.843 [2024-11-18 12:06:10.604174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.843 qpair failed and we were unable to recover it.
00:37:44.843 [2024-11-18 12:06:10.604312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.843 [2024-11-18 12:06:10.604366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.843 qpair failed and we were unable to recover it.
00:37:44.843 [2024-11-18 12:06:10.604547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.843 [2024-11-18 12:06:10.604584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.843 qpair failed and we were unable to recover it.
00:37:44.843 [2024-11-18 12:06:10.604689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.843 [2024-11-18 12:06:10.604723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.843 qpair failed and we were unable to recover it.
00:37:44.843 [2024-11-18 12:06:10.604861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.843 [2024-11-18 12:06:10.604896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.843 qpair failed and we were unable to recover it.
00:37:44.843 [2024-11-18 12:06:10.605008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.843 [2024-11-18 12:06:10.605041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.843 qpair failed and we were unable to recover it.
00:37:44.843 [2024-11-18 12:06:10.605204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.843 [2024-11-18 12:06:10.605238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.843 qpair failed and we were unable to recover it.
00:37:44.843 [2024-11-18 12:06:10.605402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.843 [2024-11-18 12:06:10.605453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.843 qpair failed and we were unable to recover it.
00:37:44.843 [2024-11-18 12:06:10.605616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.843 [2024-11-18 12:06:10.605664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.843 qpair failed and we were unable to recover it.
00:37:44.843 [2024-11-18 12:06:10.605796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.843 [2024-11-18 12:06:10.605831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.843 qpair failed and we were unable to recover it.
00:37:44.843 [2024-11-18 12:06:10.605950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.843 [2024-11-18 12:06:10.606002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.843 qpair failed and we were unable to recover it.
00:37:44.843 [2024-11-18 12:06:10.606128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.843 [2024-11-18 12:06:10.606167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.843 qpair failed and we were unable to recover it.
00:37:44.843 [2024-11-18 12:06:10.606336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.843 [2024-11-18 12:06:10.606374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.843 qpair failed and we were unable to recover it.
00:37:44.843 [2024-11-18 12:06:10.606566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.843 [2024-11-18 12:06:10.606601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.843 qpair failed and we were unable to recover it.
00:37:44.843 [2024-11-18 12:06:10.606742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.843 [2024-11-18 12:06:10.606776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.843 qpair failed and we were unable to recover it.
00:37:44.844 [2024-11-18 12:06:10.606896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.844 [2024-11-18 12:06:10.606930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.844 qpair failed and we were unable to recover it.
00:37:44.844 [2024-11-18 12:06:10.607065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.844 [2024-11-18 12:06:10.607098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.844 qpair failed and we were unable to recover it.
00:37:44.844 [2024-11-18 12:06:10.607257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.844 [2024-11-18 12:06:10.607294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.844 qpair failed and we were unable to recover it.
00:37:44.844 [2024-11-18 12:06:10.607468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.844 [2024-11-18 12:06:10.607531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.844 qpair failed and we were unable to recover it.
00:37:44.844 [2024-11-18 12:06:10.607685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.844 [2024-11-18 12:06:10.607733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.844 qpair failed and we were unable to recover it.
00:37:44.844 [2024-11-18 12:06:10.607856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.844 [2024-11-18 12:06:10.607903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.844 qpair failed and we were unable to recover it.
00:37:44.844 [2024-11-18 12:06:10.608063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.844 [2024-11-18 12:06:10.608108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.844 qpair failed and we were unable to recover it.
00:37:44.844 [2024-11-18 12:06:10.608271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.844 [2024-11-18 12:06:10.608309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.844 qpair failed and we were unable to recover it.
00:37:44.844 [2024-11-18 12:06:10.608448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.844 [2024-11-18 12:06:10.608487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.844 qpair failed and we were unable to recover it.
00:37:44.844 [2024-11-18 12:06:10.608606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.844 [2024-11-18 12:06:10.608640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.844 qpair failed and we were unable to recover it.
00:37:44.844 [2024-11-18 12:06:10.608772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.844 [2024-11-18 12:06:10.608810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.844 qpair failed and we were unable to recover it.
00:37:44.844 [2024-11-18 12:06:10.608978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.844 [2024-11-18 12:06:10.609013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:44.844 qpair failed and we were unable to recover it.
00:37:44.844 [2024-11-18 12:06:10.609154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.844 [2024-11-18 12:06:10.609209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.844 qpair failed and we were unable to recover it.
00:37:44.844 [2024-11-18 12:06:10.609395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.844 [2024-11-18 12:06:10.609433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.844 qpair failed and we were unable to recover it. 00:37:44.844 [2024-11-18 12:06:10.609579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.844 [2024-11-18 12:06:10.609613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.844 qpair failed and we were unable to recover it. 00:37:44.844 [2024-11-18 12:06:10.609754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.844 [2024-11-18 12:06:10.609788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.844 qpair failed and we were unable to recover it. 00:37:44.844 [2024-11-18 12:06:10.609904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.844 [2024-11-18 12:06:10.609939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.844 qpair failed and we were unable to recover it. 00:37:44.844 [2024-11-18 12:06:10.610108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.844 [2024-11-18 12:06:10.610146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.844 qpair failed and we were unable to recover it. 
00:37:44.844 [2024-11-18 12:06:10.610318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.844 [2024-11-18 12:06:10.610358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.844 qpair failed and we were unable to recover it. 00:37:44.844 [2024-11-18 12:06:10.610536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.844 [2024-11-18 12:06:10.610572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.844 qpair failed and we were unable to recover it. 00:37:44.844 [2024-11-18 12:06:10.610683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.844 [2024-11-18 12:06:10.610717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.844 qpair failed and we were unable to recover it. 00:37:44.844 [2024-11-18 12:06:10.610835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.844 [2024-11-18 12:06:10.610874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.844 qpair failed and we were unable to recover it. 00:37:44.844 [2024-11-18 12:06:10.611014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.844 [2024-11-18 12:06:10.611050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.844 qpair failed and we were unable to recover it. 
00:37:44.844 [2024-11-18 12:06:10.611159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.844 [2024-11-18 12:06:10.611194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.844 qpair failed and we were unable to recover it. 00:37:44.844 [2024-11-18 12:06:10.611308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.844 [2024-11-18 12:06:10.611349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.844 qpair failed and we were unable to recover it. 00:37:44.844 [2024-11-18 12:06:10.611507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.844 [2024-11-18 12:06:10.611555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.844 qpair failed and we were unable to recover it. 00:37:44.844 [2024-11-18 12:06:10.611686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.844 [2024-11-18 12:06:10.611734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.844 qpair failed and we were unable to recover it. 00:37:44.844 [2024-11-18 12:06:10.611895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.844 [2024-11-18 12:06:10.611930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.844 qpair failed and we were unable to recover it. 
00:37:44.844 [2024-11-18 12:06:10.612033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.844 [2024-11-18 12:06:10.612078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.844 qpair failed and we were unable to recover it. 00:37:44.844 [2024-11-18 12:06:10.612221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.844 [2024-11-18 12:06:10.612256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.844 qpair failed and we were unable to recover it. 00:37:44.844 [2024-11-18 12:06:10.612389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.844 [2024-11-18 12:06:10.612423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.844 qpair failed and we were unable to recover it. 00:37:44.844 [2024-11-18 12:06:10.612563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.844 [2024-11-18 12:06:10.612599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.844 qpair failed and we were unable to recover it. 00:37:44.844 [2024-11-18 12:06:10.612717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.844 [2024-11-18 12:06:10.612752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.844 qpair failed and we were unable to recover it. 
00:37:44.844 [2024-11-18 12:06:10.612882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.844 [2024-11-18 12:06:10.612915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.844 qpair failed and we were unable to recover it. 00:37:44.844 [2024-11-18 12:06:10.613058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.844 [2024-11-18 12:06:10.613092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.844 qpair failed and we were unable to recover it. 00:37:44.844 [2024-11-18 12:06:10.613206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.844 [2024-11-18 12:06:10.613242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.844 qpair failed and we were unable to recover it. 00:37:44.844 [2024-11-18 12:06:10.613393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.844 [2024-11-18 12:06:10.613426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.844 qpair failed and we were unable to recover it. 00:37:44.844 [2024-11-18 12:06:10.613566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.845 [2024-11-18 12:06:10.613600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.845 qpair failed and we were unable to recover it. 
00:37:44.845 [2024-11-18 12:06:10.614443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.845 [2024-11-18 12:06:10.614500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.845 qpair failed and we were unable to recover it. 00:37:44.845 [2024-11-18 12:06:10.614647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.845 [2024-11-18 12:06:10.614683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.845 qpair failed and we were unable to recover it. 00:37:44.845 [2024-11-18 12:06:10.614820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.845 [2024-11-18 12:06:10.614855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.845 qpair failed and we were unable to recover it. 00:37:44.845 [2024-11-18 12:06:10.614990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.845 [2024-11-18 12:06:10.615023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.845 qpair failed and we were unable to recover it. 00:37:44.845 [2024-11-18 12:06:10.615158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.845 [2024-11-18 12:06:10.615199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.845 qpair failed and we were unable to recover it. 
00:37:44.845 [2024-11-18 12:06:10.615318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.845 [2024-11-18 12:06:10.615352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.845 qpair failed and we were unable to recover it. 00:37:44.845 [2024-11-18 12:06:10.615497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.845 [2024-11-18 12:06:10.615531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.845 qpair failed and we were unable to recover it. 00:37:44.845 [2024-11-18 12:06:10.615631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.845 [2024-11-18 12:06:10.615665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.845 qpair failed and we were unable to recover it. 00:37:44.845 [2024-11-18 12:06:10.615771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.845 [2024-11-18 12:06:10.615819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.845 qpair failed and we were unable to recover it. 00:37:44.845 [2024-11-18 12:06:10.615941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.845 [2024-11-18 12:06:10.615974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.845 qpair failed and we were unable to recover it. 
00:37:44.845 [2024-11-18 12:06:10.616116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.845 [2024-11-18 12:06:10.616150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.845 qpair failed and we were unable to recover it. 00:37:44.845 [2024-11-18 12:06:10.616289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.845 [2024-11-18 12:06:10.616323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.845 qpair failed and we were unable to recover it. 00:37:44.845 [2024-11-18 12:06:10.616486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.845 [2024-11-18 12:06:10.616545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.845 qpair failed and we were unable to recover it. 00:37:44.845 [2024-11-18 12:06:10.616685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.845 [2024-11-18 12:06:10.616733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.845 qpair failed and we were unable to recover it. 00:37:44.845 [2024-11-18 12:06:10.616880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.845 [2024-11-18 12:06:10.616917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.845 qpair failed and we were unable to recover it. 
00:37:44.845 [2024-11-18 12:06:10.617056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.845 [2024-11-18 12:06:10.617109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.845 qpair failed and we were unable to recover it. 00:37:44.845 [2024-11-18 12:06:10.617242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.845 [2024-11-18 12:06:10.617276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.845 qpair failed and we were unable to recover it. 00:37:44.845 [2024-11-18 12:06:10.617436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.845 [2024-11-18 12:06:10.617471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.845 qpair failed and we were unable to recover it. 00:37:44.845 [2024-11-18 12:06:10.617604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.845 [2024-11-18 12:06:10.617638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.845 qpair failed and we were unable to recover it. 00:37:44.845 [2024-11-18 12:06:10.617735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.845 [2024-11-18 12:06:10.617770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.845 qpair failed and we were unable to recover it. 
00:37:44.845 [2024-11-18 12:06:10.617895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.845 [2024-11-18 12:06:10.617929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.845 qpair failed and we were unable to recover it. 00:37:44.845 [2024-11-18 12:06:10.618038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.845 [2024-11-18 12:06:10.618073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.845 qpair failed and we were unable to recover it. 00:37:44.845 [2024-11-18 12:06:10.618182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.845 [2024-11-18 12:06:10.618216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.845 qpair failed and we were unable to recover it. 00:37:44.845 [2024-11-18 12:06:10.618365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.845 [2024-11-18 12:06:10.618412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.845 qpair failed and we were unable to recover it. 00:37:44.845 [2024-11-18 12:06:10.618548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.845 [2024-11-18 12:06:10.618584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.845 qpair failed and we were unable to recover it. 
00:37:44.845 [2024-11-18 12:06:10.618702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.845 [2024-11-18 12:06:10.618737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.845 qpair failed and we were unable to recover it. 00:37:44.845 [2024-11-18 12:06:10.618845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.845 [2024-11-18 12:06:10.618886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.845 qpair failed and we were unable to recover it. 00:37:44.845 [2024-11-18 12:06:10.619022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.845 [2024-11-18 12:06:10.619055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.845 qpair failed and we were unable to recover it. 00:37:44.845 [2024-11-18 12:06:10.619174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.845 [2024-11-18 12:06:10.619210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.845 qpair failed and we were unable to recover it. 00:37:44.845 [2024-11-18 12:06:10.619375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.845 [2024-11-18 12:06:10.619412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.845 qpair failed and we were unable to recover it. 
00:37:44.845 [2024-11-18 12:06:10.619548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.845 [2024-11-18 12:06:10.619597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.845 qpair failed and we were unable to recover it. 00:37:44.845 [2024-11-18 12:06:10.619732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.845 [2024-11-18 12:06:10.619772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.845 qpair failed and we were unable to recover it. 00:37:44.845 [2024-11-18 12:06:10.619935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.845 [2024-11-18 12:06:10.619971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.845 qpair failed and we were unable to recover it. 00:37:44.845 [2024-11-18 12:06:10.620173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.845 [2024-11-18 12:06:10.620208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.845 qpair failed and we were unable to recover it. 00:37:44.845 [2024-11-18 12:06:10.620316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.845 [2024-11-18 12:06:10.620351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.845 qpair failed and we were unable to recover it. 
00:37:44.846 [2024-11-18 12:06:10.620505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.846 [2024-11-18 12:06:10.620543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.846 qpair failed and we were unable to recover it. 00:37:44.846 [2024-11-18 12:06:10.620676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.846 [2024-11-18 12:06:10.620723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.846 qpair failed and we were unable to recover it. 00:37:44.846 [2024-11-18 12:06:10.620892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.846 [2024-11-18 12:06:10.620957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.846 qpair failed and we were unable to recover it. 00:37:44.846 [2024-11-18 12:06:10.621167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.846 [2024-11-18 12:06:10.621211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.846 qpair failed and we were unable to recover it. 00:37:44.846 [2024-11-18 12:06:10.621341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.846 [2024-11-18 12:06:10.621379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.846 qpair failed and we were unable to recover it. 
00:37:44.846 [2024-11-18 12:06:10.621525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.846 [2024-11-18 12:06:10.621579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.846 qpair failed and we were unable to recover it. 00:37:44.846 [2024-11-18 12:06:10.621714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.846 [2024-11-18 12:06:10.621747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.846 qpair failed and we were unable to recover it. 00:37:44.846 [2024-11-18 12:06:10.621908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.846 [2024-11-18 12:06:10.621952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.846 qpair failed and we were unable to recover it. 00:37:44.846 [2024-11-18 12:06:10.622091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.846 [2024-11-18 12:06:10.622129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.846 qpair failed and we were unable to recover it. 00:37:44.846 [2024-11-18 12:06:10.622251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.846 [2024-11-18 12:06:10.622287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.846 qpair failed and we were unable to recover it. 
00:37:44.846 [2024-11-18 12:06:10.622422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.846 [2024-11-18 12:06:10.622466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.846 qpair failed and we were unable to recover it. 00:37:44.846 [2024-11-18 12:06:10.622599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.846 [2024-11-18 12:06:10.622634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.846 qpair failed and we were unable to recover it. 00:37:44.846 [2024-11-18 12:06:10.622747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.846 [2024-11-18 12:06:10.622782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.846 qpair failed and we were unable to recover it. 00:37:44.846 [2024-11-18 12:06:10.622911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.846 [2024-11-18 12:06:10.622951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.846 qpair failed and we were unable to recover it. 00:37:44.846 [2024-11-18 12:06:10.623071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.846 [2024-11-18 12:06:10.623109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.846 qpair failed and we were unable to recover it. 
00:37:44.846 [2024-11-18 12:06:10.623227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.846 [2024-11-18 12:06:10.623263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.846 qpair failed and we were unable to recover it. 00:37:44.846 [2024-11-18 12:06:10.623422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.846 [2024-11-18 12:06:10.623470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.846 qpair failed and we were unable to recover it. 00:37:44.846 [2024-11-18 12:06:10.623627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.846 [2024-11-18 12:06:10.623662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.846 qpair failed and we were unable to recover it. 00:37:44.846 [2024-11-18 12:06:10.623763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.846 [2024-11-18 12:06:10.623808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.846 qpair failed and we were unable to recover it. 00:37:44.846 [2024-11-18 12:06:10.623926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.846 [2024-11-18 12:06:10.623961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.846 qpair failed and we were unable to recover it. 
00:37:44.846 [2024-11-18 12:06:10.624104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.846 [2024-11-18 12:06:10.624139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.846 qpair failed and we were unable to recover it. 00:37:44.846 [2024-11-18 12:06:10.624280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.846 [2024-11-18 12:06:10.624343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.846 qpair failed and we were unable to recover it. 00:37:44.846 [2024-11-18 12:06:10.624450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.846 [2024-11-18 12:06:10.624486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.846 qpair failed and we were unable to recover it. 00:37:44.846 [2024-11-18 12:06:10.624637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.846 [2024-11-18 12:06:10.624673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.846 qpair failed and we were unable to recover it. 00:37:44.846 [2024-11-18 12:06:10.624778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.846 [2024-11-18 12:06:10.624812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.846 qpair failed and we were unable to recover it. 
00:37:44.846 [2024-11-18 12:06:10.624982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.846 [2024-11-18 12:06:10.625017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.846 qpair failed and we were unable to recover it. 00:37:44.846 [2024-11-18 12:06:10.625173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.846 [2024-11-18 12:06:10.625221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.846 qpair failed and we were unable to recover it. 00:37:44.846 [2024-11-18 12:06:10.625364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.846 [2024-11-18 12:06:10.625401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.846 qpair failed and we were unable to recover it. 00:37:44.846 [2024-11-18 12:06:10.625547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.846 [2024-11-18 12:06:10.625582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.846 qpair failed and we were unable to recover it. 00:37:44.846 [2024-11-18 12:06:10.625691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.846 [2024-11-18 12:06:10.625724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.846 qpair failed and we were unable to recover it. 
00:37:44.846 [2024-11-18 12:06:10.625852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.846 [2024-11-18 12:06:10.625899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.846 qpair failed and we were unable to recover it. 00:37:44.846 [2024-11-18 12:06:10.626054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.846 [2024-11-18 12:06:10.626101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.846 qpair failed and we were unable to recover it. 00:37:44.846 [2024-11-18 12:06:10.626229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.846 [2024-11-18 12:06:10.626262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.846 qpair failed and we were unable to recover it. 00:37:44.846 [2024-11-18 12:06:10.626402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.846 [2024-11-18 12:06:10.626436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.846 qpair failed and we were unable to recover it. 00:37:44.846 [2024-11-18 12:06:10.626560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.846 [2024-11-18 12:06:10.626595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.846 qpair failed and we were unable to recover it. 
00:37:44.846 [2024-11-18 12:06:10.626702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.846 [2024-11-18 12:06:10.626736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.846 qpair failed and we were unable to recover it. 00:37:44.846 [2024-11-18 12:06:10.626884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.846 [2024-11-18 12:06:10.626933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.846 qpair failed and we were unable to recover it. 00:37:44.847 [2024-11-18 12:06:10.627060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.847 [2024-11-18 12:06:10.627096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.847 qpair failed and we were unable to recover it. 00:37:44.847 [2024-11-18 12:06:10.627226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.847 [2024-11-18 12:06:10.627262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.847 qpair failed and we were unable to recover it. 00:37:44.847 [2024-11-18 12:06:10.627368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.847 [2024-11-18 12:06:10.627402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.847 qpair failed and we were unable to recover it. 
00:37:44.847 [2024-11-18 12:06:10.627534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.847 [2024-11-18 12:06:10.627573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.847 qpair failed and we were unable to recover it. 00:37:44.847 [2024-11-18 12:06:10.627695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.847 [2024-11-18 12:06:10.627732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.847 qpair failed and we were unable to recover it. 00:37:44.847 [2024-11-18 12:06:10.627866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.847 [2024-11-18 12:06:10.627926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.847 qpair failed and we were unable to recover it. 00:37:44.847 [2024-11-18 12:06:10.628059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.847 [2024-11-18 12:06:10.628125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.847 qpair failed and we were unable to recover it. 00:37:44.847 [2024-11-18 12:06:10.628274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.847 [2024-11-18 12:06:10.628309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.847 qpair failed and we were unable to recover it. 
00:37:44.847 [2024-11-18 12:06:10.628450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.847 [2024-11-18 12:06:10.628484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.847 qpair failed and we were unable to recover it. 00:37:44.847 [2024-11-18 12:06:10.628616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.847 [2024-11-18 12:06:10.628652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.847 qpair failed and we were unable to recover it. 00:37:44.847 [2024-11-18 12:06:10.628758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.847 [2024-11-18 12:06:10.628810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.847 qpair failed and we were unable to recover it. 00:37:44.847 [2024-11-18 12:06:10.628959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.847 [2024-11-18 12:06:10.629002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.847 qpair failed and we were unable to recover it. 00:37:44.847 [2024-11-18 12:06:10.629209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.847 [2024-11-18 12:06:10.629248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.847 qpair failed and we were unable to recover it. 
00:37:44.847 [2024-11-18 12:06:10.629424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.847 [2024-11-18 12:06:10.629459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.847 qpair failed and we were unable to recover it. 00:37:44.847 [2024-11-18 12:06:10.629596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.847 [2024-11-18 12:06:10.629645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.847 qpair failed and we were unable to recover it. 00:37:44.847 [2024-11-18 12:06:10.629746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.847 [2024-11-18 12:06:10.629784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.847 qpair failed and we were unable to recover it. 00:37:44.847 [2024-11-18 12:06:10.629954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.847 [2024-11-18 12:06:10.629987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.847 qpair failed and we were unable to recover it. 00:37:44.847 [2024-11-18 12:06:10.630148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.847 [2024-11-18 12:06:10.630183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.847 qpair failed and we were unable to recover it. 
00:37:44.847 [2024-11-18 12:06:10.630326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.847 [2024-11-18 12:06:10.630365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.847 qpair failed and we were unable to recover it. 00:37:44.847 [2024-11-18 12:06:10.630472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.847 [2024-11-18 12:06:10.630517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.847 qpair failed and we were unable to recover it. 00:37:44.847 [2024-11-18 12:06:10.630628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.847 [2024-11-18 12:06:10.630662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.847 qpair failed and we were unable to recover it. 00:37:44.847 [2024-11-18 12:06:10.630786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.847 [2024-11-18 12:06:10.630821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.847 qpair failed and we were unable to recover it. 00:37:44.847 [2024-11-18 12:06:10.631026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.847 [2024-11-18 12:06:10.631064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.847 qpair failed and we were unable to recover it. 
00:37:44.847 [2024-11-18 12:06:10.631206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.847 [2024-11-18 12:06:10.631243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.847 qpair failed and we were unable to recover it. 00:37:44.847 [2024-11-18 12:06:10.631403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.847 [2024-11-18 12:06:10.631461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.847 qpair failed and we were unable to recover it. 00:37:44.847 [2024-11-18 12:06:10.631624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.847 [2024-11-18 12:06:10.631664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.847 qpair failed and we were unable to recover it. 00:37:44.847 [2024-11-18 12:06:10.631792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.847 [2024-11-18 12:06:10.631827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.847 qpair failed and we were unable to recover it. 00:37:44.847 [2024-11-18 12:06:10.631942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.847 [2024-11-18 12:06:10.631980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.847 qpair failed and we were unable to recover it. 
00:37:44.847 [2024-11-18 12:06:10.632092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.847 [2024-11-18 12:06:10.632127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.847 qpair failed and we were unable to recover it. 00:37:44.847 [2024-11-18 12:06:10.632256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.847 [2024-11-18 12:06:10.632313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.847 qpair failed and we were unable to recover it. 00:37:44.847 [2024-11-18 12:06:10.632474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.847 [2024-11-18 12:06:10.632520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.847 qpair failed and we were unable to recover it. 00:37:44.847 [2024-11-18 12:06:10.632649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.847 [2024-11-18 12:06:10.632684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.847 qpair failed and we were unable to recover it. 00:37:44.847 [2024-11-18 12:06:10.632837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.847 [2024-11-18 12:06:10.632893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.847 qpair failed and we were unable to recover it. 
00:37:44.847 [2024-11-18 12:06:10.633045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.847 [2024-11-18 12:06:10.633083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.847 qpair failed and we were unable to recover it. 00:37:44.847 [2024-11-18 12:06:10.633199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.847 [2024-11-18 12:06:10.633237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.847 qpair failed and we were unable to recover it. 00:37:44.847 [2024-11-18 12:06:10.633379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.847 [2024-11-18 12:06:10.633414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.847 qpair failed and we were unable to recover it. 00:37:44.847 [2024-11-18 12:06:10.633532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.847 [2024-11-18 12:06:10.633567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.847 qpair failed and we were unable to recover it. 00:37:44.847 [2024-11-18 12:06:10.633695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.848 [2024-11-18 12:06:10.633743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.848 qpair failed and we were unable to recover it. 
00:37:44.848 [2024-11-18 12:06:10.633974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.848 [2024-11-18 12:06:10.634014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.848 qpair failed and we were unable to recover it. 00:37:44.848 [2024-11-18 12:06:10.634179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.848 [2024-11-18 12:06:10.634218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.848 qpair failed and we were unable to recover it. 00:37:44.848 [2024-11-18 12:06:10.634401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.848 [2024-11-18 12:06:10.634436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.848 qpair failed and we were unable to recover it. 00:37:44.848 [2024-11-18 12:06:10.634566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.848 [2024-11-18 12:06:10.634601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.848 qpair failed and we were unable to recover it. 00:37:44.848 [2024-11-18 12:06:10.634714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.848 [2024-11-18 12:06:10.634749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.848 qpair failed and we were unable to recover it. 
00:37:44.848 [2024-11-18 12:06:10.634918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.848 [2024-11-18 12:06:10.634957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.848 qpair failed and we were unable to recover it. 00:37:44.848 [2024-11-18 12:06:10.635068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.848 [2024-11-18 12:06:10.635131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.848 qpair failed and we were unable to recover it. 00:37:44.848 [2024-11-18 12:06:10.635286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.848 [2024-11-18 12:06:10.635324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.848 qpair failed and we were unable to recover it. 00:37:44.848 [2024-11-18 12:06:10.635505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.848 [2024-11-18 12:06:10.635541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.848 qpair failed and we were unable to recover it. 00:37:44.848 [2024-11-18 12:06:10.635673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.848 [2024-11-18 12:06:10.635708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.848 qpair failed and we were unable to recover it. 
00:37:44.848 [2024-11-18 12:06:10.635839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.848 [2024-11-18 12:06:10.635877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.848 qpair failed and we were unable to recover it. 00:37:44.848 [2024-11-18 12:06:10.636057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.848 [2024-11-18 12:06:10.636119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.848 qpair failed and we were unable to recover it. 00:37:44.848 [2024-11-18 12:06:10.636254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.848 [2024-11-18 12:06:10.636288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.848 qpair failed and we were unable to recover it. 00:37:44.848 [2024-11-18 12:06:10.636430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.848 [2024-11-18 12:06:10.636469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.848 qpair failed and we were unable to recover it. 00:37:44.848 [2024-11-18 12:06:10.636631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.848 [2024-11-18 12:06:10.636680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.848 qpair failed and we were unable to recover it. 
00:37:44.848 [2024-11-18 12:06:10.636810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.848 [2024-11-18 12:06:10.636871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.848 qpair failed and we were unable to recover it. 00:37:44.848 [2024-11-18 12:06:10.637020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.848 [2024-11-18 12:06:10.637062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.848 qpair failed and we were unable to recover it. 00:37:44.848 [2024-11-18 12:06:10.637300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.848 [2024-11-18 12:06:10.637358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.848 qpair failed and we were unable to recover it. 00:37:44.848 [2024-11-18 12:06:10.637500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.848 [2024-11-18 12:06:10.637552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.848 qpair failed and we were unable to recover it. 00:37:44.848 [2024-11-18 12:06:10.637671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.848 [2024-11-18 12:06:10.637706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.848 qpair failed and we were unable to recover it. 
00:37:44.848 [2024-11-18 12:06:10.637816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.848 [2024-11-18 12:06:10.637851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.848 qpair failed and we were unable to recover it. 00:37:44.848 [2024-11-18 12:06:10.638008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.848 [2024-11-18 12:06:10.638042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.848 qpair failed and we were unable to recover it. 00:37:44.848 [2024-11-18 12:06:10.638200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.848 [2024-11-18 12:06:10.638234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.848 qpair failed and we were unable to recover it. 00:37:44.848 [2024-11-18 12:06:10.638373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.848 [2024-11-18 12:06:10.638407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.848 qpair failed and we were unable to recover it. 00:37:44.848 [2024-11-18 12:06:10.638543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.848 [2024-11-18 12:06:10.638578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.848 qpair failed and we were unable to recover it. 
00:37:44.848 [2024-11-18 12:06:10.638686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.848 [2024-11-18 12:06:10.638740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.848 qpair failed and we were unable to recover it. 00:37:44.848 [2024-11-18 12:06:10.638921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.848 [2024-11-18 12:06:10.638958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.848 qpair failed and we were unable to recover it. 00:37:44.848 [2024-11-18 12:06:10.639123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.848 [2024-11-18 12:06:10.639177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.848 qpair failed and we were unable to recover it. 00:37:44.848 [2024-11-18 12:06:10.639333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.848 [2024-11-18 12:06:10.639368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.848 qpair failed and we were unable to recover it. 00:37:44.849 [2024-11-18 12:06:10.639514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.849 [2024-11-18 12:06:10.639548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.849 qpair failed and we were unable to recover it. 
00:37:44.849 [2024-11-18 12:06:10.639659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.849 [2024-11-18 12:06:10.639694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.849 qpair failed and we were unable to recover it. 00:37:44.849 [2024-11-18 12:06:10.639816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.849 [2024-11-18 12:06:10.639879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.849 qpair failed and we were unable to recover it. 00:37:44.849 [2024-11-18 12:06:10.640047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.849 [2024-11-18 12:06:10.640082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.849 qpair failed and we were unable to recover it. 00:37:44.849 [2024-11-18 12:06:10.640203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.849 [2024-11-18 12:06:10.640238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.849 qpair failed and we were unable to recover it. 00:37:44.849 [2024-11-18 12:06:10.640419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.849 [2024-11-18 12:06:10.640457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.849 qpair failed and we were unable to recover it. 
00:37:44.849 [2024-11-18 12:06:10.640633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.849 [2024-11-18 12:06:10.640667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.849 qpair failed and we were unable to recover it. 00:37:44.849 [2024-11-18 12:06:10.640823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.849 [2024-11-18 12:06:10.640857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.849 qpair failed and we were unable to recover it. 00:37:44.849 [2024-11-18 12:06:10.641759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.849 [2024-11-18 12:06:10.641827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.849 qpair failed and we were unable to recover it. 00:37:44.849 [2024-11-18 12:06:10.642033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.849 [2024-11-18 12:06:10.642071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.849 qpair failed and we were unable to recover it. 00:37:44.849 [2024-11-18 12:06:10.642232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.849 [2024-11-18 12:06:10.642283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.849 qpair failed and we were unable to recover it. 
00:37:44.849 [2024-11-18 12:06:10.642422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.849 [2024-11-18 12:06:10.642455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.849 qpair failed and we were unable to recover it. 00:37:44.849 [2024-11-18 12:06:10.642600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.849 [2024-11-18 12:06:10.642634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.849 qpair failed and we were unable to recover it. 00:37:44.849 [2024-11-18 12:06:10.642748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.849 [2024-11-18 12:06:10.642792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.849 qpair failed and we were unable to recover it. 00:37:44.849 [2024-11-18 12:06:10.642969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.849 [2024-11-18 12:06:10.643023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.849 qpair failed and we were unable to recover it. 00:37:44.849 [2024-11-18 12:06:10.643256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.849 [2024-11-18 12:06:10.643296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.849 qpair failed and we were unable to recover it. 
00:37:44.849 [2024-11-18 12:06:10.643448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.849 [2024-11-18 12:06:10.643504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.849 qpair failed and we were unable to recover it. 00:37:44.849 [2024-11-18 12:06:10.643669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.849 [2024-11-18 12:06:10.643703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.849 qpair failed and we were unable to recover it. 00:37:44.849 [2024-11-18 12:06:10.643836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.849 [2024-11-18 12:06:10.643869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.849 qpair failed and we were unable to recover it. 00:37:44.849 [2024-11-18 12:06:10.643993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.849 [2024-11-18 12:06:10.644046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.849 qpair failed and we were unable to recover it. 00:37:44.849 [2024-11-18 12:06:10.644198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.849 [2024-11-18 12:06:10.644236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.849 qpair failed and we were unable to recover it. 
00:37:44.849 [2024-11-18 12:06:10.644384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.849 [2024-11-18 12:06:10.644419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.849 qpair failed and we were unable to recover it. 00:37:44.849 [2024-11-18 12:06:10.644575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.849 [2024-11-18 12:06:10.644609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.849 qpair failed and we were unable to recover it. 00:37:44.849 [2024-11-18 12:06:10.644742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.849 [2024-11-18 12:06:10.644777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.849 qpair failed and we were unable to recover it. 00:37:44.849 [2024-11-18 12:06:10.644934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.849 [2024-11-18 12:06:10.644971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.849 qpair failed and we were unable to recover it. 00:37:44.849 [2024-11-18 12:06:10.645108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.849 [2024-11-18 12:06:10.645162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.849 qpair failed and we were unable to recover it. 
00:37:44.849 [2024-11-18 12:06:10.645327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.849 [2024-11-18 12:06:10.645364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.849 qpair failed and we were unable to recover it. 00:37:44.849 [2024-11-18 12:06:10.645551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.849 [2024-11-18 12:06:10.645600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.849 qpair failed and we were unable to recover it. 00:37:44.849 [2024-11-18 12:06:10.645733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.849 [2024-11-18 12:06:10.645782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.849 qpair failed and we were unable to recover it. 00:37:44.850 [2024-11-18 12:06:10.645947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.850 [2024-11-18 12:06:10.646006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.850 qpair failed and we were unable to recover it. 00:37:44.850 [2024-11-18 12:06:10.646171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.850 [2024-11-18 12:06:10.646222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.850 qpair failed and we were unable to recover it. 
00:37:44.850 [2024-11-18 12:06:10.646345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.850 [2024-11-18 12:06:10.646380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.850 qpair failed and we were unable to recover it. 00:37:44.850 [2024-11-18 12:06:10.646505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.850 [2024-11-18 12:06:10.646540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.850 qpair failed and we were unable to recover it. 00:37:44.850 [2024-11-18 12:06:10.646644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.850 [2024-11-18 12:06:10.646681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.850 qpair failed and we were unable to recover it. 00:37:44.850 [2024-11-18 12:06:10.646875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.850 [2024-11-18 12:06:10.646913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.850 qpair failed and we were unable to recover it. 00:37:44.850 [2024-11-18 12:06:10.647034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.850 [2024-11-18 12:06:10.647069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.850 qpair failed and we were unable to recover it. 
00:37:44.850 [2024-11-18 12:06:10.647181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.850 [2024-11-18 12:06:10.647216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.850 qpair failed and we were unable to recover it. 00:37:44.850 [2024-11-18 12:06:10.647338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.850 [2024-11-18 12:06:10.647372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.850 qpair failed and we were unable to recover it. 00:37:44.850 [2024-11-18 12:06:10.647524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.850 [2024-11-18 12:06:10.647559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.850 qpair failed and we were unable to recover it. 00:37:44.850 [2024-11-18 12:06:10.647706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.850 [2024-11-18 12:06:10.647740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.850 qpair failed and we were unable to recover it. 00:37:44.850 [2024-11-18 12:06:10.647862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.850 [2024-11-18 12:06:10.647899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.850 qpair failed and we were unable to recover it. 
00:37:44.850 [2024-11-18 12:06:10.648066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.850 [2024-11-18 12:06:10.648115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.850 qpair failed and we were unable to recover it. 00:37:44.850 [2024-11-18 12:06:10.648262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.850 [2024-11-18 12:06:10.648303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.850 qpair failed and we were unable to recover it. 00:37:44.850 [2024-11-18 12:06:10.648449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.850 [2024-11-18 12:06:10.648485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.850 qpair failed and we were unable to recover it. 00:37:44.850 [2024-11-18 12:06:10.648617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.850 [2024-11-18 12:06:10.648651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.850 qpair failed and we were unable to recover it. 00:37:44.850 [2024-11-18 12:06:10.648755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.850 [2024-11-18 12:06:10.648802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.850 qpair failed and we were unable to recover it. 
00:37:44.850 [2024-11-18 12:06:10.648972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.850 [2024-11-18 12:06:10.649010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.850 qpair failed and we were unable to recover it. 00:37:44.850 [2024-11-18 12:06:10.649192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.850 [2024-11-18 12:06:10.649225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.850 qpair failed and we were unable to recover it. 00:37:44.850 [2024-11-18 12:06:10.649367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.850 [2024-11-18 12:06:10.649402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.850 qpair failed and we were unable to recover it. 00:37:44.850 [2024-11-18 12:06:10.649520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.850 [2024-11-18 12:06:10.649555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.850 qpair failed and we were unable to recover it. 00:37:44.850 [2024-11-18 12:06:10.649702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.850 [2024-11-18 12:06:10.649737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.850 qpair failed and we were unable to recover it. 
00:37:44.850 [2024-11-18 12:06:10.649894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.850 [2024-11-18 12:06:10.649957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.850 qpair failed and we were unable to recover it. 00:37:44.850 [2024-11-18 12:06:10.650100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.850 [2024-11-18 12:06:10.650141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.850 qpair failed and we were unable to recover it. 00:37:44.850 [2024-11-18 12:06:10.650352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.850 [2024-11-18 12:06:10.650389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.850 qpair failed and we were unable to recover it. 00:37:44.850 [2024-11-18 12:06:10.650566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.850 [2024-11-18 12:06:10.650615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.850 qpair failed and we were unable to recover it. 00:37:44.850 [2024-11-18 12:06:10.650737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.850 [2024-11-18 12:06:10.650790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.850 qpair failed and we were unable to recover it. 
00:37:44.850 [2024-11-18 12:06:10.650991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.850 [2024-11-18 12:06:10.651034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.850 qpair failed and we were unable to recover it. 00:37:44.850 [2024-11-18 12:06:10.652146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.850 [2024-11-18 12:06:10.652199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.850 qpair failed and we were unable to recover it. 00:37:44.850 [2024-11-18 12:06:10.652364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.850 [2024-11-18 12:06:10.652403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.850 qpair failed and we were unable to recover it. 00:37:44.850 [2024-11-18 12:06:10.652556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.850 [2024-11-18 12:06:10.652592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.850 qpair failed and we were unable to recover it. 00:37:44.850 [2024-11-18 12:06:10.652728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.850 [2024-11-18 12:06:10.652778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.850 qpair failed and we were unable to recover it. 
00:37:44.850 [2024-11-18 12:06:10.652938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.850 [2024-11-18 12:06:10.652976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.850 qpair failed and we were unable to recover it. 00:37:44.850 [2024-11-18 12:06:10.653152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.851 [2024-11-18 12:06:10.653195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.851 qpair failed and we were unable to recover it. 00:37:44.851 [2024-11-18 12:06:10.653369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.851 [2024-11-18 12:06:10.653406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.851 qpair failed and we were unable to recover it. 00:37:44.851 [2024-11-18 12:06:10.653546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.851 [2024-11-18 12:06:10.653580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.851 qpair failed and we were unable to recover it. 00:37:44.851 [2024-11-18 12:06:10.653687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.851 [2024-11-18 12:06:10.653721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.851 qpair failed and we were unable to recover it. 
00:37:44.851 [2024-11-18 12:06:10.653882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.851 [2024-11-18 12:06:10.653928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.851 qpair failed and we were unable to recover it. 00:37:44.851 [2024-11-18 12:06:10.654064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.851 [2024-11-18 12:06:10.654111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.851 qpair failed and we were unable to recover it. 00:37:44.851 [2024-11-18 12:06:10.654291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.851 [2024-11-18 12:06:10.654329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.851 qpair failed and we were unable to recover it. 00:37:44.851 [2024-11-18 12:06:10.654483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.851 [2024-11-18 12:06:10.654555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.851 qpair failed and we were unable to recover it. 00:37:44.851 [2024-11-18 12:06:10.654672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.851 [2024-11-18 12:06:10.654706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.851 qpair failed and we were unable to recover it. 
00:37:44.851 [2024-11-18 12:06:10.654845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.851 [2024-11-18 12:06:10.654886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.851 qpair failed and we were unable to recover it. 00:37:44.851 [2024-11-18 12:06:10.655022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.851 [2024-11-18 12:06:10.655067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.851 qpair failed and we were unable to recover it. 00:37:44.851 [2024-11-18 12:06:10.655257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.851 [2024-11-18 12:06:10.655291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.851 qpair failed and we were unable to recover it. 00:37:44.851 [2024-11-18 12:06:10.655499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.851 [2024-11-18 12:06:10.655535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.851 qpair failed and we were unable to recover it. 00:37:44.851 [2024-11-18 12:06:10.655645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.851 [2024-11-18 12:06:10.655678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.851 qpair failed and we were unable to recover it. 
00:37:44.851 [2024-11-18 12:06:10.655837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.851 [2024-11-18 12:06:10.655879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.851 qpair failed and we were unable to recover it. 00:37:44.851 [2024-11-18 12:06:10.656008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.851 [2024-11-18 12:06:10.656067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.851 qpair failed and we were unable to recover it. 00:37:44.851 [2024-11-18 12:06:10.656225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.851 [2024-11-18 12:06:10.656263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.851 qpair failed and we were unable to recover it. 00:37:44.851 [2024-11-18 12:06:10.656384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.851 [2024-11-18 12:06:10.656436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.851 qpair failed and we were unable to recover it. 00:37:44.851 [2024-11-18 12:06:10.656557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.851 [2024-11-18 12:06:10.656592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.851 qpair failed and we were unable to recover it. 
00:37:44.851 [2024-11-18 12:06:10.656703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.851 [2024-11-18 12:06:10.656737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.851 qpair failed and we were unable to recover it. 00:37:44.851 [2024-11-18 12:06:10.656936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.851 [2024-11-18 12:06:10.656980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.851 qpair failed and we were unable to recover it. 00:37:44.851 [2024-11-18 12:06:10.657146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.851 [2024-11-18 12:06:10.657184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.851 qpair failed and we were unable to recover it. 00:37:44.851 [2024-11-18 12:06:10.657304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.851 [2024-11-18 12:06:10.657353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.851 qpair failed and we were unable to recover it. 00:37:44.851 [2024-11-18 12:06:10.657513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.851 [2024-11-18 12:06:10.657565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.851 qpair failed and we were unable to recover it. 
00:37:44.851 [2024-11-18 12:06:10.657676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.851 [2024-11-18 12:06:10.657711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.851 qpair failed and we were unable to recover it. 00:37:44.851 [2024-11-18 12:06:10.657855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.851 [2024-11-18 12:06:10.657910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.851 qpair failed and we were unable to recover it. 00:37:44.851 [2024-11-18 12:06:10.658077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.851 [2024-11-18 12:06:10.658127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.851 qpair failed and we were unable to recover it. 00:37:44.851 [2024-11-18 12:06:10.658333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.851 [2024-11-18 12:06:10.658393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.851 qpair failed and we were unable to recover it. 00:37:44.851 [2024-11-18 12:06:10.658538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.851 [2024-11-18 12:06:10.658573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.851 qpair failed and we were unable to recover it. 
00:37:44.851 [2024-11-18 12:06:10.658713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.851 [2024-11-18 12:06:10.658762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.851 qpair failed and we were unable to recover it. 00:37:44.851 [2024-11-18 12:06:10.658888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.851 [2024-11-18 12:06:10.658925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.851 qpair failed and we were unable to recover it. 00:37:44.851 [2024-11-18 12:06:10.659043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.851 [2024-11-18 12:06:10.659079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.851 qpair failed and we were unable to recover it. 00:37:44.851 [2024-11-18 12:06:10.659203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.851 [2024-11-18 12:06:10.659238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.851 qpair failed and we were unable to recover it. 00:37:44.852 [2024-11-18 12:06:10.659354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.852 [2024-11-18 12:06:10.659402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.852 qpair failed and we were unable to recover it. 
00:37:44.852 [2024-11-18 12:06:10.659546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.852 [2024-11-18 12:06:10.659599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.852 qpair failed and we were unable to recover it. 00:37:44.852 [2024-11-18 12:06:10.659714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.852 [2024-11-18 12:06:10.659751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.852 qpair failed and we were unable to recover it. 00:37:44.852 [2024-11-18 12:06:10.659862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.852 [2024-11-18 12:06:10.659915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.852 qpair failed and we were unable to recover it. 00:37:44.852 [2024-11-18 12:06:10.660075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.852 [2024-11-18 12:06:10.660113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.852 qpair failed and we were unable to recover it. 00:37:44.852 [2024-11-18 12:06:10.660255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.852 [2024-11-18 12:06:10.660291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.852 qpair failed and we were unable to recover it. 
00:37:44.852 [2024-11-18 12:06:10.660402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.852 [2024-11-18 12:06:10.660436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.852 qpair failed and we were unable to recover it. 00:37:44.852 [2024-11-18 12:06:10.660549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.852 [2024-11-18 12:06:10.660584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.852 qpair failed and we were unable to recover it. 00:37:44.852 [2024-11-18 12:06:10.660702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.852 [2024-11-18 12:06:10.660737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.852 qpair failed and we were unable to recover it. 00:37:44.852 [2024-11-18 12:06:10.660894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.852 [2024-11-18 12:06:10.660929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.852 qpair failed and we were unable to recover it. 00:37:44.852 [2024-11-18 12:06:10.661063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.852 [2024-11-18 12:06:10.661117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.852 qpair failed and we were unable to recover it. 
00:37:44.852 [2024-11-18 12:06:10.661230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.852 [2024-11-18 12:06:10.661265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.852 qpair failed and we were unable to recover it. 00:37:44.852 [2024-11-18 12:06:10.661404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.852 [2024-11-18 12:06:10.661444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.852 qpair failed and we were unable to recover it. 00:37:44.852 [2024-11-18 12:06:10.661590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.852 [2024-11-18 12:06:10.661626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.852 qpair failed and we were unable to recover it. 00:37:44.852 [2024-11-18 12:06:10.661736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.852 [2024-11-18 12:06:10.661771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.852 qpair failed and we were unable to recover it. 00:37:44.852 [2024-11-18 12:06:10.661953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.852 [2024-11-18 12:06:10.661988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.852 qpair failed and we were unable to recover it. 
00:37:44.852 [2024-11-18 12:06:10.662128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.852 [2024-11-18 12:06:10.662163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.852 qpair failed and we were unable to recover it. 00:37:44.852 [2024-11-18 12:06:10.662308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.852 [2024-11-18 12:06:10.662343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.852 qpair failed and we were unable to recover it. 00:37:44.852 [2024-11-18 12:06:10.662455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.852 [2024-11-18 12:06:10.662488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.852 qpair failed and we were unable to recover it. 00:37:44.852 [2024-11-18 12:06:10.662614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.852 [2024-11-18 12:06:10.662649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.852 qpair failed and we were unable to recover it. 00:37:44.852 [2024-11-18 12:06:10.662789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.852 [2024-11-18 12:06:10.662828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.852 qpair failed and we were unable to recover it. 
00:37:44.852 [2024-11-18 12:06:10.662977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.852 [2024-11-18 12:06:10.663018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.852 qpair failed and we were unable to recover it. 00:37:44.852 [2024-11-18 12:06:10.663152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.852 [2024-11-18 12:06:10.663190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.852 qpair failed and we were unable to recover it. 00:37:44.852 [2024-11-18 12:06:10.663363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.852 [2024-11-18 12:06:10.663402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.852 qpair failed and we were unable to recover it. 00:37:44.852 [2024-11-18 12:06:10.663509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.852 [2024-11-18 12:06:10.663546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.852 qpair failed and we were unable to recover it. 00:37:44.852 [2024-11-18 12:06:10.663701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.852 [2024-11-18 12:06:10.663755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.852 qpair failed and we were unable to recover it. 
00:37:44.852 [2024-11-18 12:06:10.663920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.852 [2024-11-18 12:06:10.663960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.852 qpair failed and we were unable to recover it. 00:37:44.852 [2024-11-18 12:06:10.664159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.852 [2024-11-18 12:06:10.664206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.852 qpair failed and we were unable to recover it. 00:37:44.852 [2024-11-18 12:06:10.664336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.852 [2024-11-18 12:06:10.664370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.852 qpair failed and we were unable to recover it. 00:37:44.852 [2024-11-18 12:06:10.664516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.852 [2024-11-18 12:06:10.664552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.853 qpair failed and we were unable to recover it. 00:37:44.853 [2024-11-18 12:06:10.664684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.853 [2024-11-18 12:06:10.664718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.853 qpair failed and we were unable to recover it. 
00:37:44.853 [2024-11-18 12:06:10.664886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.853 [2024-11-18 12:06:10.664923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.853 qpair failed and we were unable to recover it. 00:37:44.853 [2024-11-18 12:06:10.665076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.853 [2024-11-18 12:06:10.665114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.853 qpair failed and we were unable to recover it. 00:37:44.853 [2024-11-18 12:06:10.665294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.853 [2024-11-18 12:06:10.665332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.853 qpair failed and we were unable to recover it. 00:37:44.853 [2024-11-18 12:06:10.665474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.853 [2024-11-18 12:06:10.665521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.853 qpair failed and we were unable to recover it. 00:37:44.853 [2024-11-18 12:06:10.665641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.853 [2024-11-18 12:06:10.665676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.853 qpair failed and we were unable to recover it. 
00:37:44.853 [2024-11-18 12:06:10.665826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.853 [2024-11-18 12:06:10.665879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.853 qpair failed and we were unable to recover it. 00:37:44.853 [2024-11-18 12:06:10.666057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.853 [2024-11-18 12:06:10.666098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.853 qpair failed and we were unable to recover it. 00:37:44.853 [2024-11-18 12:06:10.666258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.853 [2024-11-18 12:06:10.666308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.853 qpair failed and we were unable to recover it. 00:37:44.853 [2024-11-18 12:06:10.666447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.853 [2024-11-18 12:06:10.666495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.853 qpair failed and we were unable to recover it. 00:37:44.853 [2024-11-18 12:06:10.666636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.853 [2024-11-18 12:06:10.666671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.853 qpair failed and we were unable to recover it. 
00:37:44.853 [2024-11-18 12:06:10.666776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.853 [2024-11-18 12:06:10.666844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.853 qpair failed and we were unable to recover it. 00:37:44.853 [2024-11-18 12:06:10.666973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.853 [2024-11-18 12:06:10.667020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.853 qpair failed and we were unable to recover it. 00:37:44.853 [2024-11-18 12:06:10.667166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.853 [2024-11-18 12:06:10.667203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.853 qpair failed and we were unable to recover it. 00:37:44.853 [2024-11-18 12:06:10.667326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.853 [2024-11-18 12:06:10.667369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.853 qpair failed and we were unable to recover it. 00:37:44.853 [2024-11-18 12:06:10.667523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.853 [2024-11-18 12:06:10.667558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.853 qpair failed and we were unable to recover it. 
00:37:44.853 [2024-11-18 12:06:10.667667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.853 [2024-11-18 12:06:10.667701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.853 qpair failed and we were unable to recover it. 00:37:44.853 [2024-11-18 12:06:10.667850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.853 [2024-11-18 12:06:10.667919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.853 qpair failed and we were unable to recover it. 00:37:44.853 [2024-11-18 12:06:10.668058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.853 [2024-11-18 12:06:10.668098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.853 qpair failed and we were unable to recover it. 00:37:44.853 [2024-11-18 12:06:10.668216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.853 [2024-11-18 12:06:10.668274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.853 qpair failed and we were unable to recover it. 00:37:44.853 [2024-11-18 12:06:10.668425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.853 [2024-11-18 12:06:10.668464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.853 qpair failed and we were unable to recover it. 
00:37:44.853 [2024-11-18 12:06:10.668619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.853 [2024-11-18 12:06:10.668655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.853 qpair failed and we were unable to recover it. 00:37:44.853 [2024-11-18 12:06:10.668806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.853 [2024-11-18 12:06:10.668854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.853 qpair failed and we were unable to recover it. 00:37:44.853 [2024-11-18 12:06:10.669048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.853 [2024-11-18 12:06:10.669104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.853 qpair failed and we were unable to recover it. 00:37:44.853 [2024-11-18 12:06:10.669262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.853 [2024-11-18 12:06:10.669320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.853 qpair failed and we were unable to recover it. 00:37:44.853 [2024-11-18 12:06:10.669533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.853 [2024-11-18 12:06:10.669569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.853 qpair failed and we were unable to recover it. 
00:37:44.853 [2024-11-18 12:06:10.669677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.853 [2024-11-18 12:06:10.669712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.853 qpair failed and we were unable to recover it. 00:37:44.853 [2024-11-18 12:06:10.669939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.853 [2024-11-18 12:06:10.670006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.853 qpair failed and we were unable to recover it. 00:37:44.853 [2024-11-18 12:06:10.670150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.853 [2024-11-18 12:06:10.670204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.853 qpair failed and we were unable to recover it. 00:37:44.853 [2024-11-18 12:06:10.670351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.853 [2024-11-18 12:06:10.670389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.853 qpair failed and we were unable to recover it. 00:37:44.853 [2024-11-18 12:06:10.670560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.853 [2024-11-18 12:06:10.670594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.853 qpair failed and we were unable to recover it. 
00:37:44.853 [2024-11-18 12:06:10.670701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.853 [2024-11-18 12:06:10.670735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.853 qpair failed and we were unable to recover it. 00:37:44.853 [2024-11-18 12:06:10.670916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.853 [2024-11-18 12:06:10.670974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.853 qpair failed and we were unable to recover it. 00:37:44.853 [2024-11-18 12:06:10.671153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.853 [2024-11-18 12:06:10.671211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.853 qpair failed and we were unable to recover it. 00:37:44.853 [2024-11-18 12:06:10.671357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.853 [2024-11-18 12:06:10.671418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.853 qpair failed and we were unable to recover it. 00:37:44.853 [2024-11-18 12:06:10.671558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.853 [2024-11-18 12:06:10.671592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.853 qpair failed and we were unable to recover it. 
00:37:44.853 [2024-11-18 12:06:10.671720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.853 [2024-11-18 12:06:10.671758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.853 qpair failed and we were unable to recover it. 00:37:44.853 [2024-11-18 12:06:10.671945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.853 [2024-11-18 12:06:10.672010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.853 qpair failed and we were unable to recover it. 00:37:44.853 [2024-11-18 12:06:10.672201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.853 [2024-11-18 12:06:10.672242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.853 qpair failed and we were unable to recover it. 00:37:44.853 [2024-11-18 12:06:10.672407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.853 [2024-11-18 12:06:10.672442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.853 qpair failed and we were unable to recover it. 00:37:44.853 [2024-11-18 12:06:10.672575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.853 [2024-11-18 12:06:10.672610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.853 qpair failed and we were unable to recover it. 
00:37:44.853 [2024-11-18 12:06:10.672762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.853 [2024-11-18 12:06:10.672808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.853 qpair failed and we were unable to recover it. 00:37:44.853 [2024-11-18 12:06:10.673029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.853 [2024-11-18 12:06:10.673066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.853 qpair failed and we were unable to recover it. 00:37:44.853 [2024-11-18 12:06:10.673276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.853 [2024-11-18 12:06:10.673350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.853 qpair failed and we were unable to recover it. 00:37:44.853 [2024-11-18 12:06:10.673561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.853 [2024-11-18 12:06:10.673597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.853 qpair failed and we were unable to recover it. 00:37:44.853 [2024-11-18 12:06:10.673710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.853 [2024-11-18 12:06:10.673744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.853 qpair failed and we were unable to recover it. 
00:37:44.853 [2024-11-18 12:06:10.673855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.854 [2024-11-18 12:06:10.673899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.854 qpair failed and we were unable to recover it. 00:37:44.854 [2024-11-18 12:06:10.674037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.854 [2024-11-18 12:06:10.674070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.854 qpair failed and we were unable to recover it. 00:37:44.854 [2024-11-18 12:06:10.674201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.854 [2024-11-18 12:06:10.674248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.854 qpair failed and we were unable to recover it. 00:37:44.854 [2024-11-18 12:06:10.674404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.854 [2024-11-18 12:06:10.674458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.854 qpair failed and we were unable to recover it. 00:37:44.854 [2024-11-18 12:06:10.674658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.854 [2024-11-18 12:06:10.674695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.854 qpair failed and we were unable to recover it. 
00:37:44.854 [2024-11-18 12:06:10.674897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.854 [2024-11-18 12:06:10.674968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.854 qpair failed and we were unable to recover it. 00:37:44.854 [2024-11-18 12:06:10.675178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.854 [2024-11-18 12:06:10.675217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.854 qpair failed and we were unable to recover it. 00:37:44.854 [2024-11-18 12:06:10.675336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.854 [2024-11-18 12:06:10.675373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.854 qpair failed and we were unable to recover it. 00:37:44.854 [2024-11-18 12:06:10.675503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.854 [2024-11-18 12:06:10.675559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.854 qpair failed and we were unable to recover it. 00:37:44.854 [2024-11-18 12:06:10.675692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.854 [2024-11-18 12:06:10.675726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.854 qpair failed and we were unable to recover it. 
00:37:44.854 [2024-11-18 12:06:10.675836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.854 [2024-11-18 12:06:10.675870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.854 qpair failed and we were unable to recover it. 00:37:44.854 [2024-11-18 12:06:10.676011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.854 [2024-11-18 12:06:10.676048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.854 qpair failed and we were unable to recover it. 00:37:44.854 [2024-11-18 12:06:10.676208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.854 [2024-11-18 12:06:10.676245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.854 qpair failed and we were unable to recover it. 00:37:44.854 [2024-11-18 12:06:10.676380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.854 [2024-11-18 12:06:10.676414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.854 qpair failed and we were unable to recover it. 00:37:44.854 [2024-11-18 12:06:10.676546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.854 [2024-11-18 12:06:10.676582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.854 qpair failed and we were unable to recover it. 
00:37:44.854 [2024-11-18 12:06:10.676694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.854 [2024-11-18 12:06:10.676727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.854 qpair failed and we were unable to recover it. 00:37:44.854 [2024-11-18 12:06:10.676867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.854 [2024-11-18 12:06:10.676930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.854 qpair failed and we were unable to recover it. 00:37:44.854 [2024-11-18 12:06:10.677046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.854 [2024-11-18 12:06:10.677084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.854 qpair failed and we were unable to recover it. 00:37:44.854 [2024-11-18 12:06:10.677202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.854 [2024-11-18 12:06:10.677240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.854 qpair failed and we were unable to recover it. 00:37:44.854 [2024-11-18 12:06:10.677386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.854 [2024-11-18 12:06:10.677420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.854 qpair failed and we were unable to recover it. 
00:37:44.854 [2024-11-18 12:06:10.677559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.854 [2024-11-18 12:06:10.677594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.854 qpair failed and we were unable to recover it. 00:37:44.854 [2024-11-18 12:06:10.677705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.854 [2024-11-18 12:06:10.677738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.854 qpair failed and we were unable to recover it. 00:37:44.854 [2024-11-18 12:06:10.677873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.854 [2024-11-18 12:06:10.677916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.854 qpair failed and we were unable to recover it. 00:37:44.854 [2024-11-18 12:06:10.678025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.854 [2024-11-18 12:06:10.678059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.854 qpair failed and we were unable to recover it. 00:37:44.854 [2024-11-18 12:06:10.678215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.854 [2024-11-18 12:06:10.678286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.854 qpair failed and we were unable to recover it. 
00:37:44.854 [2024-11-18 12:06:10.678441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.854 [2024-11-18 12:06:10.678497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.854 qpair failed and we were unable to recover it. 00:37:44.854 [2024-11-18 12:06:10.678645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.854 [2024-11-18 12:06:10.678681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.854 qpair failed and we were unable to recover it. 00:37:44.854 [2024-11-18 12:06:10.678879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.854 [2024-11-18 12:06:10.678918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.854 qpair failed and we were unable to recover it. 00:37:44.854 [2024-11-18 12:06:10.679130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.854 [2024-11-18 12:06:10.679181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.854 qpair failed and we were unable to recover it. 00:37:44.854 [2024-11-18 12:06:10.679333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.854 [2024-11-18 12:06:10.679371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.854 qpair failed and we were unable to recover it. 
00:37:44.854 [2024-11-18 12:06:10.679530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.854 [2024-11-18 12:06:10.679566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.854 qpair failed and we were unable to recover it. 00:37:44.854 [2024-11-18 12:06:10.679673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.854 [2024-11-18 12:06:10.679708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.854 qpair failed and we were unable to recover it. 00:37:44.854 [2024-11-18 12:06:10.679810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.854 [2024-11-18 12:06:10.679844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.854 qpair failed and we were unable to recover it. 00:37:45.149 [2024-11-18 12:06:10.680018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.149 [2024-11-18 12:06:10.680057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.149 qpair failed and we were unable to recover it. 00:37:45.149 [2024-11-18 12:06:10.680278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.149 [2024-11-18 12:06:10.680316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.149 qpair failed and we were unable to recover it. 
00:37:45.149 [2024-11-18 12:06:10.680458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.149 [2024-11-18 12:06:10.680518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.149 qpair failed and we were unable to recover it. 00:37:45.149 [2024-11-18 12:06:10.680663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.149 [2024-11-18 12:06:10.680698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.149 qpair failed and we were unable to recover it. 00:37:45.149 [2024-11-18 12:06:10.680874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.149 [2024-11-18 12:06:10.680908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.149 qpair failed and we were unable to recover it. 00:37:45.149 [2024-11-18 12:06:10.681038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.149 [2024-11-18 12:06:10.681075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.149 qpair failed and we were unable to recover it. 00:37:45.149 [2024-11-18 12:06:10.681206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.149 [2024-11-18 12:06:10.681245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.149 qpair failed and we were unable to recover it. 
00:37:45.149 [2024-11-18 12:06:10.681450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.149 [2024-11-18 12:06:10.681487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.149 qpair failed and we were unable to recover it. 00:37:45.149 [2024-11-18 12:06:10.681635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.149 [2024-11-18 12:06:10.681669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.149 qpair failed and we were unable to recover it. 00:37:45.149 [2024-11-18 12:06:10.681779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.149 [2024-11-18 12:06:10.681822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.149 qpair failed and we were unable to recover it. 00:37:45.149 [2024-11-18 12:06:10.681935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.149 [2024-11-18 12:06:10.681996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.149 qpair failed and we were unable to recover it. 00:37:45.149 [2024-11-18 12:06:10.682147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.149 [2024-11-18 12:06:10.682184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.149 qpair failed and we were unable to recover it. 
00:37:45.149 [2024-11-18 12:06:10.682330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.149 [2024-11-18 12:06:10.682374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.149 qpair failed and we were unable to recover it.
00:37:45.150 [2024-11-18 12:06:10.682530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.150 [2024-11-18 12:06:10.682565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.150 qpair failed and we were unable to recover it.
00:37:45.150 [2024-11-18 12:06:10.682677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.150 [2024-11-18 12:06:10.682711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.150 qpair failed and we were unable to recover it.
00:37:45.150 [2024-11-18 12:06:10.682825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.150 [2024-11-18 12:06:10.682859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.150 qpair failed and we were unable to recover it.
00:37:45.150 [2024-11-18 12:06:10.683049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.150 [2024-11-18 12:06:10.683105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.150 qpair failed and we were unable to recover it.
00:37:45.150 [2024-11-18 12:06:10.683271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.150 [2024-11-18 12:06:10.683331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.150 qpair failed and we were unable to recover it.
00:37:45.150 [2024-11-18 12:06:10.683456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.150 [2024-11-18 12:06:10.683508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.150 qpair failed and we were unable to recover it.
00:37:45.150 [2024-11-18 12:06:10.683642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.150 [2024-11-18 12:06:10.683676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.150 qpair failed and we were unable to recover it.
00:37:45.150 [2024-11-18 12:06:10.683783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.150 [2024-11-18 12:06:10.683817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.150 qpair failed and we were unable to recover it.
00:37:45.150 [2024-11-18 12:06:10.683958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.150 [2024-11-18 12:06:10.683998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.150 qpair failed and we were unable to recover it.
00:37:45.150 [2024-11-18 12:06:10.684205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.150 [2024-11-18 12:06:10.684239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.150 qpair failed and we were unable to recover it.
00:37:45.150 [2024-11-18 12:06:10.684376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.150 [2024-11-18 12:06:10.684414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.150 qpair failed and we were unable to recover it.
00:37:45.150 [2024-11-18 12:06:10.684568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.150 [2024-11-18 12:06:10.684603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.150 qpair failed and we were unable to recover it.
00:37:45.150 [2024-11-18 12:06:10.684731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.150 [2024-11-18 12:06:10.684768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.150 qpair failed and we were unable to recover it.
00:37:45.150 [2024-11-18 12:06:10.684920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.150 [2024-11-18 12:06:10.684958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.150 qpair failed and we were unable to recover it.
00:37:45.150 [2024-11-18 12:06:10.685078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.150 [2024-11-18 12:06:10.685117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.150 qpair failed and we were unable to recover it.
00:37:45.150 [2024-11-18 12:06:10.685259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.150 [2024-11-18 12:06:10.685298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.150 qpair failed and we were unable to recover it.
00:37:45.150 [2024-11-18 12:06:10.685446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.150 [2024-11-18 12:06:10.685485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.150 qpair failed and we were unable to recover it.
00:37:45.150 [2024-11-18 12:06:10.685638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.150 [2024-11-18 12:06:10.685672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.150 qpair failed and we were unable to recover it.
00:37:45.150 [2024-11-18 12:06:10.685809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.150 [2024-11-18 12:06:10.685869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.150 qpair failed and we were unable to recover it.
00:37:45.150 [2024-11-18 12:06:10.686042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.150 [2024-11-18 12:06:10.686080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.150 qpair failed and we were unable to recover it.
00:37:45.150 [2024-11-18 12:06:10.686197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.150 [2024-11-18 12:06:10.686237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.150 qpair failed and we were unable to recover it.
00:37:45.150 [2024-11-18 12:06:10.686413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.150 [2024-11-18 12:06:10.686460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.150 qpair failed and we were unable to recover it.
00:37:45.150 [2024-11-18 12:06:10.686618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.150 [2024-11-18 12:06:10.686667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.150 qpair failed and we were unable to recover it.
00:37:45.150 [2024-11-18 12:06:10.686790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.150 [2024-11-18 12:06:10.686825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.150 qpair failed and we were unable to recover it.
00:37:45.150 [2024-11-18 12:06:10.686957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.150 [2024-11-18 12:06:10.686991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.150 qpair failed and we were unable to recover it.
00:37:45.150 [2024-11-18 12:06:10.687160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.150 [2024-11-18 12:06:10.687223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.150 qpair failed and we were unable to recover it.
00:37:45.150 [2024-11-18 12:06:10.687378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.150 [2024-11-18 12:06:10.687422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.150 qpair failed and we were unable to recover it.
00:37:45.150 [2024-11-18 12:06:10.687582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.150 [2024-11-18 12:06:10.687617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.150 qpair failed and we were unable to recover it.
00:37:45.150 [2024-11-18 12:06:10.687720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.150 [2024-11-18 12:06:10.687754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.150 qpair failed and we were unable to recover it.
00:37:45.150 [2024-11-18 12:06:10.687934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.150 [2024-11-18 12:06:10.687990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.150 qpair failed and we were unable to recover it.
00:37:45.150 [2024-11-18 12:06:10.688179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.150 [2024-11-18 12:06:10.688241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.150 qpair failed and we were unable to recover it.
00:37:45.150 [2024-11-18 12:06:10.688385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.150 [2024-11-18 12:06:10.688418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.150 qpair failed and we were unable to recover it.
00:37:45.150 [2024-11-18 12:06:10.688548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.150 [2024-11-18 12:06:10.688582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.150 qpair failed and we were unable to recover it.
00:37:45.150 [2024-11-18 12:06:10.688740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.150 [2024-11-18 12:06:10.688774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.150 qpair failed and we were unable to recover it.
00:37:45.150 [2024-11-18 12:06:10.688957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.150 [2024-11-18 12:06:10.689032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.150 qpair failed and we were unable to recover it.
00:37:45.150 [2024-11-18 12:06:10.689177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.150 [2024-11-18 12:06:10.689233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.150 qpair failed and we were unable to recover it.
00:37:45.150 [2024-11-18 12:06:10.689383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.151 [2024-11-18 12:06:10.689419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.151 qpair failed and we were unable to recover it.
00:37:45.151 [2024-11-18 12:06:10.689608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.151 [2024-11-18 12:06:10.689646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.151 qpair failed and we were unable to recover it.
00:37:45.151 [2024-11-18 12:06:10.689783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.151 [2024-11-18 12:06:10.689834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.151 qpair failed and we were unable to recover it.
00:37:45.151 [2024-11-18 12:06:10.689999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.151 [2024-11-18 12:06:10.690038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.151 qpair failed and we were unable to recover it.
00:37:45.151 [2024-11-18 12:06:10.690186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.151 [2024-11-18 12:06:10.690221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.151 qpair failed and we were unable to recover it.
00:37:45.151 [2024-11-18 12:06:10.690361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.151 [2024-11-18 12:06:10.690396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.151 qpair failed and we were unable to recover it.
00:37:45.151 [2024-11-18 12:06:10.690549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.151 [2024-11-18 12:06:10.690598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.151 qpair failed and we were unable to recover it.
00:37:45.151 [2024-11-18 12:06:10.690724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.151 [2024-11-18 12:06:10.690775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.151 qpair failed and we were unable to recover it.
00:37:45.151 [2024-11-18 12:06:10.690906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.151 [2024-11-18 12:06:10.690942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.151 qpair failed and we were unable to recover it.
00:37:45.151 [2024-11-18 12:06:10.691076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.151 [2024-11-18 12:06:10.691110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.151 qpair failed and we were unable to recover it.
00:37:45.151 [2024-11-18 12:06:10.691242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.151 [2024-11-18 12:06:10.691276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.151 qpair failed and we were unable to recover it.
00:37:45.151 [2024-11-18 12:06:10.691429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.151 [2024-11-18 12:06:10.691480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.151 qpair failed and we were unable to recover it.
00:37:45.151 [2024-11-18 12:06:10.691603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.151 [2024-11-18 12:06:10.691639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.151 qpair failed and we were unable to recover it.
00:37:45.151 [2024-11-18 12:06:10.691778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.151 [2024-11-18 12:06:10.691819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.151 qpair failed and we were unable to recover it.
00:37:45.151 [2024-11-18 12:06:10.691996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.151 [2024-11-18 12:06:10.692075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.151 qpair failed and we were unable to recover it.
00:37:45.151 [2024-11-18 12:06:10.692285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.151 [2024-11-18 12:06:10.692339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.151 qpair failed and we were unable to recover it.
00:37:45.151 [2024-11-18 12:06:10.692514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.151 [2024-11-18 12:06:10.692550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.151 qpair failed and we were unable to recover it.
00:37:45.151 [2024-11-18 12:06:10.692722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.151 [2024-11-18 12:06:10.692785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.151 qpair failed and we were unable to recover it.
00:37:45.151 [2024-11-18 12:06:10.692983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.151 [2024-11-18 12:06:10.693040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.151 qpair failed and we were unable to recover it.
00:37:45.151 [2024-11-18 12:06:10.693218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.151 [2024-11-18 12:06:10.693281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.151 qpair failed and we were unable to recover it.
00:37:45.151 [2024-11-18 12:06:10.693432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.151 [2024-11-18 12:06:10.693480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.151 qpair failed and we were unable to recover it.
00:37:45.151 [2024-11-18 12:06:10.693642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.151 [2024-11-18 12:06:10.693691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.151 qpair failed and we were unable to recover it.
00:37:45.151 [2024-11-18 12:06:10.693864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.151 [2024-11-18 12:06:10.693962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.151 qpair failed and we were unable to recover it.
00:37:45.151 [2024-11-18 12:06:10.694085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.151 [2024-11-18 12:06:10.694124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.151 qpair failed and we were unable to recover it.
00:37:45.151 [2024-11-18 12:06:10.694350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.151 [2024-11-18 12:06:10.694409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.151 qpair failed and we were unable to recover it.
00:37:45.151 [2024-11-18 12:06:10.694528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.151 [2024-11-18 12:06:10.694565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.151 qpair failed and we were unable to recover it.
00:37:45.151 [2024-11-18 12:06:10.694744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.151 [2024-11-18 12:06:10.694785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.151 qpair failed and we were unable to recover it.
00:37:45.151 [2024-11-18 12:06:10.694908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.151 [2024-11-18 12:06:10.694961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.151 qpair failed and we were unable to recover it.
00:37:45.151 [2024-11-18 12:06:10.695166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.151 [2024-11-18 12:06:10.695228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.151 qpair failed and we were unable to recover it.
00:37:45.151 [2024-11-18 12:06:10.695368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.151 [2024-11-18 12:06:10.695404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.151 qpair failed and we were unable to recover it.
00:37:45.151 [2024-11-18 12:06:10.695540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.151 [2024-11-18 12:06:10.695610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.151 qpair failed and we were unable to recover it.
00:37:45.151 [2024-11-18 12:06:10.695733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.151 [2024-11-18 12:06:10.695779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.151 qpair failed and we were unable to recover it.
00:37:45.151 [2024-11-18 12:06:10.695946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.151 [2024-11-18 12:06:10.696001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.151 qpair failed and we were unable to recover it.
00:37:45.151 [2024-11-18 12:06:10.696182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.151 [2024-11-18 12:06:10.696219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.151 qpair failed and we were unable to recover it.
00:37:45.151 [2024-11-18 12:06:10.696335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.151 [2024-11-18 12:06:10.696373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.151 qpair failed and we were unable to recover it.
00:37:45.151 [2024-11-18 12:06:10.696551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.151 [2024-11-18 12:06:10.696599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.151 qpair failed and we were unable to recover it.
00:37:45.151 [2024-11-18 12:06:10.696776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.151 [2024-11-18 12:06:10.696839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.151 qpair failed and we were unable to recover it.
00:37:45.151 [2024-11-18 12:06:10.696979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.152 [2024-11-18 12:06:10.697040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.152 qpair failed and we were unable to recover it.
00:37:45.152 [2024-11-18 12:06:10.697146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.152 [2024-11-18 12:06:10.697181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.152 qpair failed and we were unable to recover it.
00:37:45.152 [2024-11-18 12:06:10.697317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.152 [2024-11-18 12:06:10.697351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.152 qpair failed and we were unable to recover it.
00:37:45.152 [2024-11-18 12:06:10.697465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.152 [2024-11-18 12:06:10.697515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.152 qpair failed and we were unable to recover it.
00:37:45.152 [2024-11-18 12:06:10.697676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.152 [2024-11-18 12:06:10.697716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.152 qpair failed and we were unable to recover it.
00:37:45.152 [2024-11-18 12:06:10.697880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.152 [2024-11-18 12:06:10.697918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.152 qpair failed and we were unable to recover it.
00:37:45.152 [2024-11-18 12:06:10.698092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.152 [2024-11-18 12:06:10.698156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.152 qpair failed and we were unable to recover it.
00:37:45.152 [2024-11-18 12:06:10.698278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.152 [2024-11-18 12:06:10.698316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.152 qpair failed and we were unable to recover it.
00:37:45.152 [2024-11-18 12:06:10.698466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.152 [2024-11-18 12:06:10.698545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.152 qpair failed and we were unable to recover it.
00:37:45.152 [2024-11-18 12:06:10.698717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.152 [2024-11-18 12:06:10.698765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.152 qpair failed and we were unable to recover it.
00:37:45.152 [2024-11-18 12:06:10.698950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.152 [2024-11-18 12:06:10.699010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.152 qpair failed and we were unable to recover it.
00:37:45.152 [2024-11-18 12:06:10.699239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.152 [2024-11-18 12:06:10.699306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.152 qpair failed and we were unable to recover it.
00:37:45.152 [2024-11-18 12:06:10.699406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.152 [2024-11-18 12:06:10.699440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.152 qpair failed and we were unable to recover it.
00:37:45.152 [2024-11-18 12:06:10.699594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.152 [2024-11-18 12:06:10.699648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.152 qpair failed and we were unable to recover it.
00:37:45.152 [2024-11-18 12:06:10.699865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.152 [2024-11-18 12:06:10.699934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.152 qpair failed and we were unable to recover it.
00:37:45.152 [2024-11-18 12:06:10.700103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.152 [2024-11-18 12:06:10.700166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.152 qpair failed and we were unable to recover it.
00:37:45.152 [2024-11-18 12:06:10.700318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.152 [2024-11-18 12:06:10.700368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.152 qpair failed and we were unable to recover it.
00:37:45.152 [2024-11-18 12:06:10.700508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.152 [2024-11-18 12:06:10.700550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.152 qpair failed and we were unable to recover it.
00:37:45.152 [2024-11-18 12:06:10.700683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.152 [2024-11-18 12:06:10.700717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.152 qpair failed and we were unable to recover it.
00:37:45.152 [2024-11-18 12:06:10.700880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.152 [2024-11-18 12:06:10.700918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.152 qpair failed and we were unable to recover it.
00:37:45.152 [2024-11-18 12:06:10.701139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.152 [2024-11-18 12:06:10.701176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.152 qpair failed and we were unable to recover it.
00:37:45.152 [2024-11-18 12:06:10.701327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.152 [2024-11-18 12:06:10.701374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.152 qpair failed and we were unable to recover it. 00:37:45.152 [2024-11-18 12:06:10.701498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.152 [2024-11-18 12:06:10.701551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.152 qpair failed and we were unable to recover it. 00:37:45.152 [2024-11-18 12:06:10.701717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.152 [2024-11-18 12:06:10.701766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.152 qpair failed and we were unable to recover it. 00:37:45.152 [2024-11-18 12:06:10.701912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.152 [2024-11-18 12:06:10.701977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.152 qpair failed and we were unable to recover it. 00:37:45.152 [2024-11-18 12:06:10.702142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.152 [2024-11-18 12:06:10.702194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.152 qpair failed and we were unable to recover it. 
00:37:45.152 [2024-11-18 12:06:10.702359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.152 [2024-11-18 12:06:10.702394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.152 qpair failed and we were unable to recover it. 00:37:45.152 [2024-11-18 12:06:10.702515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.152 [2024-11-18 12:06:10.702550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.152 qpair failed and we were unable to recover it. 00:37:45.152 [2024-11-18 12:06:10.702688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.152 [2024-11-18 12:06:10.702732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.152 qpair failed and we were unable to recover it. 00:37:45.152 [2024-11-18 12:06:10.702871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.152 [2024-11-18 12:06:10.702907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.152 qpair failed and we were unable to recover it. 00:37:45.152 [2024-11-18 12:06:10.703019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.152 [2024-11-18 12:06:10.703059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.152 qpair failed and we were unable to recover it. 
00:37:45.152 [2024-11-18 12:06:10.703199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.152 [2024-11-18 12:06:10.703234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.152 qpair failed and we were unable to recover it. 00:37:45.152 [2024-11-18 12:06:10.703361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.152 [2024-11-18 12:06:10.703394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.152 qpair failed and we were unable to recover it. 00:37:45.152 [2024-11-18 12:06:10.703556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.152 [2024-11-18 12:06:10.703604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.152 qpair failed and we were unable to recover it. 00:37:45.152 [2024-11-18 12:06:10.703749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.152 [2024-11-18 12:06:10.703802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.152 qpair failed and we were unable to recover it. 00:37:45.152 [2024-11-18 12:06:10.704003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.152 [2024-11-18 12:06:10.704067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.152 qpair failed and we were unable to recover it. 
00:37:45.152 [2024-11-18 12:06:10.704202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.152 [2024-11-18 12:06:10.704236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.152 qpair failed and we were unable to recover it. 00:37:45.152 [2024-11-18 12:06:10.704375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.153 [2024-11-18 12:06:10.704421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.153 qpair failed and we were unable to recover it. 00:37:45.153 [2024-11-18 12:06:10.704590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.153 [2024-11-18 12:06:10.704629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.153 qpair failed and we were unable to recover it. 00:37:45.153 [2024-11-18 12:06:10.704803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.153 [2024-11-18 12:06:10.704905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.153 qpair failed and we were unable to recover it. 00:37:45.153 [2024-11-18 12:06:10.705061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.153 [2024-11-18 12:06:10.705116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.153 qpair failed and we were unable to recover it. 
00:37:45.153 [2024-11-18 12:06:10.705252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.153 [2024-11-18 12:06:10.705288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.153 qpair failed and we were unable to recover it. 00:37:45.153 [2024-11-18 12:06:10.705400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.153 [2024-11-18 12:06:10.705436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.153 qpair failed and we were unable to recover it. 00:37:45.153 [2024-11-18 12:06:10.705552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.153 [2024-11-18 12:06:10.705587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.153 qpair failed and we were unable to recover it. 00:37:45.153 [2024-11-18 12:06:10.705741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.153 [2024-11-18 12:06:10.705787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.153 qpair failed and we were unable to recover it. 00:37:45.153 [2024-11-18 12:06:10.705945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.153 [2024-11-18 12:06:10.705982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.153 qpair failed and we were unable to recover it. 
00:37:45.153 [2024-11-18 12:06:10.706164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.153 [2024-11-18 12:06:10.706207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.153 qpair failed and we were unable to recover it. 00:37:45.153 [2024-11-18 12:06:10.706384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.153 [2024-11-18 12:06:10.706438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.153 qpair failed and we were unable to recover it. 00:37:45.153 [2024-11-18 12:06:10.706611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.153 [2024-11-18 12:06:10.706647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.153 qpair failed and we were unable to recover it. 00:37:45.153 [2024-11-18 12:06:10.706762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.153 [2024-11-18 12:06:10.706797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.153 qpair failed and we were unable to recover it. 00:37:45.153 [2024-11-18 12:06:10.706962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.153 [2024-11-18 12:06:10.707013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.153 qpair failed and we were unable to recover it. 
00:37:45.153 [2024-11-18 12:06:10.707166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.153 [2024-11-18 12:06:10.707233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.153 qpair failed and we were unable to recover it. 00:37:45.153 [2024-11-18 12:06:10.707360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.153 [2024-11-18 12:06:10.707394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.153 qpair failed and we were unable to recover it. 00:37:45.153 [2024-11-18 12:06:10.707503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.153 [2024-11-18 12:06:10.707539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.153 qpair failed and we were unable to recover it. 00:37:45.153 [2024-11-18 12:06:10.707666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.153 [2024-11-18 12:06:10.707719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.153 qpair failed and we were unable to recover it. 00:37:45.153 [2024-11-18 12:06:10.707828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.153 [2024-11-18 12:06:10.707862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.153 qpair failed and we were unable to recover it. 
00:37:45.153 [2024-11-18 12:06:10.708012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.153 [2024-11-18 12:06:10.708047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.153 qpair failed and we were unable to recover it. 00:37:45.153 [2024-11-18 12:06:10.708146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.153 [2024-11-18 12:06:10.708181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.153 qpair failed and we were unable to recover it. 00:37:45.153 [2024-11-18 12:06:10.708315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.153 [2024-11-18 12:06:10.708360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.153 qpair failed and we were unable to recover it. 00:37:45.153 [2024-11-18 12:06:10.708531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.153 [2024-11-18 12:06:10.708568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.153 qpair failed and we were unable to recover it. 00:37:45.153 [2024-11-18 12:06:10.708717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.153 [2024-11-18 12:06:10.708752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.153 qpair failed and we were unable to recover it. 
00:37:45.153 [2024-11-18 12:06:10.708939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.153 [2024-11-18 12:06:10.708973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.153 qpair failed and we were unable to recover it. 00:37:45.153 [2024-11-18 12:06:10.709082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.153 [2024-11-18 12:06:10.709115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.153 qpair failed and we were unable to recover it. 00:37:45.153 [2024-11-18 12:06:10.709244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.153 [2024-11-18 12:06:10.709292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.153 qpair failed and we were unable to recover it. 00:37:45.153 [2024-11-18 12:06:10.709445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.153 [2024-11-18 12:06:10.709500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.153 qpair failed and we were unable to recover it. 00:37:45.153 [2024-11-18 12:06:10.709667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.153 [2024-11-18 12:06:10.709708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.153 qpair failed and we were unable to recover it. 
00:37:45.153 [2024-11-18 12:06:10.709855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.153 [2024-11-18 12:06:10.709895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.153 qpair failed and we were unable to recover it. 00:37:45.153 [2024-11-18 12:06:10.710093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.153 [2024-11-18 12:06:10.710153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.153 qpair failed and we were unable to recover it. 00:37:45.153 [2024-11-18 12:06:10.710288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.153 [2024-11-18 12:06:10.710324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.153 qpair failed and we were unable to recover it. 00:37:45.153 [2024-11-18 12:06:10.710499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.153 [2024-11-18 12:06:10.710535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.153 qpair failed and we were unable to recover it. 00:37:45.153 [2024-11-18 12:06:10.710696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.153 [2024-11-18 12:06:10.710745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.153 qpair failed and we were unable to recover it. 
00:37:45.153 [2024-11-18 12:06:10.710930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.153 [2024-11-18 12:06:10.710996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.153 qpair failed and we were unable to recover it. 00:37:45.153 [2024-11-18 12:06:10.711115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.153 [2024-11-18 12:06:10.711165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.153 qpair failed and we were unable to recover it. 00:37:45.153 [2024-11-18 12:06:10.711345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.153 [2024-11-18 12:06:10.711403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.153 qpair failed and we were unable to recover it. 00:37:45.153 [2024-11-18 12:06:10.711531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.153 [2024-11-18 12:06:10.711584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.154 qpair failed and we were unable to recover it. 00:37:45.154 [2024-11-18 12:06:10.711695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.154 [2024-11-18 12:06:10.711728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.154 qpair failed and we were unable to recover it. 
00:37:45.154 [2024-11-18 12:06:10.711879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.154 [2024-11-18 12:06:10.711915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.154 qpair failed and we were unable to recover it. 00:37:45.154 [2024-11-18 12:06:10.712078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.154 [2024-11-18 12:06:10.712112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.154 qpair failed and we were unable to recover it. 00:37:45.154 [2024-11-18 12:06:10.712357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.154 [2024-11-18 12:06:10.712392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.154 qpair failed and we were unable to recover it. 00:37:45.154 [2024-11-18 12:06:10.712554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.154 [2024-11-18 12:06:10.712602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.154 qpair failed and we were unable to recover it. 00:37:45.154 [2024-11-18 12:06:10.712736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.154 [2024-11-18 12:06:10.712785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.154 qpair failed and we were unable to recover it. 
00:37:45.154 [2024-11-18 12:06:10.712925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.154 [2024-11-18 12:06:10.712974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.154 qpair failed and we were unable to recover it. 00:37:45.154 [2024-11-18 12:06:10.713133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.154 [2024-11-18 12:06:10.713187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.154 qpair failed and we were unable to recover it. 00:37:45.154 [2024-11-18 12:06:10.713355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.154 [2024-11-18 12:06:10.713419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.154 qpair failed and we were unable to recover it. 00:37:45.154 [2024-11-18 12:06:10.713578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.154 [2024-11-18 12:06:10.713614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.154 qpair failed and we were unable to recover it. 00:37:45.154 [2024-11-18 12:06:10.713745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.154 [2024-11-18 12:06:10.713784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.154 qpair failed and we were unable to recover it. 
00:37:45.154 [2024-11-18 12:06:10.713973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.154 [2024-11-18 12:06:10.714017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.154 qpair failed and we were unable to recover it. 00:37:45.154 [2024-11-18 12:06:10.714142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.154 [2024-11-18 12:06:10.714180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.154 qpair failed and we were unable to recover it. 00:37:45.154 [2024-11-18 12:06:10.714340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.154 [2024-11-18 12:06:10.714378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.154 qpair failed and we were unable to recover it. 00:37:45.154 [2024-11-18 12:06:10.714576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.154 [2024-11-18 12:06:10.714611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.154 qpair failed and we were unable to recover it. 00:37:45.154 [2024-11-18 12:06:10.714715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.154 [2024-11-18 12:06:10.714748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.154 qpair failed and we were unable to recover it. 
00:37:45.154 [2024-11-18 12:06:10.714891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.154 [2024-11-18 12:06:10.714942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.154 qpair failed and we were unable to recover it. 00:37:45.154 [2024-11-18 12:06:10.715094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.154 [2024-11-18 12:06:10.715131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.154 qpair failed and we were unable to recover it. 00:37:45.154 [2024-11-18 12:06:10.715247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.154 [2024-11-18 12:06:10.715284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.154 qpair failed and we were unable to recover it. 00:37:45.154 [2024-11-18 12:06:10.715465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.154 [2024-11-18 12:06:10.715506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.154 qpair failed and we were unable to recover it. 00:37:45.154 [2024-11-18 12:06:10.715631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.154 [2024-11-18 12:06:10.715679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.154 qpair failed and we were unable to recover it. 
00:37:45.154 [2024-11-18 12:06:10.715863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.154 [2024-11-18 12:06:10.715917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.154 qpair failed and we were unable to recover it. 00:37:45.154 [2024-11-18 12:06:10.716116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.154 [2024-11-18 12:06:10.716156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.154 qpair failed and we were unable to recover it. 00:37:45.154 [2024-11-18 12:06:10.716333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.154 [2024-11-18 12:06:10.716372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.154 qpair failed and we were unable to recover it. 00:37:45.154 [2024-11-18 12:06:10.716567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.154 [2024-11-18 12:06:10.716602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.154 qpair failed and we were unable to recover it. 00:37:45.154 [2024-11-18 12:06:10.716730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.154 [2024-11-18 12:06:10.716781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.154 qpair failed and we were unable to recover it. 
00:37:45.154 [2024-11-18 12:06:10.716946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.154 [2024-11-18 12:06:10.716984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.154 qpair failed and we were unable to recover it. 00:37:45.154 [2024-11-18 12:06:10.717135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.154 [2024-11-18 12:06:10.717187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.154 qpair failed and we were unable to recover it. 00:37:45.154 [2024-11-18 12:06:10.717317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.154 [2024-11-18 12:06:10.717362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.154 qpair failed and we were unable to recover it. 00:37:45.155 [2024-11-18 12:06:10.717486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.155 [2024-11-18 12:06:10.717529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.155 qpair failed and we were unable to recover it. 00:37:45.155 [2024-11-18 12:06:10.717684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.155 [2024-11-18 12:06:10.717732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.155 qpair failed and we were unable to recover it. 
00:37:45.155 [2024-11-18 12:06:10.717915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.155 [2024-11-18 12:06:10.717970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.155 qpair failed and we were unable to recover it. 00:37:45.155 [2024-11-18 12:06:10.718195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.155 [2024-11-18 12:06:10.718231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.155 qpair failed and we were unable to recover it. 00:37:45.155 [2024-11-18 12:06:10.718394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.155 [2024-11-18 12:06:10.718444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.155 qpair failed and we were unable to recover it. 00:37:45.155 [2024-11-18 12:06:10.718588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.155 [2024-11-18 12:06:10.718623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.155 qpair failed and we were unable to recover it. 00:37:45.155 [2024-11-18 12:06:10.718759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.155 [2024-11-18 12:06:10.718807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.155 qpair failed and we were unable to recover it. 
00:37:45.155 [2024-11-18 12:06:10.718919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.155 [2024-11-18 12:06:10.718956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.155 qpair failed and we were unable to recover it. 00:37:45.155 [2024-11-18 12:06:10.719108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.155 [2024-11-18 12:06:10.719160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.155 qpair failed and we were unable to recover it. 00:37:45.155 [2024-11-18 12:06:10.719344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.155 [2024-11-18 12:06:10.719383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.155 qpair failed and we were unable to recover it. 00:37:45.155 [2024-11-18 12:06:10.719557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.155 [2024-11-18 12:06:10.719592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.155 qpair failed and we were unable to recover it. 00:37:45.155 [2024-11-18 12:06:10.719752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.155 [2024-11-18 12:06:10.719801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.155 qpair failed and we were unable to recover it. 
00:37:45.155 [2024-11-18 12:06:10.719974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.155 [2024-11-18 12:06:10.720052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.155 qpair failed and we were unable to recover it. 00:37:45.155 [2024-11-18 12:06:10.720270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.155 [2024-11-18 12:06:10.720329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.155 qpair failed and we were unable to recover it. 00:37:45.155 [2024-11-18 12:06:10.720482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.155 [2024-11-18 12:06:10.720528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.155 qpair failed and we were unable to recover it. 00:37:45.155 [2024-11-18 12:06:10.720662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.155 [2024-11-18 12:06:10.720698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.155 qpair failed and we were unable to recover it. 00:37:45.155 [2024-11-18 12:06:10.720822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.155 [2024-11-18 12:06:10.720865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.155 qpair failed and we were unable to recover it. 
00:37:45.155 [2024-11-18 12:06:10.720987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.155 [2024-11-18 12:06:10.721025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.155 qpair failed and we were unable to recover it. 00:37:45.155 [2024-11-18 12:06:10.721180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.155 [2024-11-18 12:06:10.721219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.155 qpair failed and we were unable to recover it. 00:37:45.155 [2024-11-18 12:06:10.721344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.155 [2024-11-18 12:06:10.721383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.155 qpair failed and we were unable to recover it. 00:37:45.155 [2024-11-18 12:06:10.721550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.155 [2024-11-18 12:06:10.721585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.155 qpair failed and we were unable to recover it. 00:37:45.155 [2024-11-18 12:06:10.721717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.155 [2024-11-18 12:06:10.721753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.155 qpair failed and we were unable to recover it. 
00:37:45.155 [2024-11-18 12:06:10.721887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.155 [2024-11-18 12:06:10.721946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.155 qpair failed and we were unable to recover it. 00:37:45.155 [2024-11-18 12:06:10.722118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.155 [2024-11-18 12:06:10.722153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.155 qpair failed and we were unable to recover it. 00:37:45.155 [2024-11-18 12:06:10.722314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.155 [2024-11-18 12:06:10.722353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.155 qpair failed and we were unable to recover it. 00:37:45.155 [2024-11-18 12:06:10.722500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.155 [2024-11-18 12:06:10.722557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.155 qpair failed and we were unable to recover it. 00:37:45.155 [2024-11-18 12:06:10.722668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.155 [2024-11-18 12:06:10.722703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.155 qpair failed and we were unable to recover it. 
00:37:45.155 [2024-11-18 12:06:10.722849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.155 [2024-11-18 12:06:10.722889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.155 qpair failed and we were unable to recover it. 00:37:45.155 [2024-11-18 12:06:10.723089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.155 [2024-11-18 12:06:10.723127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.155 qpair failed and we were unable to recover it. 00:37:45.155 [2024-11-18 12:06:10.723272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.155 [2024-11-18 12:06:10.723310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.155 qpair failed and we were unable to recover it. 00:37:45.155 [2024-11-18 12:06:10.723455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.155 [2024-11-18 12:06:10.723500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.155 qpair failed and we were unable to recover it. 00:37:45.155 [2024-11-18 12:06:10.723625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.155 [2024-11-18 12:06:10.723672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.155 qpair failed and we were unable to recover it. 
00:37:45.155 [2024-11-18 12:06:10.723840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.155 [2024-11-18 12:06:10.723892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.155 qpair failed and we were unable to recover it. 00:37:45.155 [2024-11-18 12:06:10.724019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.155 [2024-11-18 12:06:10.724057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.155 qpair failed and we were unable to recover it. 00:37:45.155 [2024-11-18 12:06:10.724247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.155 [2024-11-18 12:06:10.724311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.155 qpair failed and we were unable to recover it. 00:37:45.155 [2024-11-18 12:06:10.724506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.155 [2024-11-18 12:06:10.724558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.155 qpair failed and we were unable to recover it. 00:37:45.155 [2024-11-18 12:06:10.724668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.155 [2024-11-18 12:06:10.724712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.155 qpair failed and we were unable to recover it. 
00:37:45.155 [2024-11-18 12:06:10.724915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.156 [2024-11-18 12:06:10.724977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.156 qpair failed and we were unable to recover it. 00:37:45.156 [2024-11-18 12:06:10.725141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.156 [2024-11-18 12:06:10.725198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.156 qpair failed and we were unable to recover it. 00:37:45.156 [2024-11-18 12:06:10.725323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.156 [2024-11-18 12:06:10.725362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.156 qpair failed and we were unable to recover it. 00:37:45.156 [2024-11-18 12:06:10.725530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.156 [2024-11-18 12:06:10.725580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.156 qpair failed and we were unable to recover it. 00:37:45.156 [2024-11-18 12:06:10.725776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.156 [2024-11-18 12:06:10.725832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.156 qpair failed and we were unable to recover it. 
00:37:45.156 [2024-11-18 12:06:10.726025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.156 [2024-11-18 12:06:10.726088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.156 qpair failed and we were unable to recover it. 00:37:45.156 [2024-11-18 12:06:10.726267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.156 [2024-11-18 12:06:10.726326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.156 qpair failed and we were unable to recover it. 00:37:45.156 [2024-11-18 12:06:10.726433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.156 [2024-11-18 12:06:10.726468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.156 qpair failed and we were unable to recover it. 00:37:45.156 [2024-11-18 12:06:10.726620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.156 [2024-11-18 12:06:10.726684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.156 qpair failed and we were unable to recover it. 00:37:45.156 [2024-11-18 12:06:10.726893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.156 [2024-11-18 12:06:10.726932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.156 qpair failed and we were unable to recover it. 
00:37:45.156 [2024-11-18 12:06:10.727126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.156 [2024-11-18 12:06:10.727165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.156 qpair failed and we were unable to recover it. 00:37:45.156 [2024-11-18 12:06:10.727284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.156 [2024-11-18 12:06:10.727323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.156 qpair failed and we were unable to recover it. 00:37:45.156 [2024-11-18 12:06:10.727476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.156 [2024-11-18 12:06:10.727543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.156 qpair failed and we were unable to recover it. 00:37:45.156 [2024-11-18 12:06:10.727665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.156 [2024-11-18 12:06:10.727713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.156 qpair failed and we were unable to recover it. 00:37:45.156 [2024-11-18 12:06:10.727915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.156 [2024-11-18 12:06:10.727978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.156 qpair failed and we were unable to recover it. 
00:37:45.156 [2024-11-18 12:06:10.728171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.156 [2024-11-18 12:06:10.728215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.156 qpair failed and we were unable to recover it. 00:37:45.156 [2024-11-18 12:06:10.728356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.156 [2024-11-18 12:06:10.728407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.156 qpair failed and we were unable to recover it. 00:37:45.156 [2024-11-18 12:06:10.728581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.156 [2024-11-18 12:06:10.728618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.156 qpair failed and we were unable to recover it. 00:37:45.156 [2024-11-18 12:06:10.728762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.156 [2024-11-18 12:06:10.728815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.156 qpair failed and we were unable to recover it. 00:37:45.156 [2024-11-18 12:06:10.728988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.156 [2024-11-18 12:06:10.729036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.156 qpair failed and we were unable to recover it. 
00:37:45.156 [2024-11-18 12:06:10.729198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.156 [2024-11-18 12:06:10.729237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.156 qpair failed and we were unable to recover it. 00:37:45.156 [2024-11-18 12:06:10.729398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.156 [2024-11-18 12:06:10.729433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.156 qpair failed and we were unable to recover it. 00:37:45.156 [2024-11-18 12:06:10.729550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.156 [2024-11-18 12:06:10.729585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.156 qpair failed and we were unable to recover it. 00:37:45.156 [2024-11-18 12:06:10.729747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.156 [2024-11-18 12:06:10.729780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.156 qpair failed and we were unable to recover it. 00:37:45.156 [2024-11-18 12:06:10.729909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.156 [2024-11-18 12:06:10.729954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.156 qpair failed and we were unable to recover it. 
00:37:45.156 [2024-11-18 12:06:10.730098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.156 [2024-11-18 12:06:10.730150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.156 qpair failed and we were unable to recover it. 00:37:45.156 [2024-11-18 12:06:10.730314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.156 [2024-11-18 12:06:10.730373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.156 qpair failed and we were unable to recover it. 00:37:45.156 [2024-11-18 12:06:10.730474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.156 [2024-11-18 12:06:10.730515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.156 qpair failed and we were unable to recover it. 00:37:45.156 [2024-11-18 12:06:10.730620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.156 [2024-11-18 12:06:10.730654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.156 qpair failed and we were unable to recover it. 00:37:45.156 [2024-11-18 12:06:10.730808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.156 [2024-11-18 12:06:10.730861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.156 qpair failed and we were unable to recover it. 
00:37:45.156 [2024-11-18 12:06:10.730971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.156 [2024-11-18 12:06:10.731006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.156 qpair failed and we were unable to recover it. 00:37:45.156 [2024-11-18 12:06:10.731190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.156 [2024-11-18 12:06:10.731238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.156 qpair failed and we were unable to recover it. 00:37:45.156 [2024-11-18 12:06:10.731378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.156 [2024-11-18 12:06:10.731414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.156 qpair failed and we were unable to recover it. 00:37:45.156 [2024-11-18 12:06:10.731553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.156 [2024-11-18 12:06:10.731594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.156 qpair failed and we were unable to recover it. 00:37:45.156 [2024-11-18 12:06:10.731763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.156 [2024-11-18 12:06:10.731802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.156 qpair failed and we were unable to recover it. 
00:37:45.156 [2024-11-18 12:06:10.731950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.156 [2024-11-18 12:06:10.731998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.156 qpair failed and we were unable to recover it. 00:37:45.156 [2024-11-18 12:06:10.732139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.156 [2024-11-18 12:06:10.732177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.156 qpair failed and we were unable to recover it. 00:37:45.156 [2024-11-18 12:06:10.732358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.157 [2024-11-18 12:06:10.732414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.157 qpair failed and we were unable to recover it. 00:37:45.157 [2024-11-18 12:06:10.732535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.157 [2024-11-18 12:06:10.732571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.157 qpair failed and we were unable to recover it. 00:37:45.157 [2024-11-18 12:06:10.732724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.157 [2024-11-18 12:06:10.732787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.157 qpair failed and we were unable to recover it. 
00:37:45.157 [2024-11-18 12:06:10.732927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.157 [2024-11-18 12:06:10.732993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.157 qpair failed and we were unable to recover it. 00:37:45.157 [2024-11-18 12:06:10.733124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.157 [2024-11-18 12:06:10.733164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.157 qpair failed and we were unable to recover it. 00:37:45.157 [2024-11-18 12:06:10.733320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.157 [2024-11-18 12:06:10.733364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.157 qpair failed and we were unable to recover it. 00:37:45.157 [2024-11-18 12:06:10.733507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.157 [2024-11-18 12:06:10.733544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.157 qpair failed and we were unable to recover it. 00:37:45.157 [2024-11-18 12:06:10.733689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.157 [2024-11-18 12:06:10.733724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.157 qpair failed and we were unable to recover it. 
00:37:45.157 [2024-11-18 12:06:10.733913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.157 [2024-11-18 12:06:10.733982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.157 qpair failed and we were unable to recover it. 00:37:45.157 [2024-11-18 12:06:10.734217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.157 [2024-11-18 12:06:10.734254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.157 qpair failed and we were unable to recover it. 00:37:45.157 [2024-11-18 12:06:10.734372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.157 [2024-11-18 12:06:10.734409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.157 qpair failed and we were unable to recover it. 00:37:45.157 [2024-11-18 12:06:10.734579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.157 [2024-11-18 12:06:10.734624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.157 qpair failed and we were unable to recover it. 00:37:45.157 [2024-11-18 12:06:10.734780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.157 [2024-11-18 12:06:10.734836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.157 qpair failed and we were unable to recover it. 
00:37:45.157 [2024-11-18 12:06:10.734992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.157 [2024-11-18 12:06:10.735044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.157 qpair failed and we were unable to recover it. 00:37:45.157 [2024-11-18 12:06:10.735231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.157 [2024-11-18 12:06:10.735284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.157 qpair failed and we were unable to recover it. 00:37:45.157 [2024-11-18 12:06:10.735418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.157 [2024-11-18 12:06:10.735471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.157 qpair failed and we were unable to recover it. 00:37:45.157 [2024-11-18 12:06:10.735651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.157 [2024-11-18 12:06:10.735698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.157 qpair failed and we were unable to recover it. 00:37:45.157 [2024-11-18 12:06:10.735888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.157 [2024-11-18 12:06:10.735941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.157 qpair failed and we were unable to recover it. 
00:37:45.157 [2024-11-18 12:06:10.736126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.157 [2024-11-18 12:06:10.736215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.157 qpair failed and we were unable to recover it. 00:37:45.157 [2024-11-18 12:06:10.736360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.157 [2024-11-18 12:06:10.736397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.157 qpair failed and we were unable to recover it. 00:37:45.157 [2024-11-18 12:06:10.736607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.157 [2024-11-18 12:06:10.736644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.157 qpair failed and we were unable to recover it. 00:37:45.157 [2024-11-18 12:06:10.736794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.157 [2024-11-18 12:06:10.736862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.157 qpair failed and we were unable to recover it. 00:37:45.157 [2024-11-18 12:06:10.737045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.157 [2024-11-18 12:06:10.737104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.157 qpair failed and we were unable to recover it. 
00:37:45.157 [2024-11-18 12:06:10.737284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.157 [2024-11-18 12:06:10.737322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.157 qpair failed and we were unable to recover it. 00:37:45.157 [2024-11-18 12:06:10.737507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.157 [2024-11-18 12:06:10.737555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.157 qpair failed and we were unable to recover it. 00:37:45.157 [2024-11-18 12:06:10.737732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.157 [2024-11-18 12:06:10.737794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.157 qpair failed and we were unable to recover it. 00:37:45.157 [2024-11-18 12:06:10.737972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.157 [2024-11-18 12:06:10.738025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.157 qpair failed and we were unable to recover it. 00:37:45.157 [2024-11-18 12:06:10.738182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.157 [2024-11-18 12:06:10.738252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.157 qpair failed and we were unable to recover it. 
00:37:45.157 [2024-11-18 12:06:10.738439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.157 [2024-11-18 12:06:10.738474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.157 qpair failed and we were unable to recover it. 00:37:45.157 [2024-11-18 12:06:10.738605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.157 [2024-11-18 12:06:10.738639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.157 qpair failed and we were unable to recover it. 00:37:45.157 [2024-11-18 12:06:10.738747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.157 [2024-11-18 12:06:10.738799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.157 qpair failed and we were unable to recover it. 00:37:45.157 [2024-11-18 12:06:10.739038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.157 [2024-11-18 12:06:10.739105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.157 qpair failed and we were unable to recover it. 00:37:45.157 [2024-11-18 12:06:10.739308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.157 [2024-11-18 12:06:10.739358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.157 qpair failed and we were unable to recover it. 
00:37:45.157 [2024-11-18 12:06:10.739483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.157 [2024-11-18 12:06:10.739545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.157 qpair failed and we were unable to recover it. 00:37:45.157 [2024-11-18 12:06:10.739653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.157 [2024-11-18 12:06:10.739688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.157 qpair failed and we were unable to recover it. 00:37:45.157 [2024-11-18 12:06:10.739835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.157 [2024-11-18 12:06:10.739896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.157 qpair failed and we were unable to recover it. 00:37:45.157 [2024-11-18 12:06:10.740054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.157 [2024-11-18 12:06:10.740113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.157 qpair failed and we were unable to recover it. 00:37:45.157 [2024-11-18 12:06:10.740283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.157 [2024-11-18 12:06:10.740320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.158 qpair failed and we were unable to recover it. 
00:37:45.158 [2024-11-18 12:06:10.740447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.158 [2024-11-18 12:06:10.740485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.158 qpair failed and we were unable to recover it. 00:37:45.158 [2024-11-18 12:06:10.740673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.158 [2024-11-18 12:06:10.740707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.158 qpair failed and we were unable to recover it. 00:37:45.158 [2024-11-18 12:06:10.740901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.158 [2024-11-18 12:06:10.740938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.158 qpair failed and we were unable to recover it. 00:37:45.158 [2024-11-18 12:06:10.741093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.158 [2024-11-18 12:06:10.741131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.158 qpair failed and we were unable to recover it. 00:37:45.158 [2024-11-18 12:06:10.741317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.158 [2024-11-18 12:06:10.741354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.158 qpair failed and we were unable to recover it. 
00:37:45.158 [2024-11-18 12:06:10.741540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.158 [2024-11-18 12:06:10.741590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.158 qpair failed and we were unable to recover it. 00:37:45.158 [2024-11-18 12:06:10.741748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.158 [2024-11-18 12:06:10.741796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.158 qpair failed and we were unable to recover it. 00:37:45.158 [2024-11-18 12:06:10.741923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.158 [2024-11-18 12:06:10.741959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.158 qpair failed and we were unable to recover it. 00:37:45.158 [2024-11-18 12:06:10.742077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.158 [2024-11-18 12:06:10.742113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.158 qpair failed and we were unable to recover it. 00:37:45.158 [2024-11-18 12:06:10.742275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.158 [2024-11-18 12:06:10.742312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.158 qpair failed and we were unable to recover it. 
00:37:45.158 [2024-11-18 12:06:10.742499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.158 [2024-11-18 12:06:10.742566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.158 qpair failed and we were unable to recover it. 00:37:45.158 [2024-11-18 12:06:10.742690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.158 [2024-11-18 12:06:10.742725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.158 qpair failed and we were unable to recover it. 00:37:45.158 [2024-11-18 12:06:10.742831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.158 [2024-11-18 12:06:10.742866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.158 qpair failed and we were unable to recover it. 00:37:45.158 [2024-11-18 12:06:10.743013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.158 [2024-11-18 12:06:10.743065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.158 qpair failed and we were unable to recover it. 00:37:45.158 [2024-11-18 12:06:10.743183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.158 [2024-11-18 12:06:10.743221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.158 qpair failed and we were unable to recover it. 
00:37:45.158 [2024-11-18 12:06:10.743421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.158 [2024-11-18 12:06:10.743458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.158 qpair failed and we were unable to recover it. 00:37:45.158 [2024-11-18 12:06:10.743642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.158 [2024-11-18 12:06:10.743691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.158 qpair failed and we were unable to recover it. 00:37:45.158 [2024-11-18 12:06:10.743812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.158 [2024-11-18 12:06:10.743854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.158 qpair failed and we were unable to recover it. 00:37:45.158 [2024-11-18 12:06:10.744006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.158 [2024-11-18 12:06:10.744079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.158 qpair failed and we were unable to recover it. 00:37:45.158 [2024-11-18 12:06:10.744281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.158 [2024-11-18 12:06:10.744336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.158 qpair failed and we were unable to recover it. 
00:37:45.158 [2024-11-18 12:06:10.744477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.158 [2024-11-18 12:06:10.744525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.158 qpair failed and we were unable to recover it. 00:37:45.158 [2024-11-18 12:06:10.744677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.158 [2024-11-18 12:06:10.744725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.158 qpair failed and we were unable to recover it. 00:37:45.158 [2024-11-18 12:06:10.744889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.158 [2024-11-18 12:06:10.744945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.158 qpair failed and we were unable to recover it. 00:37:45.158 [2024-11-18 12:06:10.745061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.158 [2024-11-18 12:06:10.745097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.158 qpair failed and we were unable to recover it. 00:37:45.158 [2024-11-18 12:06:10.745238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.158 [2024-11-18 12:06:10.745297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.158 qpair failed and we were unable to recover it. 
00:37:45.158 [2024-11-18 12:06:10.745454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.158 [2024-11-18 12:06:10.745520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.158 qpair failed and we were unable to recover it. 00:37:45.158 [2024-11-18 12:06:10.745687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.158 [2024-11-18 12:06:10.745748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.158 qpair failed and we were unable to recover it. 00:37:45.158 [2024-11-18 12:06:10.745879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.158 [2024-11-18 12:06:10.745916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.158 qpair failed and we were unable to recover it. 00:37:45.158 [2024-11-18 12:06:10.746131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.158 [2024-11-18 12:06:10.746169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.158 qpair failed and we were unable to recover it. 00:37:45.158 [2024-11-18 12:06:10.746321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.158 [2024-11-18 12:06:10.746360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.158 qpair failed and we were unable to recover it. 
00:37:45.158 [2024-11-18 12:06:10.746530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.158 [2024-11-18 12:06:10.746565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.158 qpair failed and we were unable to recover it. 00:37:45.158 [2024-11-18 12:06:10.746710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.158 [2024-11-18 12:06:10.746748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.158 qpair failed and we were unable to recover it. 00:37:45.158 [2024-11-18 12:06:10.746858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.158 [2024-11-18 12:06:10.746901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.158 qpair failed and we were unable to recover it. 00:37:45.158 [2024-11-18 12:06:10.747085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.158 [2024-11-18 12:06:10.747143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.158 qpair failed and we were unable to recover it. 00:37:45.158 [2024-11-18 12:06:10.747333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.158 [2024-11-18 12:06:10.747384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.158 qpair failed and we were unable to recover it. 
00:37:45.158 [2024-11-18 12:06:10.747520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.158 [2024-11-18 12:06:10.747568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.159 qpair failed and we were unable to recover it. 00:37:45.159 [2024-11-18 12:06:10.747697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.159 [2024-11-18 12:06:10.747744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.159 qpair failed and we were unable to recover it. 00:37:45.159 [2024-11-18 12:06:10.747913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.159 [2024-11-18 12:06:10.747950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.159 qpair failed and we were unable to recover it. 00:37:45.159 [2024-11-18 12:06:10.748064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.159 [2024-11-18 12:06:10.748099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.159 qpair failed and we were unable to recover it. 00:37:45.159 [2024-11-18 12:06:10.748229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.159 [2024-11-18 12:06:10.748263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.159 qpair failed and we were unable to recover it. 
00:37:45.159 [2024-11-18 12:06:10.748403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.159 [2024-11-18 12:06:10.748436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.159 qpair failed and we were unable to recover it. 00:37:45.159 [2024-11-18 12:06:10.748584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.159 [2024-11-18 12:06:10.748621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.159 qpair failed and we were unable to recover it. 00:37:45.159 [2024-11-18 12:06:10.748774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.159 [2024-11-18 12:06:10.748823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.159 qpair failed and we were unable to recover it. 00:37:45.159 [2024-11-18 12:06:10.748985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.159 [2024-11-18 12:06:10.749021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.159 qpair failed and we were unable to recover it. 00:37:45.159 [2024-11-18 12:06:10.749195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.159 [2024-11-18 12:06:10.749229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.159 qpair failed and we were unable to recover it. 
00:37:45.159 [2024-11-18 12:06:10.749386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.159 [2024-11-18 12:06:10.749423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.159 qpair failed and we were unable to recover it. 00:37:45.159 [2024-11-18 12:06:10.749572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.159 [2024-11-18 12:06:10.749607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.159 qpair failed and we were unable to recover it. 00:37:45.159 [2024-11-18 12:06:10.749721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.159 [2024-11-18 12:06:10.749755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.159 qpair failed and we were unable to recover it. 00:37:45.159 [2024-11-18 12:06:10.749936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.159 [2024-11-18 12:06:10.750007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.159 qpair failed and we were unable to recover it. 00:37:45.159 [2024-11-18 12:06:10.750236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.159 [2024-11-18 12:06:10.750295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.159 qpair failed and we were unable to recover it. 
00:37:45.159 [2024-11-18 12:06:10.750440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.159 [2024-11-18 12:06:10.750477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.159 qpair failed and we were unable to recover it. 00:37:45.159 [2024-11-18 12:06:10.750633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.159 [2024-11-18 12:06:10.750668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.159 qpair failed and we were unable to recover it. 00:37:45.159 [2024-11-18 12:06:10.750801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.159 [2024-11-18 12:06:10.750835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.159 qpair failed and we were unable to recover it. 00:37:45.159 [2024-11-18 12:06:10.751041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.159 [2024-11-18 12:06:10.751074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.159 qpair failed and we were unable to recover it. 00:37:45.159 [2024-11-18 12:06:10.751280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.159 [2024-11-18 12:06:10.751317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.159 qpair failed and we were unable to recover it. 
00:37:45.159 [2024-11-18 12:06:10.751483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.159 [2024-11-18 12:06:10.751547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.159 qpair failed and we were unable to recover it. 00:37:45.159 [2024-11-18 12:06:10.751658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.159 [2024-11-18 12:06:10.751692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.159 qpair failed and we were unable to recover it. 00:37:45.159 [2024-11-18 12:06:10.751838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.159 [2024-11-18 12:06:10.751919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.159 qpair failed and we were unable to recover it. 00:37:45.159 [2024-11-18 12:06:10.752148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.159 [2024-11-18 12:06:10.752208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.159 qpair failed and we were unable to recover it. 00:37:45.159 [2024-11-18 12:06:10.752362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.159 [2024-11-18 12:06:10.752397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.159 qpair failed and we were unable to recover it. 
00:37:45.159 [2024-11-18 12:06:10.752556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.159 [2024-11-18 12:06:10.752611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.159 qpair failed and we were unable to recover it. 00:37:45.159 [2024-11-18 12:06:10.752737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.159 [2024-11-18 12:06:10.752785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.159 qpair failed and we were unable to recover it. 00:37:45.159 [2024-11-18 12:06:10.752941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.159 [2024-11-18 12:06:10.752978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.159 qpair failed and we were unable to recover it. 00:37:45.159 [2024-11-18 12:06:10.753124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.159 [2024-11-18 12:06:10.753161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.159 qpair failed and we were unable to recover it. 00:37:45.159 [2024-11-18 12:06:10.753303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.159 [2024-11-18 12:06:10.753337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.159 qpair failed and we were unable to recover it. 
00:37:45.159 [2024-11-18 12:06:10.753444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.159 [2024-11-18 12:06:10.753484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.159 qpair failed and we were unable to recover it. 00:37:45.159 [2024-11-18 12:06:10.753609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.159 [2024-11-18 12:06:10.753643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.159 qpair failed and we were unable to recover it. 00:37:45.159 [2024-11-18 12:06:10.753806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.159 [2024-11-18 12:06:10.753843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.159 qpair failed and we were unable to recover it. 00:37:45.159 [2024-11-18 12:06:10.753989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.159 [2024-11-18 12:06:10.754027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.159 qpair failed and we were unable to recover it. 00:37:45.159 [2024-11-18 12:06:10.754198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.159 [2024-11-18 12:06:10.754252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.160 qpair failed and we were unable to recover it. 
00:37:45.160 [2024-11-18 12:06:10.754405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.160 [2024-11-18 12:06:10.754440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.160 qpair failed and we were unable to recover it. 00:37:45.160 [2024-11-18 12:06:10.754577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.160 [2024-11-18 12:06:10.754624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.160 qpair failed and we were unable to recover it. 00:37:45.160 [2024-11-18 12:06:10.754770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.160 [2024-11-18 12:06:10.754806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.160 qpair failed and we were unable to recover it. 00:37:45.160 [2024-11-18 12:06:10.754974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.160 [2024-11-18 12:06:10.755055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.160 qpair failed and we were unable to recover it. 00:37:45.160 [2024-11-18 12:06:10.755267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.160 [2024-11-18 12:06:10.755306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.160 qpair failed and we were unable to recover it. 
00:37:45.160 [2024-11-18 12:06:10.755499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.160 [2024-11-18 12:06:10.755554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.160 qpair failed and we were unable to recover it. 00:37:45.160 [2024-11-18 12:06:10.755687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.160 [2024-11-18 12:06:10.755721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.160 qpair failed and we were unable to recover it. 00:37:45.160 [2024-11-18 12:06:10.755867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.160 [2024-11-18 12:06:10.755919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.160 qpair failed and we were unable to recover it. 00:37:45.160 [2024-11-18 12:06:10.756160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.160 [2024-11-18 12:06:10.756217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.160 qpair failed and we were unable to recover it. 00:37:45.160 [2024-11-18 12:06:10.756329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.160 [2024-11-18 12:06:10.756366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.160 qpair failed and we were unable to recover it. 
00:37:45.160 [2024-11-18 12:06:10.756513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.160 [2024-11-18 12:06:10.756548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.160 qpair failed and we were unable to recover it. 00:37:45.160 [2024-11-18 12:06:10.756736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.160 [2024-11-18 12:06:10.756779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.160 qpair failed and we were unable to recover it. 00:37:45.160 [2024-11-18 12:06:10.756936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.160 [2024-11-18 12:06:10.756991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.160 qpair failed and we were unable to recover it. 00:37:45.160 [2024-11-18 12:06:10.757170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.160 [2024-11-18 12:06:10.757237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.160 qpair failed and we were unable to recover it. 00:37:45.160 [2024-11-18 12:06:10.757353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.160 [2024-11-18 12:06:10.757390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.160 qpair failed and we were unable to recover it. 
00:37:45.160 [2024-11-18 12:06:10.757551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.160 [2024-11-18 12:06:10.757597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.160 qpair failed and we were unable to recover it. 00:37:45.160 [2024-11-18 12:06:10.757756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.160 [2024-11-18 12:06:10.757814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.160 qpair failed and we were unable to recover it. 00:37:45.160 [2024-11-18 12:06:10.757950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.160 [2024-11-18 12:06:10.758006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.160 qpair failed and we were unable to recover it. 00:37:45.160 [2024-11-18 12:06:10.758115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.160 [2024-11-18 12:06:10.758149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.160 qpair failed and we were unable to recover it. 00:37:45.160 [2024-11-18 12:06:10.758282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.160 [2024-11-18 12:06:10.758316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.160 qpair failed and we were unable to recover it. 
00:37:45.160 [2024-11-18 12:06:10.758426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.160 [2024-11-18 12:06:10.758460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.160 qpair failed and we were unable to recover it.
00:37:45.160 [2024-11-18 12:06:10.758610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.160 [2024-11-18 12:06:10.758658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.160 qpair failed and we were unable to recover it.
00:37:45.160 [2024-11-18 12:06:10.758800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.160 [2024-11-18 12:06:10.758836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.160 qpair failed and we were unable to recover it.
00:37:45.160 [2024-11-18 12:06:10.758961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.160 [2024-11-18 12:06:10.758996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.160 qpair failed and we were unable to recover it.
00:37:45.160 [2024-11-18 12:06:10.759112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.160 [2024-11-18 12:06:10.759146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.160 qpair failed and we were unable to recover it.
00:37:45.160 [2024-11-18 12:06:10.759288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.160 [2024-11-18 12:06:10.759323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.160 qpair failed and we were unable to recover it.
00:37:45.160 [2024-11-18 12:06:10.759477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.160 [2024-11-18 12:06:10.759533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.160 qpair failed and we were unable to recover it.
00:37:45.160 [2024-11-18 12:06:10.759681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.160 [2024-11-18 12:06:10.759721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.160 qpair failed and we were unable to recover it.
00:37:45.160 [2024-11-18 12:06:10.759905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.160 [2024-11-18 12:06:10.759955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.160 qpair failed and we were unable to recover it.
00:37:45.160 [2024-11-18 12:06:10.760124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.160 [2024-11-18 12:06:10.760161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.160 qpair failed and we were unable to recover it.
00:37:45.160 [2024-11-18 12:06:10.760297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.160 [2024-11-18 12:06:10.760331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.160 qpair failed and we were unable to recover it.
00:37:45.160 [2024-11-18 12:06:10.760508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.160 [2024-11-18 12:06:10.760543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.160 qpair failed and we were unable to recover it.
00:37:45.160 [2024-11-18 12:06:10.760662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.160 [2024-11-18 12:06:10.760716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.160 qpair failed and we were unable to recover it.
00:37:45.160 [2024-11-18 12:06:10.760872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.160 [2024-11-18 12:06:10.760920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.160 qpair failed and we were unable to recover it.
00:37:45.160 [2024-11-18 12:06:10.761062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.160 [2024-11-18 12:06:10.761123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.160 qpair failed and we were unable to recover it.
00:37:45.160 [2024-11-18 12:06:10.761236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.160 [2024-11-18 12:06:10.761274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.160 qpair failed and we were unable to recover it.
00:37:45.160 [2024-11-18 12:06:10.761448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.160 [2024-11-18 12:06:10.761486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.160 qpair failed and we were unable to recover it.
00:37:45.160 [2024-11-18 12:06:10.761632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.161 [2024-11-18 12:06:10.761667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.161 qpair failed and we were unable to recover it.
00:37:45.161 [2024-11-18 12:06:10.761885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.161 [2024-11-18 12:06:10.761952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.161 qpair failed and we were unable to recover it.
00:37:45.161 [2024-11-18 12:06:10.762133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.161 [2024-11-18 12:06:10.762194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.161 qpair failed and we were unable to recover it.
00:37:45.161 [2024-11-18 12:06:10.762331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.161 [2024-11-18 12:06:10.762366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.161 qpair failed and we were unable to recover it.
00:37:45.161 [2024-11-18 12:06:10.762534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.161 [2024-11-18 12:06:10.762569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.161 qpair failed and we were unable to recover it.
00:37:45.161 [2024-11-18 12:06:10.762752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.161 [2024-11-18 12:06:10.762815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.161 qpair failed and we were unable to recover it.
00:37:45.161 [2024-11-18 12:06:10.763023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.161 [2024-11-18 12:06:10.763075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.161 qpair failed and we were unable to recover it.
00:37:45.161 [2024-11-18 12:06:10.763239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.161 [2024-11-18 12:06:10.763298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.161 qpair failed and we were unable to recover it.
00:37:45.161 [2024-11-18 12:06:10.763481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.161 [2024-11-18 12:06:10.763522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.161 qpair failed and we were unable to recover it.
00:37:45.161 [2024-11-18 12:06:10.763633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.161 [2024-11-18 12:06:10.763667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.161 qpair failed and we were unable to recover it.
00:37:45.161 [2024-11-18 12:06:10.763783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.161 [2024-11-18 12:06:10.763836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.161 qpair failed and we were unable to recover it.
00:37:45.161 [2024-11-18 12:06:10.764006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.161 [2024-11-18 12:06:10.764044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.161 qpair failed and we were unable to recover it.
00:37:45.161 [2024-11-18 12:06:10.764238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.161 [2024-11-18 12:06:10.764276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.161 qpair failed and we were unable to recover it.
00:37:45.161 [2024-11-18 12:06:10.764434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.161 [2024-11-18 12:06:10.764488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.161 qpair failed and we were unable to recover it.
00:37:45.161 [2024-11-18 12:06:10.764671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.161 [2024-11-18 12:06:10.764720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.161 qpair failed and we were unable to recover it.
00:37:45.161 [2024-11-18 12:06:10.764871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.161 [2024-11-18 12:06:10.764930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.161 qpair failed and we were unable to recover it.
00:37:45.161 [2024-11-18 12:06:10.765160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.161 [2024-11-18 12:06:10.765218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.161 qpair failed and we were unable to recover it.
00:37:45.161 [2024-11-18 12:06:10.765378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.161 [2024-11-18 12:06:10.765418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.161 qpair failed and we were unable to recover it.
00:37:45.161 [2024-11-18 12:06:10.765587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.161 [2024-11-18 12:06:10.765623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.161 qpair failed and we were unable to recover it.
00:37:45.161 [2024-11-18 12:06:10.765745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.161 [2024-11-18 12:06:10.765785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.161 qpair failed and we were unable to recover it.
00:37:45.161 [2024-11-18 12:06:10.765944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.161 [2024-11-18 12:06:10.765998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.161 qpair failed and we were unable to recover it.
00:37:45.161 [2024-11-18 12:06:10.766213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.161 [2024-11-18 12:06:10.766267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.161 qpair failed and we were unable to recover it.
00:37:45.161 [2024-11-18 12:06:10.766441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.161 [2024-11-18 12:06:10.766479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.161 qpair failed and we were unable to recover it.
00:37:45.161 [2024-11-18 12:06:10.766637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.161 [2024-11-18 12:06:10.766672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.161 qpair failed and we were unable to recover it.
00:37:45.161 [2024-11-18 12:06:10.766848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.161 [2024-11-18 12:06:10.766905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.161 qpair failed and we were unable to recover it.
00:37:45.161 [2024-11-18 12:06:10.767016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.161 [2024-11-18 12:06:10.767056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.161 qpair failed and we were unable to recover it.
00:37:45.161 [2024-11-18 12:06:10.767226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.161 [2024-11-18 12:06:10.767288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.161 qpair failed and we were unable to recover it.
00:37:45.161 [2024-11-18 12:06:10.767428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.161 [2024-11-18 12:06:10.767466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.161 qpair failed and we were unable to recover it.
00:37:45.161 [2024-11-18 12:06:10.767616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.161 [2024-11-18 12:06:10.767650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.161 qpair failed and we were unable to recover it.
00:37:45.161 [2024-11-18 12:06:10.767804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.161 [2024-11-18 12:06:10.767841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.161 qpair failed and we were unable to recover it.
00:37:45.161 [2024-11-18 12:06:10.768028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.161 [2024-11-18 12:06:10.768086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.161 qpair failed and we were unable to recover it.
00:37:45.161 [2024-11-18 12:06:10.768253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.161 [2024-11-18 12:06:10.768291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.161 qpair failed and we were unable to recover it.
00:37:45.161 [2024-11-18 12:06:10.768449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.161 [2024-11-18 12:06:10.768483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.161 qpair failed and we were unable to recover it.
00:37:45.161 [2024-11-18 12:06:10.768628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.161 [2024-11-18 12:06:10.768665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.161 qpair failed and we were unable to recover it.
00:37:45.161 [2024-11-18 12:06:10.768832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.161 [2024-11-18 12:06:10.768872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.161 qpair failed and we were unable to recover it.
00:37:45.161 [2024-11-18 12:06:10.769078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.161 [2024-11-18 12:06:10.769116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.161 qpair failed and we were unable to recover it.
00:37:45.161 [2024-11-18 12:06:10.769279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.161 [2024-11-18 12:06:10.769316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.161 qpair failed and we were unable to recover it.
00:37:45.161 [2024-11-18 12:06:10.769485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.161 [2024-11-18 12:06:10.769562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.161 qpair failed and we were unable to recover it.
00:37:45.162 [2024-11-18 12:06:10.769685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.162 [2024-11-18 12:06:10.769728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.162 qpair failed and we were unable to recover it.
00:37:45.162 [2024-11-18 12:06:10.769846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.162 [2024-11-18 12:06:10.769885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.162 qpair failed and we were unable to recover it.
00:37:45.162 [2024-11-18 12:06:10.770059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.162 [2024-11-18 12:06:10.770110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.162 qpair failed and we were unable to recover it.
00:37:45.162 [2024-11-18 12:06:10.770264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.162 [2024-11-18 12:06:10.770330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.162 qpair failed and we were unable to recover it.
00:37:45.162 [2024-11-18 12:06:10.770477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.162 [2024-11-18 12:06:10.770522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.162 qpair failed and we were unable to recover it.
00:37:45.162 [2024-11-18 12:06:10.770683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.162 [2024-11-18 12:06:10.770717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.162 qpair failed and we were unable to recover it.
00:37:45.162 [2024-11-18 12:06:10.770890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.162 [2024-11-18 12:06:10.770955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.162 qpair failed and we were unable to recover it.
00:37:45.162 [2024-11-18 12:06:10.771084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.162 [2024-11-18 12:06:10.771121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.162 qpair failed and we were unable to recover it.
00:37:45.162 [2024-11-18 12:06:10.771265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.162 [2024-11-18 12:06:10.771302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.162 qpair failed and we were unable to recover it.
00:37:45.162 [2024-11-18 12:06:10.771469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.162 [2024-11-18 12:06:10.771525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.162 qpair failed and we were unable to recover it.
00:37:45.162 [2024-11-18 12:06:10.771638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.162 [2024-11-18 12:06:10.771675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.162 qpair failed and we were unable to recover it.
00:37:45.162 [2024-11-18 12:06:10.771843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.162 [2024-11-18 12:06:10.771897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.162 qpair failed and we were unable to recover it.
00:37:45.162 [2024-11-18 12:06:10.772061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.162 [2024-11-18 12:06:10.772118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.162 qpair failed and we were unable to recover it.
00:37:45.162 [2024-11-18 12:06:10.772313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.162 [2024-11-18 12:06:10.772348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.162 qpair failed and we were unable to recover it.
00:37:45.162 [2024-11-18 12:06:10.772481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.162 [2024-11-18 12:06:10.772522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.162 qpair failed and we were unable to recover it.
00:37:45.162 [2024-11-18 12:06:10.772641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.162 [2024-11-18 12:06:10.772676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.162 qpair failed and we were unable to recover it.
00:37:45.162 [2024-11-18 12:06:10.772797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.162 [2024-11-18 12:06:10.772831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.162 qpair failed and we were unable to recover it.
00:37:45.162 [2024-11-18 12:06:10.772995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.162 [2024-11-18 12:06:10.773029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.162 qpair failed and we were unable to recover it.
00:37:45.162 [2024-11-18 12:06:10.773149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.162 [2024-11-18 12:06:10.773185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.162 qpair failed and we were unable to recover it.
00:37:45.162 [2024-11-18 12:06:10.773335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.162 [2024-11-18 12:06:10.773369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.162 qpair failed and we were unable to recover it.
00:37:45.162 [2024-11-18 12:06:10.773531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.162 [2024-11-18 12:06:10.773580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.162 qpair failed and we were unable to recover it.
00:37:45.162 [2024-11-18 12:06:10.773703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.162 [2024-11-18 12:06:10.773740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.162 qpair failed and we were unable to recover it.
00:37:45.162 [2024-11-18 12:06:10.773892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.162 [2024-11-18 12:06:10.773927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.162 qpair failed and we were unable to recover it.
00:37:45.162 [2024-11-18 12:06:10.774083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.162 [2024-11-18 12:06:10.774117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.162 qpair failed and we were unable to recover it.
00:37:45.162 [2024-11-18 12:06:10.774298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.162 [2024-11-18 12:06:10.774357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.162 qpair failed and we were unable to recover it.
00:37:45.162 [2024-11-18 12:06:10.774563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.162 [2024-11-18 12:06:10.774612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.162 qpair failed and we were unable to recover it.
00:37:45.162 [2024-11-18 12:06:10.774809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.162 [2024-11-18 12:06:10.774874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.162 qpair failed and we were unable to recover it.
00:37:45.162 [2024-11-18 12:06:10.775074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.162 [2024-11-18 12:06:10.775134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.162 qpair failed and we were unable to recover it.
00:37:45.162 [2024-11-18 12:06:10.775251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.162 [2024-11-18 12:06:10.775288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.162 qpair failed and we were unable to recover it.
00:37:45.162 [2024-11-18 12:06:10.775429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.162 [2024-11-18 12:06:10.775467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.162 qpair failed and we were unable to recover it.
00:37:45.162 [2024-11-18 12:06:10.775602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.162 [2024-11-18 12:06:10.775653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.162 qpair failed and we were unable to recover it.
00:37:45.162 [2024-11-18 12:06:10.775798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.162 [2024-11-18 12:06:10.775839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.162 qpair failed and we were unable to recover it.
00:37:45.162 [2024-11-18 12:06:10.776010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.162 [2024-11-18 12:06:10.776088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.162 qpair failed and we were unable to recover it.
00:37:45.162 [2024-11-18 12:06:10.776277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.162 [2024-11-18 12:06:10.776341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.162 qpair failed and we were unable to recover it.
00:37:45.162 [2024-11-18 12:06:10.776498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.162 [2024-11-18 12:06:10.776552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.162 qpair failed and we were unable to recover it.
00:37:45.162 [2024-11-18 12:06:10.776695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.162 [2024-11-18 12:06:10.776729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.162 qpair failed and we were unable to recover it.
00:37:45.162 [2024-11-18 12:06:10.776946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.162 [2024-11-18 12:06:10.777012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.162 qpair failed and we were unable to recover it.
00:37:45.162 [2024-11-18 12:06:10.777158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.163 [2024-11-18 12:06:10.777224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.163 qpair failed and we were unable to recover it.
00:37:45.163 [2024-11-18 12:06:10.777359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.163 [2024-11-18 12:06:10.777408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.163 qpair failed and we were unable to recover it.
00:37:45.163 [2024-11-18 12:06:10.777615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.163 [2024-11-18 12:06:10.777664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.163 qpair failed and we were unable to recover it.
00:37:45.163 [2024-11-18 12:06:10.777798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.163 [2024-11-18 12:06:10.777836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.163 qpair failed and we were unable to recover it. 00:37:45.163 [2024-11-18 12:06:10.777984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.163 [2024-11-18 12:06:10.778038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.163 qpair failed and we were unable to recover it. 00:37:45.163 [2024-11-18 12:06:10.778157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.163 [2024-11-18 12:06:10.778192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.163 qpair failed and we were unable to recover it. 00:37:45.163 [2024-11-18 12:06:10.778329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.163 [2024-11-18 12:06:10.778363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.163 qpair failed and we were unable to recover it. 00:37:45.163 [2024-11-18 12:06:10.778533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.163 [2024-11-18 12:06:10.778568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.163 qpair failed and we were unable to recover it. 
00:37:45.163 [2024-11-18 12:06:10.778680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.163 [2024-11-18 12:06:10.778713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.163 qpair failed and we were unable to recover it. 00:37:45.163 [2024-11-18 12:06:10.778820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.163 [2024-11-18 12:06:10.778855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.163 qpair failed and we were unable to recover it. 00:37:45.163 [2024-11-18 12:06:10.779012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.163 [2024-11-18 12:06:10.779046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.163 qpair failed and we were unable to recover it. 00:37:45.163 [2024-11-18 12:06:10.779184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.163 [2024-11-18 12:06:10.779219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.163 qpair failed and we were unable to recover it. 00:37:45.163 [2024-11-18 12:06:10.779370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.163 [2024-11-18 12:06:10.779418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.163 qpair failed and we were unable to recover it. 
00:37:45.163 [2024-11-18 12:06:10.779586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.163 [2024-11-18 12:06:10.779635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.163 qpair failed and we were unable to recover it. 00:37:45.163 [2024-11-18 12:06:10.779759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.163 [2024-11-18 12:06:10.779795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.163 qpair failed and we were unable to recover it. 00:37:45.163 [2024-11-18 12:06:10.779937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.163 [2024-11-18 12:06:10.779971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.163 qpair failed and we were unable to recover it. 00:37:45.163 [2024-11-18 12:06:10.780118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.163 [2024-11-18 12:06:10.780154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.163 qpair failed and we were unable to recover it. 00:37:45.163 [2024-11-18 12:06:10.780304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.163 [2024-11-18 12:06:10.780340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.163 qpair failed and we were unable to recover it. 
00:37:45.163 [2024-11-18 12:06:10.780476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.163 [2024-11-18 12:06:10.780524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.163 qpair failed and we were unable to recover it. 00:37:45.163 [2024-11-18 12:06:10.780726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.163 [2024-11-18 12:06:10.780779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.163 qpair failed and we were unable to recover it. 00:37:45.163 [2024-11-18 12:06:10.780947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.163 [2024-11-18 12:06:10.780988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.163 qpair failed and we were unable to recover it. 00:37:45.163 [2024-11-18 12:06:10.781136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.163 [2024-11-18 12:06:10.781176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.163 qpair failed and we were unable to recover it. 00:37:45.163 [2024-11-18 12:06:10.781336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.163 [2024-11-18 12:06:10.781383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.163 qpair failed and we were unable to recover it. 
00:37:45.163 [2024-11-18 12:06:10.781511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.163 [2024-11-18 12:06:10.781565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.163 qpair failed and we were unable to recover it. 00:37:45.163 [2024-11-18 12:06:10.781687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.163 [2024-11-18 12:06:10.781736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.163 qpair failed and we were unable to recover it. 00:37:45.163 [2024-11-18 12:06:10.781983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.163 [2024-11-18 12:06:10.782023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.163 qpair failed and we were unable to recover it. 00:37:45.163 [2024-11-18 12:06:10.782178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.163 [2024-11-18 12:06:10.782216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.163 qpair failed and we were unable to recover it. 00:37:45.163 [2024-11-18 12:06:10.782371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.163 [2024-11-18 12:06:10.782409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.163 qpair failed and we were unable to recover it. 
00:37:45.163 [2024-11-18 12:06:10.782587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.163 [2024-11-18 12:06:10.782622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.163 qpair failed and we were unable to recover it. 00:37:45.163 [2024-11-18 12:06:10.782840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.163 [2024-11-18 12:06:10.782895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.163 qpair failed and we were unable to recover it. 00:37:45.163 [2024-11-18 12:06:10.783010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.163 [2024-11-18 12:06:10.783048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.163 qpair failed and we were unable to recover it. 00:37:45.163 [2024-11-18 12:06:10.783194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.163 [2024-11-18 12:06:10.783232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.163 qpair failed and we were unable to recover it. 00:37:45.163 [2024-11-18 12:06:10.783363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.163 [2024-11-18 12:06:10.783414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.163 qpair failed and we were unable to recover it. 
00:37:45.164 [2024-11-18 12:06:10.783525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.164 [2024-11-18 12:06:10.783560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.164 qpair failed and we were unable to recover it. 00:37:45.164 [2024-11-18 12:06:10.783698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.164 [2024-11-18 12:06:10.783732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.164 qpair failed and we were unable to recover it. 00:37:45.164 [2024-11-18 12:06:10.783868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.164 [2024-11-18 12:06:10.783911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.164 qpair failed and we were unable to recover it. 00:37:45.164 [2024-11-18 12:06:10.784060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.164 [2024-11-18 12:06:10.784108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.164 qpair failed and we were unable to recover it. 00:37:45.164 [2024-11-18 12:06:10.784307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.164 [2024-11-18 12:06:10.784375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.164 qpair failed and we were unable to recover it. 
00:37:45.164 [2024-11-18 12:06:10.784486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.164 [2024-11-18 12:06:10.784530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.164 qpair failed and we were unable to recover it. 00:37:45.164 [2024-11-18 12:06:10.784641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.164 [2024-11-18 12:06:10.784677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.164 qpair failed and we were unable to recover it. 00:37:45.164 [2024-11-18 12:06:10.784842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.164 [2024-11-18 12:06:10.784889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.164 qpair failed and we were unable to recover it. 00:37:45.164 [2024-11-18 12:06:10.785079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.164 [2024-11-18 12:06:10.785141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.164 qpair failed and we were unable to recover it. 00:37:45.164 [2024-11-18 12:06:10.785298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.164 [2024-11-18 12:06:10.785340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.164 qpair failed and we were unable to recover it. 
00:37:45.164 [2024-11-18 12:06:10.785464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.164 [2024-11-18 12:06:10.785512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.164 qpair failed and we were unable to recover it. 00:37:45.164 [2024-11-18 12:06:10.785639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.164 [2024-11-18 12:06:10.785675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.164 qpair failed and we were unable to recover it. 00:37:45.164 [2024-11-18 12:06:10.785813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.164 [2024-11-18 12:06:10.785861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.164 qpair failed and we were unable to recover it. 00:37:45.164 [2024-11-18 12:06:10.786060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.164 [2024-11-18 12:06:10.786128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.164 qpair failed and we were unable to recover it. 00:37:45.164 [2024-11-18 12:06:10.786318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.164 [2024-11-18 12:06:10.786379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.164 qpair failed and we were unable to recover it. 
00:37:45.164 [2024-11-18 12:06:10.786546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.164 [2024-11-18 12:06:10.786581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.164 qpair failed and we were unable to recover it. 00:37:45.164 [2024-11-18 12:06:10.786703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.164 [2024-11-18 12:06:10.786737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.164 qpair failed and we were unable to recover it. 00:37:45.164 [2024-11-18 12:06:10.786843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.164 [2024-11-18 12:06:10.786896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.164 qpair failed and we were unable to recover it. 00:37:45.164 [2024-11-18 12:06:10.787052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.164 [2024-11-18 12:06:10.787092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.164 qpair failed and we were unable to recover it. 00:37:45.164 [2024-11-18 12:06:10.787255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.164 [2024-11-18 12:06:10.787325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.164 qpair failed and we were unable to recover it. 
00:37:45.164 [2024-11-18 12:06:10.787494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.164 [2024-11-18 12:06:10.787547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.164 qpair failed and we were unable to recover it. 00:37:45.164 [2024-11-18 12:06:10.787666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.164 [2024-11-18 12:06:10.787703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.164 qpair failed and we were unable to recover it. 00:37:45.164 [2024-11-18 12:06:10.787855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.164 [2024-11-18 12:06:10.787890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.164 qpair failed and we were unable to recover it. 00:37:45.164 [2024-11-18 12:06:10.788099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.164 [2024-11-18 12:06:10.788139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.164 qpair failed and we were unable to recover it. 00:37:45.164 [2024-11-18 12:06:10.788320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.164 [2024-11-18 12:06:10.788380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.164 qpair failed and we were unable to recover it. 
00:37:45.164 [2024-11-18 12:06:10.788521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.164 [2024-11-18 12:06:10.788557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.164 qpair failed and we were unable to recover it. 00:37:45.164 [2024-11-18 12:06:10.788704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.164 [2024-11-18 12:06:10.788742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.164 qpair failed and we were unable to recover it. 00:37:45.164 [2024-11-18 12:06:10.788923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.164 [2024-11-18 12:06:10.788991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.164 qpair failed and we were unable to recover it. 00:37:45.164 [2024-11-18 12:06:10.789100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.164 [2024-11-18 12:06:10.789135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.164 qpair failed and we were unable to recover it. 00:37:45.164 [2024-11-18 12:06:10.789317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.164 [2024-11-18 12:06:10.789379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.164 qpair failed and we were unable to recover it. 
00:37:45.164 [2024-11-18 12:06:10.789569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.164 [2024-11-18 12:06:10.789618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.164 qpair failed and we were unable to recover it. 00:37:45.164 [2024-11-18 12:06:10.789757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.164 [2024-11-18 12:06:10.789814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.164 qpair failed and we were unable to recover it. 00:37:45.164 [2024-11-18 12:06:10.790007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.164 [2024-11-18 12:06:10.790050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.164 qpair failed and we were unable to recover it. 00:37:45.164 [2024-11-18 12:06:10.790269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.164 [2024-11-18 12:06:10.790327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.164 qpair failed and we were unable to recover it. 00:37:45.164 [2024-11-18 12:06:10.790447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.164 [2024-11-18 12:06:10.790487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.164 qpair failed and we were unable to recover it. 
00:37:45.164 [2024-11-18 12:06:10.790650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.164 [2024-11-18 12:06:10.790704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.164 qpair failed and we were unable to recover it. 00:37:45.164 [2024-11-18 12:06:10.790850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.164 [2024-11-18 12:06:10.790922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.164 qpair failed and we were unable to recover it. 00:37:45.164 [2024-11-18 12:06:10.791133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.165 [2024-11-18 12:06:10.791170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.165 qpair failed and we were unable to recover it. 00:37:45.165 [2024-11-18 12:06:10.791336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.165 [2024-11-18 12:06:10.791398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.165 qpair failed and we were unable to recover it. 00:37:45.165 [2024-11-18 12:06:10.791553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.165 [2024-11-18 12:06:10.791592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.165 qpair failed and we were unable to recover it. 
00:37:45.165 [2024-11-18 12:06:10.791707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.165 [2024-11-18 12:06:10.791746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.165 qpair failed and we were unable to recover it. 00:37:45.165 [2024-11-18 12:06:10.791873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.165 [2024-11-18 12:06:10.791918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.165 qpair failed and we were unable to recover it. 00:37:45.165 [2024-11-18 12:06:10.792101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.165 [2024-11-18 12:06:10.792170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.165 qpair failed and we were unable to recover it. 00:37:45.165 [2024-11-18 12:06:10.792299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.165 [2024-11-18 12:06:10.792333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.165 qpair failed and we were unable to recover it. 00:37:45.165 [2024-11-18 12:06:10.792462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.165 [2024-11-18 12:06:10.792504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.165 qpair failed and we were unable to recover it. 
00:37:45.165 [2024-11-18 12:06:10.792687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.165 [2024-11-18 12:06:10.792725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.165 qpair failed and we were unable to recover it. 00:37:45.165 [2024-11-18 12:06:10.792873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.165 [2024-11-18 12:06:10.792911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.165 qpair failed and we were unable to recover it. 00:37:45.165 [2024-11-18 12:06:10.793026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.165 [2024-11-18 12:06:10.793066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.165 qpair failed and we were unable to recover it. 00:37:45.165 [2024-11-18 12:06:10.793247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.165 [2024-11-18 12:06:10.793302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.165 qpair failed and we were unable to recover it. 00:37:45.165 [2024-11-18 12:06:10.793433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.165 [2024-11-18 12:06:10.793485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.165 qpair failed and we were unable to recover it. 
00:37:45.165 [2024-11-18 12:06:10.793638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.165 [2024-11-18 12:06:10.793686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.165 qpair failed and we were unable to recover it. 00:37:45.165 [2024-11-18 12:06:10.793856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.165 [2024-11-18 12:06:10.793894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.165 qpair failed and we were unable to recover it. 00:37:45.165 [2024-11-18 12:06:10.794090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.165 [2024-11-18 12:06:10.794151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.165 qpair failed and we were unable to recover it. 00:37:45.165 [2024-11-18 12:06:10.794323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.165 [2024-11-18 12:06:10.794362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.165 qpair failed and we were unable to recover it. 00:37:45.165 [2024-11-18 12:06:10.794473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.165 [2024-11-18 12:06:10.794537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.165 qpair failed and we were unable to recover it. 
00:37:45.165 [2024-11-18 12:06:10.794676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.165 [2024-11-18 12:06:10.794710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.165 qpair failed and we were unable to recover it.
00:37:45.165 [2024-11-18 12:06:10.794881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.165 [2024-11-18 12:06:10.794958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.165 qpair failed and we were unable to recover it.
00:37:45.165 [2024-11-18 12:06:10.795186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.165 [2024-11-18 12:06:10.795247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.165 qpair failed and we were unable to recover it.
00:37:45.165 [2024-11-18 12:06:10.795359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.165 [2024-11-18 12:06:10.795395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.165 qpair failed and we were unable to recover it.
00:37:45.165 [2024-11-18 12:06:10.795549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.165 [2024-11-18 12:06:10.795604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.165 qpair failed and we were unable to recover it.
00:37:45.165 [2024-11-18 12:06:10.795737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.165 [2024-11-18 12:06:10.795784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.165 qpair failed and we were unable to recover it.
00:37:45.165 [2024-11-18 12:06:10.795935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.165 [2024-11-18 12:06:10.795971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.165 qpair failed and we were unable to recover it.
00:37:45.165 [2024-11-18 12:06:10.796110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.165 [2024-11-18 12:06:10.796145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.165 qpair failed and we were unable to recover it.
00:37:45.165 [2024-11-18 12:06:10.796280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.165 [2024-11-18 12:06:10.796314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.165 qpair failed and we were unable to recover it.
00:37:45.165 [2024-11-18 12:06:10.796442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.165 [2024-11-18 12:06:10.796476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.165 qpair failed and we were unable to recover it.
00:37:45.165 [2024-11-18 12:06:10.796587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.165 [2024-11-18 12:06:10.796622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.165 qpair failed and we were unable to recover it.
00:37:45.165 [2024-11-18 12:06:10.796758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.165 [2024-11-18 12:06:10.796810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.165 qpair failed and we were unable to recover it.
00:37:45.165 [2024-11-18 12:06:10.796947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.165 [2024-11-18 12:06:10.797000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.165 qpair failed and we were unable to recover it.
00:37:45.165 [2024-11-18 12:06:10.797155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.165 [2024-11-18 12:06:10.797213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.165 qpair failed and we were unable to recover it.
00:37:45.165 [2024-11-18 12:06:10.797349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.165 [2024-11-18 12:06:10.797383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.165 qpair failed and we were unable to recover it.
00:37:45.165 [2024-11-18 12:06:10.797516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.165 [2024-11-18 12:06:10.797551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.165 qpair failed and we were unable to recover it.
00:37:45.165 [2024-11-18 12:06:10.797656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.165 [2024-11-18 12:06:10.797691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.165 qpair failed and we were unable to recover it.
00:37:45.165 [2024-11-18 12:06:10.797868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.165 [2024-11-18 12:06:10.797901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.165 qpair failed and we were unable to recover it.
00:37:45.165 [2024-11-18 12:06:10.798041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.165 [2024-11-18 12:06:10.798074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.165 qpair failed and we were unable to recover it.
00:37:45.165 [2024-11-18 12:06:10.798218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.165 [2024-11-18 12:06:10.798252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.166 qpair failed and we were unable to recover it.
00:37:45.166 [2024-11-18 12:06:10.798405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.166 [2024-11-18 12:06:10.798441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.166 qpair failed and we were unable to recover it.
00:37:45.166 [2024-11-18 12:06:10.798582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.166 [2024-11-18 12:06:10.798631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.166 qpair failed and we were unable to recover it.
00:37:45.166 [2024-11-18 12:06:10.798746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.166 [2024-11-18 12:06:10.798782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.166 qpair failed and we were unable to recover it.
00:37:45.166 [2024-11-18 12:06:10.798892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.166 [2024-11-18 12:06:10.798927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.166 qpair failed and we were unable to recover it.
00:37:45.166 [2024-11-18 12:06:10.799046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.166 [2024-11-18 12:06:10.799084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.166 qpair failed and we were unable to recover it.
00:37:45.166 [2024-11-18 12:06:10.799280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.166 [2024-11-18 12:06:10.799339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.166 qpair failed and we were unable to recover it.
00:37:45.166 [2024-11-18 12:06:10.799542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.166 [2024-11-18 12:06:10.799578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.166 qpair failed and we were unable to recover it.
00:37:45.166 [2024-11-18 12:06:10.799720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.166 [2024-11-18 12:06:10.799761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.166 qpair failed and we were unable to recover it.
00:37:45.166 [2024-11-18 12:06:10.799917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.166 [2024-11-18 12:06:10.799956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.166 qpair failed and we were unable to recover it.
00:37:45.166 [2024-11-18 12:06:10.800115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.166 [2024-11-18 12:06:10.800178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.166 qpair failed and we were unable to recover it.
00:37:45.166 [2024-11-18 12:06:10.800404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.166 [2024-11-18 12:06:10.800442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.166 qpair failed and we were unable to recover it.
00:37:45.166 [2024-11-18 12:06:10.800599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.166 [2024-11-18 12:06:10.800633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.166 qpair failed and we were unable to recover it.
00:37:45.166 [2024-11-18 12:06:10.800746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.166 [2024-11-18 12:06:10.800796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.166 qpair failed and we were unable to recover it.
00:37:45.166 [2024-11-18 12:06:10.800969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.166 [2024-11-18 12:06:10.801007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.166 qpair failed and we were unable to recover it.
00:37:45.166 [2024-11-18 12:06:10.801149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.166 [2024-11-18 12:06:10.801186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.166 qpair failed and we were unable to recover it.
00:37:45.166 [2024-11-18 12:06:10.801365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.166 [2024-11-18 12:06:10.801430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.166 qpair failed and we were unable to recover it.
00:37:45.166 [2024-11-18 12:06:10.801628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.166 [2024-11-18 12:06:10.801677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.166 qpair failed and we were unable to recover it.
00:37:45.166 [2024-11-18 12:06:10.801818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.166 [2024-11-18 12:06:10.801890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.166 qpair failed and we were unable to recover it.
00:37:45.166 [2024-11-18 12:06:10.802076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.166 [2024-11-18 12:06:10.802143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.166 qpair failed and we were unable to recover it.
00:37:45.166 [2024-11-18 12:06:10.802318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.166 [2024-11-18 12:06:10.802356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.166 qpair failed and we were unable to recover it.
00:37:45.166 [2024-11-18 12:06:10.802515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.166 [2024-11-18 12:06:10.802571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.166 qpair failed and we were unable to recover it.
00:37:45.166 [2024-11-18 12:06:10.803464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.166 [2024-11-18 12:06:10.803544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.166 qpair failed and we were unable to recover it.
00:37:45.166 [2024-11-18 12:06:10.803689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.166 [2024-11-18 12:06:10.803725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.166 qpair failed and we were unable to recover it.
00:37:45.166 [2024-11-18 12:06:10.803875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.166 [2024-11-18 12:06:10.803910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.166 qpair failed and we were unable to recover it.
00:37:45.166 [2024-11-18 12:06:10.804050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.166 [2024-11-18 12:06:10.804102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.166 qpair failed and we were unable to recover it.
00:37:45.166 [2024-11-18 12:06:10.804268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.166 [2024-11-18 12:06:10.804306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.166 qpair failed and we were unable to recover it.
00:37:45.166 [2024-11-18 12:06:10.804439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.166 [2024-11-18 12:06:10.804503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.166 qpair failed and we were unable to recover it.
00:37:45.166 [2024-11-18 12:06:10.804655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.166 [2024-11-18 12:06:10.804689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.166 qpair failed and we were unable to recover it.
00:37:45.166 [2024-11-18 12:06:10.804845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.166 [2024-11-18 12:06:10.804883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.166 qpair failed and we were unable to recover it.
00:37:45.166 [2024-11-18 12:06:10.805016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.166 [2024-11-18 12:06:10.805068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.166 qpair failed and we were unable to recover it.
00:37:45.166 [2024-11-18 12:06:10.805219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.166 [2024-11-18 12:06:10.805256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.166 qpair failed and we were unable to recover it.
00:37:45.166 [2024-11-18 12:06:10.805395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.166 [2024-11-18 12:06:10.805450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.166 qpair failed and we were unable to recover it.
00:37:45.166 [2024-11-18 12:06:10.805618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.166 [2024-11-18 12:06:10.805666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.166 qpair failed and we were unable to recover it.
00:37:45.166 [2024-11-18 12:06:10.805858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.166 [2024-11-18 12:06:10.805895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.166 qpair failed and we were unable to recover it.
00:37:45.166 [2024-11-18 12:06:10.806071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.166 [2024-11-18 12:06:10.806110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.166 qpair failed and we were unable to recover it.
00:37:45.166 [2024-11-18 12:06:10.806287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.166 [2024-11-18 12:06:10.806325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.167 qpair failed and we were unable to recover it.
00:37:45.167 [2024-11-18 12:06:10.806499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.167 [2024-11-18 12:06:10.806558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.167 qpair failed and we were unable to recover it.
00:37:45.167 [2024-11-18 12:06:10.806683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.167 [2024-11-18 12:06:10.806721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.167 qpair failed and we were unable to recover it.
00:37:45.167 [2024-11-18 12:06:10.806887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.167 [2024-11-18 12:06:10.806935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.167 qpair failed and we were unable to recover it.
00:37:45.167 [2024-11-18 12:06:10.807051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.167 [2024-11-18 12:06:10.807088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.167 qpair failed and we were unable to recover it.
00:37:45.167 [2024-11-18 12:06:10.807241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.167 [2024-11-18 12:06:10.807295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.167 qpair failed and we were unable to recover it.
00:37:45.167 [2024-11-18 12:06:10.807430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.167 [2024-11-18 12:06:10.807464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.167 qpair failed and we were unable to recover it.
00:37:45.167 [2024-11-18 12:06:10.807607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.167 [2024-11-18 12:06:10.807652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.167 qpair failed and we were unable to recover it.
00:37:45.167 [2024-11-18 12:06:10.807814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.167 [2024-11-18 12:06:10.807852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.167 qpair failed and we were unable to recover it.
00:37:45.167 [2024-11-18 12:06:10.808008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.167 [2024-11-18 12:06:10.808058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.167 qpair failed and we were unable to recover it.
00:37:45.167 [2024-11-18 12:06:10.808243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.167 [2024-11-18 12:06:10.808308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.167 qpair failed and we were unable to recover it.
00:37:45.167 [2024-11-18 12:06:10.808465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.167 [2024-11-18 12:06:10.808520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.167 qpair failed and we were unable to recover it.
00:37:45.167 [2024-11-18 12:06:10.808649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.167 [2024-11-18 12:06:10.808703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.167 qpair failed and we were unable to recover it.
00:37:45.167 [2024-11-18 12:06:10.808836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.167 [2024-11-18 12:06:10.808873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.167 qpair failed and we were unable to recover it.
00:37:45.167 [2024-11-18 12:06:10.809018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.167 [2024-11-18 12:06:10.809054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.167 qpair failed and we were unable to recover it.
00:37:45.167 [2024-11-18 12:06:10.809225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.167 [2024-11-18 12:06:10.809264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.167 qpair failed and we were unable to recover it.
00:37:45.167 [2024-11-18 12:06:10.809868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.167 [2024-11-18 12:06:10.809921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.167 qpair failed and we were unable to recover it.
00:37:45.167 [2024-11-18 12:06:10.810109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.167 [2024-11-18 12:06:10.810164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.167 qpair failed and we were unable to recover it.
00:37:45.167 [2024-11-18 12:06:10.810411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.167 [2024-11-18 12:06:10.810447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.167 qpair failed and we were unable to recover it.
00:37:45.167 [2024-11-18 12:06:10.810583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.167 [2024-11-18 12:06:10.810618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.167 qpair failed and we were unable to recover it.
00:37:45.167 [2024-11-18 12:06:10.810755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.167 [2024-11-18 12:06:10.810815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.167 qpair failed and we were unable to recover it.
00:37:45.167 [2024-11-18 12:06:10.811081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.167 [2024-11-18 12:06:10.811119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.167 qpair failed and we were unable to recover it.
00:37:45.167 [2024-11-18 12:06:10.811250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.167 [2024-11-18 12:06:10.811288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.167 qpair failed and we were unable to recover it.
00:37:45.167 [2024-11-18 12:06:10.811462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.167 [2024-11-18 12:06:10.811529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.167 qpair failed and we were unable to recover it.
00:37:45.167 [2024-11-18 12:06:10.811746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.167 [2024-11-18 12:06:10.811795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.167 qpair failed and we were unable to recover it.
00:37:45.167 [2024-11-18 12:06:10.811971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.167 [2024-11-18 12:06:10.812052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.167 qpair failed and we were unable to recover it.
00:37:45.167 [2024-11-18 12:06:10.812222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.167 [2024-11-18 12:06:10.812281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.167 qpair failed and we were unable to recover it.
00:37:45.167 [2024-11-18 12:06:10.812445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.167 [2024-11-18 12:06:10.812485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.167 qpair failed and we were unable to recover it.
00:37:45.167 [2024-11-18 12:06:10.812609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.167 [2024-11-18 12:06:10.812644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.167 qpair failed and we were unable to recover it.
00:37:45.167 [2024-11-18 12:06:10.812794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.167 [2024-11-18 12:06:10.812834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.167 qpair failed and we were unable to recover it.
00:37:45.167 [2024-11-18 12:06:10.813064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.167 [2024-11-18 12:06:10.813138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.167 qpair failed and we were unable to recover it.
00:37:45.167 [2024-11-18 12:06:10.813389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.167 [2024-11-18 12:06:10.813479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.167 qpair failed and we were unable to recover it.
00:37:45.167 [2024-11-18 12:06:10.813663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.167 [2024-11-18 12:06:10.813700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.167 qpair failed and we were unable to recover it.
00:37:45.167 [2024-11-18 12:06:10.813848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.167 [2024-11-18 12:06:10.813882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.167 qpair failed and we were unable to recover it.
00:37:45.167 [2024-11-18 12:06:10.814003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.167 [2024-11-18 12:06:10.814038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.167 qpair failed and we were unable to recover it.
00:37:45.167 [2024-11-18 12:06:10.814213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.167 [2024-11-18 12:06:10.814272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.167 qpair failed and we were unable to recover it.
00:37:45.167 [2024-11-18 12:06:10.814405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.167 [2024-11-18 12:06:10.814439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.167 qpair failed and we were unable to recover it.
00:37:45.167 [2024-11-18 12:06:10.814621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.168 [2024-11-18 12:06:10.814670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.168 qpair failed and we were unable to recover it.
00:37:45.168 [2024-11-18 12:06:10.814850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.168 [2024-11-18 12:06:10.814891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.168 qpair failed and we were unable to recover it.
00:37:45.168 [2024-11-18 12:06:10.815212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.168 [2024-11-18 12:06:10.815278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.168 qpair failed and we were unable to recover it.
00:37:45.168 [2024-11-18 12:06:10.815424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.168 [2024-11-18 12:06:10.815461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.168 qpair failed and we were unable to recover it.
00:37:45.168 [2024-11-18 12:06:10.815619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.168 [2024-11-18 12:06:10.815654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.168 qpair failed and we were unable to recover it.
00:37:45.168 [2024-11-18 12:06:10.815760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.168 [2024-11-18 12:06:10.815795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.168 qpair failed and we were unable to recover it.
00:37:45.168 [2024-11-18 12:06:10.815967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.168 [2024-11-18 12:06:10.816016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.168 qpair failed and we were unable to recover it.
00:37:45.168 [2024-11-18 12:06:10.816194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.168 [2024-11-18 12:06:10.816251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.168 qpair failed and we were unable to recover it.
00:37:45.168 [2024-11-18 12:06:10.816421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.168 [2024-11-18 12:06:10.816455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.168 qpair failed and we were unable to recover it.
00:37:45.168 [2024-11-18 12:06:10.816618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.168 [2024-11-18 12:06:10.816666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.168 qpair failed and we were unable to recover it.
00:37:45.168 [2024-11-18 12:06:10.816822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.168 [2024-11-18 12:06:10.816875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.168 qpair failed and we were unable to recover it.
00:37:45.168 [2024-11-18 12:06:10.817062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.168 [2024-11-18 12:06:10.817141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.168 qpair failed and we were unable to recover it.
00:37:45.168 [2024-11-18 12:06:10.817339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.168 [2024-11-18 12:06:10.817396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.168 qpair failed and we were unable to recover it.
00:37:45.168 [2024-11-18 12:06:10.817525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.168 [2024-11-18 12:06:10.817581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.168 qpair failed and we were unable to recover it.
00:37:45.168 [2024-11-18 12:06:10.817718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.168 [2024-11-18 12:06:10.817752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.168 qpair failed and we were unable to recover it.
00:37:45.168 [2024-11-18 12:06:10.817893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.168 [2024-11-18 12:06:10.817996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.168 qpair failed and we were unable to recover it.
00:37:45.168 [2024-11-18 12:06:10.818162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.168 [2024-11-18 12:06:10.818200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.168 qpair failed and we were unable to recover it.
00:37:45.168 [2024-11-18 12:06:10.818352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.168 [2024-11-18 12:06:10.818391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.168 qpair failed and we were unable to recover it.
00:37:45.168 [2024-11-18 12:06:10.818537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.168 [2024-11-18 12:06:10.818573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.168 qpair failed and we were unable to recover it.
00:37:45.168 [2024-11-18 12:06:10.818689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.168 [2024-11-18 12:06:10.818725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.168 qpair failed and we were unable to recover it.
00:37:45.168 [2024-11-18 12:06:10.818880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.168 [2024-11-18 12:06:10.818929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.168 qpair failed and we were unable to recover it.
00:37:45.168 [2024-11-18 12:06:10.819089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.168 [2024-11-18 12:06:10.819144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.168 qpair failed and we were unable to recover it.
00:37:45.168 [2024-11-18 12:06:10.819297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.168 [2024-11-18 12:06:10.819355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.168 qpair failed and we were unable to recover it.
00:37:45.168 [2024-11-18 12:06:10.819496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.168 [2024-11-18 12:06:10.819532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.168 qpair failed and we were unable to recover it. 00:37:45.168 [2024-11-18 12:06:10.819658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.168 [2024-11-18 12:06:10.819711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.168 qpair failed and we were unable to recover it. 00:37:45.168 [2024-11-18 12:06:10.819857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.168 [2024-11-18 12:06:10.819892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.168 qpair failed and we were unable to recover it. 00:37:45.168 [2024-11-18 12:06:10.819998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.168 [2024-11-18 12:06:10.820034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.168 qpair failed and we were unable to recover it. 00:37:45.168 [2024-11-18 12:06:10.820174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.168 [2024-11-18 12:06:10.820209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.168 qpair failed and we were unable to recover it. 
00:37:45.168 [2024-11-18 12:06:10.820325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.168 [2024-11-18 12:06:10.820359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.168 qpair failed and we were unable to recover it. 00:37:45.168 [2024-11-18 12:06:10.820505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.168 [2024-11-18 12:06:10.820540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.168 qpair failed and we were unable to recover it. 00:37:45.168 [2024-11-18 12:06:10.820675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.168 [2024-11-18 12:06:10.820709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.168 qpair failed and we were unable to recover it. 00:37:45.168 [2024-11-18 12:06:10.820857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.168 [2024-11-18 12:06:10.820890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.168 qpair failed and we were unable to recover it. 00:37:45.168 [2024-11-18 12:06:10.821000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.168 [2024-11-18 12:06:10.821034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.168 qpair failed and we were unable to recover it. 
00:37:45.168 [2024-11-18 12:06:10.821188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.169 [2024-11-18 12:06:10.821225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.169 qpair failed and we were unable to recover it. 00:37:45.169 [2024-11-18 12:06:10.821368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.169 [2024-11-18 12:06:10.821405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.169 qpair failed and we were unable to recover it. 00:37:45.169 [2024-11-18 12:06:10.821570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.169 [2024-11-18 12:06:10.821619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.169 qpair failed and we were unable to recover it. 00:37:45.169 [2024-11-18 12:06:10.821807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.169 [2024-11-18 12:06:10.821860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.169 qpair failed and we were unable to recover it. 00:37:45.169 [2024-11-18 12:06:10.822010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.169 [2024-11-18 12:06:10.822063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.169 qpair failed and we were unable to recover it. 
00:37:45.169 [2024-11-18 12:06:10.822233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.169 [2024-11-18 12:06:10.822271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.169 qpair failed and we were unable to recover it. 00:37:45.169 [2024-11-18 12:06:10.822391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.169 [2024-11-18 12:06:10.822428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.169 qpair failed and we were unable to recover it. 00:37:45.169 [2024-11-18 12:06:10.822592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.169 [2024-11-18 12:06:10.822630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.169 qpair failed and we were unable to recover it. 00:37:45.169 [2024-11-18 12:06:10.822778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.169 [2024-11-18 12:06:10.822815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.169 qpair failed and we were unable to recover it. 00:37:45.169 [2024-11-18 12:06:10.822997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.169 [2024-11-18 12:06:10.823035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.169 qpair failed and we were unable to recover it. 
00:37:45.169 [2024-11-18 12:06:10.823147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.169 [2024-11-18 12:06:10.823184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.169 qpair failed and we were unable to recover it. 00:37:45.169 [2024-11-18 12:06:10.823304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.169 [2024-11-18 12:06:10.823354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.169 qpair failed and we were unable to recover it. 00:37:45.169 [2024-11-18 12:06:10.823509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.169 [2024-11-18 12:06:10.823543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.169 qpair failed and we were unable to recover it. 00:37:45.169 [2024-11-18 12:06:10.823686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.169 [2024-11-18 12:06:10.823721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.169 qpair failed and we were unable to recover it. 00:37:45.169 [2024-11-18 12:06:10.823870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.169 [2024-11-18 12:06:10.823922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.169 qpair failed and we were unable to recover it. 
00:37:45.169 [2024-11-18 12:06:10.824073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.169 [2024-11-18 12:06:10.824128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.169 qpair failed and we were unable to recover it. 00:37:45.169 [2024-11-18 12:06:10.824236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.169 [2024-11-18 12:06:10.824270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.169 qpair failed and we were unable to recover it. 00:37:45.169 [2024-11-18 12:06:10.824426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.169 [2024-11-18 12:06:10.824481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.169 qpair failed and we were unable to recover it. 00:37:45.169 [2024-11-18 12:06:10.824625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.169 [2024-11-18 12:06:10.824673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.169 qpair failed and we were unable to recover it. 00:37:45.169 [2024-11-18 12:06:10.824788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.169 [2024-11-18 12:06:10.824823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.169 qpair failed and we were unable to recover it. 
00:37:45.169 [2024-11-18 12:06:10.824981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.169 [2024-11-18 12:06:10.825018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.169 qpair failed and we were unable to recover it. 00:37:45.169 [2024-11-18 12:06:10.825153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.169 [2024-11-18 12:06:10.825191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.169 qpair failed and we were unable to recover it. 00:37:45.169 [2024-11-18 12:06:10.825329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.169 [2024-11-18 12:06:10.825372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.169 qpair failed and we were unable to recover it. 00:37:45.169 [2024-11-18 12:06:10.825588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.169 [2024-11-18 12:06:10.825622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.169 qpair failed and we were unable to recover it. 00:37:45.169 [2024-11-18 12:06:10.825752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.169 [2024-11-18 12:06:10.825811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.169 qpair failed and we were unable to recover it. 
00:37:45.169 [2024-11-18 12:06:10.825964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.169 [2024-11-18 12:06:10.826003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.169 qpair failed and we were unable to recover it. 00:37:45.169 [2024-11-18 12:06:10.826145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.169 [2024-11-18 12:06:10.826183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.169 qpair failed and we were unable to recover it. 00:37:45.169 [2024-11-18 12:06:10.826316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.169 [2024-11-18 12:06:10.826349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.169 qpair failed and we were unable to recover it. 00:37:45.169 [2024-11-18 12:06:10.826486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.169 [2024-11-18 12:06:10.826531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.169 qpair failed and we were unable to recover it. 00:37:45.169 [2024-11-18 12:06:10.826665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.169 [2024-11-18 12:06:10.826699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.169 qpair failed and we were unable to recover it. 
00:37:45.169 [2024-11-18 12:06:10.826877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.169 [2024-11-18 12:06:10.826915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.169 qpair failed and we were unable to recover it. 00:37:45.169 [2024-11-18 12:06:10.827071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.169 [2024-11-18 12:06:10.827109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.169 qpair failed and we were unable to recover it. 00:37:45.169 [2024-11-18 12:06:10.827279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.169 [2024-11-18 12:06:10.827316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.169 qpair failed and we were unable to recover it. 00:37:45.169 [2024-11-18 12:06:10.827437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.169 [2024-11-18 12:06:10.827485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.169 qpair failed and we were unable to recover it. 00:37:45.169 [2024-11-18 12:06:10.827626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.169 [2024-11-18 12:06:10.827660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.169 qpair failed and we were unable to recover it. 
00:37:45.169 [2024-11-18 12:06:10.827766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.169 [2024-11-18 12:06:10.827800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.169 qpair failed and we were unable to recover it. 00:37:45.169 [2024-11-18 12:06:10.827983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.169 [2024-11-18 12:06:10.828021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.169 qpair failed and we were unable to recover it. 00:37:45.170 [2024-11-18 12:06:10.828157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.170 [2024-11-18 12:06:10.828195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.170 qpair failed and we were unable to recover it. 00:37:45.170 [2024-11-18 12:06:10.828305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.170 [2024-11-18 12:06:10.828342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.170 qpair failed and we were unable to recover it. 00:37:45.170 [2024-11-18 12:06:10.828541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.170 [2024-11-18 12:06:10.828575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.170 qpair failed and we were unable to recover it. 
00:37:45.170 [2024-11-18 12:06:10.828678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.170 [2024-11-18 12:06:10.828712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.170 qpair failed and we were unable to recover it. 00:37:45.170 [2024-11-18 12:06:10.828850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.170 [2024-11-18 12:06:10.828882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.170 qpair failed and we were unable to recover it. 00:37:45.170 [2024-11-18 12:06:10.828995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.170 [2024-11-18 12:06:10.829045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.170 qpair failed and we were unable to recover it. 00:37:45.170 [2024-11-18 12:06:10.829223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.170 [2024-11-18 12:06:10.829260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.170 qpair failed and we were unable to recover it. 00:37:45.170 [2024-11-18 12:06:10.829381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.170 [2024-11-18 12:06:10.829414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.170 qpair failed and we were unable to recover it. 
00:37:45.170 [2024-11-18 12:06:10.829577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.170 [2024-11-18 12:06:10.829625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.170 qpair failed and we were unable to recover it. 00:37:45.170 [2024-11-18 12:06:10.829755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.170 [2024-11-18 12:06:10.829811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.170 qpair failed and we were unable to recover it. 00:37:45.170 [2024-11-18 12:06:10.830011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.170 [2024-11-18 12:06:10.830049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.170 qpair failed and we were unable to recover it. 00:37:45.170 [2024-11-18 12:06:10.830225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.170 [2024-11-18 12:06:10.830262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.170 qpair failed and we were unable to recover it. 00:37:45.170 [2024-11-18 12:06:10.830426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.170 [2024-11-18 12:06:10.830487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.170 qpair failed and we were unable to recover it. 
00:37:45.170 [2024-11-18 12:06:10.830646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.170 [2024-11-18 12:06:10.830694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.170 qpair failed and we were unable to recover it. 00:37:45.170 [2024-11-18 12:06:10.830840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.170 [2024-11-18 12:06:10.830884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.170 qpair failed and we were unable to recover it. 00:37:45.170 [2024-11-18 12:06:10.831072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.170 [2024-11-18 12:06:10.831129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.170 qpair failed and we were unable to recover it. 00:37:45.170 [2024-11-18 12:06:10.831281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.170 [2024-11-18 12:06:10.831319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.170 qpair failed and we were unable to recover it. 00:37:45.170 [2024-11-18 12:06:10.831448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.170 [2024-11-18 12:06:10.831483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.170 qpair failed and we were unable to recover it. 
00:37:45.170 [2024-11-18 12:06:10.831667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.170 [2024-11-18 12:06:10.831701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.170 qpair failed and we were unable to recover it. 00:37:45.170 [2024-11-18 12:06:10.831882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.170 [2024-11-18 12:06:10.831935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.170 qpair failed and we were unable to recover it. 00:37:45.170 [2024-11-18 12:06:10.832082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.170 [2024-11-18 12:06:10.832140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.170 qpair failed and we were unable to recover it. 00:37:45.170 [2024-11-18 12:06:10.832262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.170 [2024-11-18 12:06:10.832302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.170 qpair failed and we were unable to recover it. 00:37:45.170 [2024-11-18 12:06:10.832500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.170 [2024-11-18 12:06:10.832535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.170 qpair failed and we were unable to recover it. 
00:37:45.170 [2024-11-18 12:06:10.832673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.170 [2024-11-18 12:06:10.832707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.170 qpair failed and we were unable to recover it. 00:37:45.170 [2024-11-18 12:06:10.832835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.170 [2024-11-18 12:06:10.832883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.170 qpair failed and we were unable to recover it. 00:37:45.170 [2024-11-18 12:06:10.833031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.170 [2024-11-18 12:06:10.833072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.170 qpair failed and we were unable to recover it. 00:37:45.170 [2024-11-18 12:06:10.833263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.170 [2024-11-18 12:06:10.833325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.170 qpair failed and we were unable to recover it. 00:37:45.170 [2024-11-18 12:06:10.833509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.170 [2024-11-18 12:06:10.833563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.170 qpair failed and we were unable to recover it. 
00:37:45.170 [2024-11-18 12:06:10.833675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.170 [2024-11-18 12:06:10.833709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.170 qpair failed and we were unable to recover it.
00:37:45.170 [2024-11-18 12:06:10.833920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.170 [2024-11-18 12:06:10.833986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.170 qpair failed and we were unable to recover it.
00:37:45.170 [2024-11-18 12:06:10.834225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.170 [2024-11-18 12:06:10.834281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.170 qpair failed and we were unable to recover it.
00:37:45.170 [2024-11-18 12:06:10.834437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.170 [2024-11-18 12:06:10.834472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.170 qpair failed and we were unable to recover it.
00:37:45.170 [2024-11-18 12:06:10.834597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.170 [2024-11-18 12:06:10.834631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.170 qpair failed and we were unable to recover it.
00:37:45.170 [2024-11-18 12:06:10.834810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.170 [2024-11-18 12:06:10.834862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.170 qpair failed and we were unable to recover it.
00:37:45.170 [2024-11-18 12:06:10.835017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.170 [2024-11-18 12:06:10.835071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.170 qpair failed and we were unable to recover it.
00:37:45.170 [2024-11-18 12:06:10.835225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.170 [2024-11-18 12:06:10.835280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.170 qpair failed and we were unable to recover it.
00:37:45.170 [2024-11-18 12:06:10.835427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.170 [2024-11-18 12:06:10.835461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.170 qpair failed and we were unable to recover it.
00:37:45.170 [2024-11-18 12:06:10.835623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.171 [2024-11-18 12:06:10.835670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.171 qpair failed and we were unable to recover it.
00:37:45.171 [2024-11-18 12:06:10.835819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.171 [2024-11-18 12:06:10.835855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.171 qpair failed and we were unable to recover it.
00:37:45.171 [2024-11-18 12:06:10.835993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.171 [2024-11-18 12:06:10.836027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.171 qpair failed and we were unable to recover it.
00:37:45.171 [2024-11-18 12:06:10.836185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.171 [2024-11-18 12:06:10.836219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.171 qpair failed and we were unable to recover it.
00:37:45.171 [2024-11-18 12:06:10.836331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.171 [2024-11-18 12:06:10.836365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.171 qpair failed and we were unable to recover it.
00:37:45.171 [2024-11-18 12:06:10.836506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.171 [2024-11-18 12:06:10.836541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.171 qpair failed and we were unable to recover it.
00:37:45.171 [2024-11-18 12:06:10.836676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.171 [2024-11-18 12:06:10.836710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.171 qpair failed and we were unable to recover it.
00:37:45.171 [2024-11-18 12:06:10.836921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.171 [2024-11-18 12:06:10.836962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.171 qpair failed and we were unable to recover it.
00:37:45.171 [2024-11-18 12:06:10.837128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.171 [2024-11-18 12:06:10.837162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.171 qpair failed and we were unable to recover it.
00:37:45.171 [2024-11-18 12:06:10.837328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.171 [2024-11-18 12:06:10.837365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.171 qpair failed and we were unable to recover it.
00:37:45.171 [2024-11-18 12:06:10.837485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.171 [2024-11-18 12:06:10.837554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.171 qpair failed and we were unable to recover it.
00:37:45.171 [2024-11-18 12:06:10.837708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.171 [2024-11-18 12:06:10.837757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.171 qpair failed and we were unable to recover it.
00:37:45.171 [2024-11-18 12:06:10.837942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.171 [2024-11-18 12:06:10.838008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.171 qpair failed and we were unable to recover it.
00:37:45.171 [2024-11-18 12:06:10.838146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.171 [2024-11-18 12:06:10.838184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.171 qpair failed and we were unable to recover it.
00:37:45.171 [2024-11-18 12:06:10.838356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.171 [2024-11-18 12:06:10.838393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.171 qpair failed and we were unable to recover it.
00:37:45.171 [2024-11-18 12:06:10.838570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.171 [2024-11-18 12:06:10.838609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.171 qpair failed and we were unable to recover it.
00:37:45.171 [2024-11-18 12:06:10.838753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.171 [2024-11-18 12:06:10.838786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.171 qpair failed and we were unable to recover it.
00:37:45.171 [2024-11-18 12:06:10.838942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.171 [2024-11-18 12:06:10.839005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.171 qpair failed and we were unable to recover it.
00:37:45.171 [2024-11-18 12:06:10.839159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.171 [2024-11-18 12:06:10.839197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.171 qpair failed and we were unable to recover it.
00:37:45.171 [2024-11-18 12:06:10.839352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.171 [2024-11-18 12:06:10.839389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.171 qpair failed and we were unable to recover it.
00:37:45.171 [2024-11-18 12:06:10.839531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.171 [2024-11-18 12:06:10.839595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.171 qpair failed and we were unable to recover it.
00:37:45.171 [2024-11-18 12:06:10.839753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.171 [2024-11-18 12:06:10.839808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.171 qpair failed and we were unable to recover it.
00:37:45.171 [2024-11-18 12:06:10.839942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.171 [2024-11-18 12:06:10.839981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.171 qpair failed and we were unable to recover it.
00:37:45.171 [2024-11-18 12:06:10.840178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.171 [2024-11-18 12:06:10.840239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.171 qpair failed and we were unable to recover it.
00:37:45.171 [2024-11-18 12:06:10.840412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.171 [2024-11-18 12:06:10.840450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.171 qpair failed and we were unable to recover it.
00:37:45.171 [2024-11-18 12:06:10.840640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.171 [2024-11-18 12:06:10.840681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.171 qpair failed and we were unable to recover it.
00:37:45.171 [2024-11-18 12:06:10.840842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.171 [2024-11-18 12:06:10.840881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.171 qpair failed and we were unable to recover it.
00:37:45.171 [2024-11-18 12:06:10.841055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.171 [2024-11-18 12:06:10.841092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.171 qpair failed and we were unable to recover it.
00:37:45.171 [2024-11-18 12:06:10.841264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.171 [2024-11-18 12:06:10.841302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.171 qpair failed and we were unable to recover it.
00:37:45.171 [2024-11-18 12:06:10.841436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.171 [2024-11-18 12:06:10.841470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.171 qpair failed and we were unable to recover it.
00:37:45.171 [2024-11-18 12:06:10.841642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.171 [2024-11-18 12:06:10.841690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.171 qpair failed and we were unable to recover it.
00:37:45.171 [2024-11-18 12:06:10.841830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.171 [2024-11-18 12:06:10.841865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.171 qpair failed and we were unable to recover it.
00:37:45.171 [2024-11-18 12:06:10.841992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.171 [2024-11-18 12:06:10.842029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.171 qpair failed and we were unable to recover it.
00:37:45.171 [2024-11-18 12:06:10.842220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.171 [2024-11-18 12:06:10.842280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.171 qpair failed and we were unable to recover it.
00:37:45.171 [2024-11-18 12:06:10.842464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.171 [2024-11-18 12:06:10.842515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.171 qpair failed and we were unable to recover it.
00:37:45.171 [2024-11-18 12:06:10.842696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.171 [2024-11-18 12:06:10.842734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.171 qpair failed and we were unable to recover it.
00:37:45.171 [2024-11-18 12:06:10.842918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.171 [2024-11-18 12:06:10.842987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.171 qpair failed and we were unable to recover it.
00:37:45.171 [2024-11-18 12:06:10.843237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.171 [2024-11-18 12:06:10.843295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.171 qpair failed and we were unable to recover it.
00:37:45.171 [2024-11-18 12:06:10.843484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.172 [2024-11-18 12:06:10.843524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.172 qpair failed and we were unable to recover it.
00:37:45.172 [2024-11-18 12:06:10.843677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.172 [2024-11-18 12:06:10.843711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.172 qpair failed and we were unable to recover it.
00:37:45.172 [2024-11-18 12:06:10.843861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.172 [2024-11-18 12:06:10.843910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.172 qpair failed and we were unable to recover it.
00:37:45.172 [2024-11-18 12:06:10.844062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.172 [2024-11-18 12:06:10.844100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.172 qpair failed and we were unable to recover it.
00:37:45.172 [2024-11-18 12:06:10.844247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.172 [2024-11-18 12:06:10.844284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.172 qpair failed and we were unable to recover it.
00:37:45.172 [2024-11-18 12:06:10.844451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.172 [2024-11-18 12:06:10.844544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.172 qpair failed and we were unable to recover it.
00:37:45.172 [2024-11-18 12:06:10.844667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.172 [2024-11-18 12:06:10.844703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.172 qpair failed and we were unable to recover it.
00:37:45.172 [2024-11-18 12:06:10.844889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.172 [2024-11-18 12:06:10.844945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.172 qpair failed and we were unable to recover it.
00:37:45.172 [2024-11-18 12:06:10.845054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.172 [2024-11-18 12:06:10.845088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.172 qpair failed and we were unable to recover it.
00:37:45.172 [2024-11-18 12:06:10.845263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.172 [2024-11-18 12:06:10.845321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.172 qpair failed and we were unable to recover it.
00:37:45.172 [2024-11-18 12:06:10.845462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.172 [2024-11-18 12:06:10.845509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.172 qpair failed and we were unable to recover it.
00:37:45.172 [2024-11-18 12:06:10.845657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.172 [2024-11-18 12:06:10.845712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.172 qpair failed and we were unable to recover it.
00:37:45.172 [2024-11-18 12:06:10.845868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.172 [2024-11-18 12:06:10.845916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.172 qpair failed and we were unable to recover it.
00:37:45.172 [2024-11-18 12:06:10.846034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.172 [2024-11-18 12:06:10.846071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.172 qpair failed and we were unable to recover it.
00:37:45.172 [2024-11-18 12:06:10.846256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.172 [2024-11-18 12:06:10.846292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.172 qpair failed and we were unable to recover it.
00:37:45.172 [2024-11-18 12:06:10.846454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.172 [2024-11-18 12:06:10.846512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.172 qpair failed and we were unable to recover it.
00:37:45.172 [2024-11-18 12:06:10.846674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.172 [2024-11-18 12:06:10.846722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.172 qpair failed and we were unable to recover it.
00:37:45.172 [2024-11-18 12:06:10.846948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.172 [2024-11-18 12:06:10.847012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.172 qpair failed and we were unable to recover it.
00:37:45.172 [2024-11-18 12:06:10.847146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.172 [2024-11-18 12:06:10.847199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.172 qpair failed and we were unable to recover it.
00:37:45.172 [2024-11-18 12:06:10.847346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.172 [2024-11-18 12:06:10.847396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.172 qpair failed and we were unable to recover it.
00:37:45.172 [2024-11-18 12:06:10.847562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.172 [2024-11-18 12:06:10.847610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.172 qpair failed and we were unable to recover it.
00:37:45.172 [2024-11-18 12:06:10.847737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.172 [2024-11-18 12:06:10.847784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.172 qpair failed and we were unable to recover it.
00:37:45.172 [2024-11-18 12:06:10.847926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.172 [2024-11-18 12:06:10.847993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.172 qpair failed and we were unable to recover it.
00:37:45.172 [2024-11-18 12:06:10.848226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.172 [2024-11-18 12:06:10.848282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.172 qpair failed and we were unable to recover it.
00:37:45.172 [2024-11-18 12:06:10.848462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.172 [2024-11-18 12:06:10.848514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.172 qpair failed and we were unable to recover it.
00:37:45.172 [2024-11-18 12:06:10.848646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.172 [2024-11-18 12:06:10.848700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.172 qpair failed and we were unable to recover it.
00:37:45.172 [2024-11-18 12:06:10.848914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.172 [2024-11-18 12:06:10.848971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.172 qpair failed and we were unable to recover it.
00:37:45.172 [2024-11-18 12:06:10.849229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.172 [2024-11-18 12:06:10.849330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.172 qpair failed and we were unable to recover it.
00:37:45.172 [2024-11-18 12:06:10.849484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.172 [2024-11-18 12:06:10.849529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.172 qpair failed and we were unable to recover it.
00:37:45.172 [2024-11-18 12:06:10.849657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.172 [2024-11-18 12:06:10.849710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.172 qpair failed and we were unable to recover it.
00:37:45.172 [2024-11-18 12:06:10.849908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.172 [2024-11-18 12:06:10.849960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.172 qpair failed and we were unable to recover it.
00:37:45.172 [2024-11-18 12:06:10.850124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.172 [2024-11-18 12:06:10.850176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.172 qpair failed and we were unable to recover it.
00:37:45.172 [2024-11-18 12:06:10.850313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.172 [2024-11-18 12:06:10.850347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.172 qpair failed and we were unable to recover it.
00:37:45.172 [2024-11-18 12:06:10.850481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.172 [2024-11-18 12:06:10.850524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.172 qpair failed and we were unable to recover it.
00:37:45.172 [2024-11-18 12:06:10.850632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.172 [2024-11-18 12:06:10.850664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.173 qpair failed and we were unable to recover it.
00:37:45.173 [2024-11-18 12:06:10.850788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.173 [2024-11-18 12:06:10.850824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.173 qpair failed and we were unable to recover it.
00:37:45.173 [2024-11-18 12:06:10.850971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.173 [2024-11-18 12:06:10.851009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.173 qpair failed and we were unable to recover it.
00:37:45.173 [2024-11-18 12:06:10.851154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.173 [2024-11-18 12:06:10.851190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.173 qpair failed and we were unable to recover it.
00:37:45.173 [2024-11-18 12:06:10.851305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.173 [2024-11-18 12:06:10.851342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.173 qpair failed and we were unable to recover it.
00:37:45.173 [2024-11-18 12:06:10.851485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.173 [2024-11-18 12:06:10.851546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.173 qpair failed and we were unable to recover it.
00:37:45.173 [2024-11-18 12:06:10.851661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.173 [2024-11-18 12:06:10.851709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.173 qpair failed and we were unable to recover it.
00:37:45.173 [2024-11-18 12:06:10.851886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.173 [2024-11-18 12:06:10.851941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.173 qpair failed and we were unable to recover it.
00:37:45.173 [2024-11-18 12:06:10.852086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.173 [2024-11-18 12:06:10.852176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.173 qpair failed and we were unable to recover it.
00:37:45.173 [2024-11-18 12:06:10.852372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.173 [2024-11-18 12:06:10.852407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.173 qpair failed and we were unable to recover it.
00:37:45.173 [2024-11-18 12:06:10.852587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.173 [2024-11-18 12:06:10.852641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.173 qpair failed and we were unable to recover it.
00:37:45.173 [2024-11-18 12:06:10.852794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.173 [2024-11-18 12:06:10.852851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.173 qpair failed and we were unable to recover it.
00:37:45.173 [2024-11-18 12:06:10.852988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.173 [2024-11-18 12:06:10.853023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.173 qpair failed and we were unable to recover it.
00:37:45.173 [2024-11-18 12:06:10.853160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.173 [2024-11-18 12:06:10.853194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.173 qpair failed and we were unable to recover it.
00:37:45.173 [2024-11-18 12:06:10.853340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.173 [2024-11-18 12:06:10.853375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.173 qpair failed and we were unable to recover it.
00:37:45.173 [2024-11-18 12:06:10.853524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.173 [2024-11-18 12:06:10.853559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.173 qpair failed and we were unable to recover it.
00:37:45.173 [2024-11-18 12:06:10.853706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.173 [2024-11-18 12:06:10.853742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.173 qpair failed and we were unable to recover it.
00:37:45.173 [2024-11-18 12:06:10.853887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.173 [2024-11-18 12:06:10.853954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.173 qpair failed and we were unable to recover it.
00:37:45.173 [2024-11-18 12:06:10.854113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.173 [2024-11-18 12:06:10.854171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.173 qpair failed and we were unable to recover it.
00:37:45.173 [2024-11-18 12:06:10.854301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.173 [2024-11-18 12:06:10.854338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.173 qpair failed and we were unable to recover it.
00:37:45.173 [2024-11-18 12:06:10.854456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.173 [2024-11-18 12:06:10.854502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.173 qpair failed and we were unable to recover it.
00:37:45.173 [2024-11-18 12:06:10.854678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.173 [2024-11-18 12:06:10.854715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.173 qpair failed and we were unable to recover it.
00:37:45.173 [2024-11-18 12:06:10.854871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.173 [2024-11-18 12:06:10.854909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.173 qpair failed and we were unable to recover it.
00:37:45.173 [2024-11-18 12:06:10.855049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.173 [2024-11-18 12:06:10.855093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.173 qpair failed and we were unable to recover it.
00:37:45.173 [2024-11-18 12:06:10.855301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.173 [2024-11-18 12:06:10.855356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.173 qpair failed and we were unable to recover it.
00:37:45.173 [2024-11-18 12:06:10.855494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.173 [2024-11-18 12:06:10.855529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.173 qpair failed and we were unable to recover it.
00:37:45.173 [2024-11-18 12:06:10.855652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.173 [2024-11-18 12:06:10.855705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.173 qpair failed and we were unable to recover it.
00:37:45.173 [2024-11-18 12:06:10.855888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.173 [2024-11-18 12:06:10.855939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.173 qpair failed and we were unable to recover it.
00:37:45.173 [2024-11-18 12:06:10.856068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.173 [2024-11-18 12:06:10.856135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.173 qpair failed and we were unable to recover it.
00:37:45.173 [2024-11-18 12:06:10.856269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.173 [2024-11-18 12:06:10.856303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.173 qpair failed and we were unable to recover it.
00:37:45.173 [2024-11-18 12:06:10.856431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.173 [2024-11-18 12:06:10.856465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.173 qpair failed and we were unable to recover it.
00:37:45.173 [2024-11-18 12:06:10.856640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.173 [2024-11-18 12:06:10.856674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.173 qpair failed and we were unable to recover it.
00:37:45.173 [2024-11-18 12:06:10.856791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.173 [2024-11-18 12:06:10.856825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.173 qpair failed and we were unable to recover it.
00:37:45.173 [2024-11-18 12:06:10.857014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.173 [2024-11-18 12:06:10.857075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.173 qpair failed and we were unable to recover it.
00:37:45.173 [2024-11-18 12:06:10.857269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.173 [2024-11-18 12:06:10.857328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.173 qpair failed and we were unable to recover it. 00:37:45.173 [2024-11-18 12:06:10.857464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.173 [2024-11-18 12:06:10.857512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.173 qpair failed and we were unable to recover it. 00:37:45.173 [2024-11-18 12:06:10.857660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.174 [2024-11-18 12:06:10.857714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.174 qpair failed and we were unable to recover it. 00:37:45.174 [2024-11-18 12:06:10.857951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.174 [2024-11-18 12:06:10.858005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.174 qpair failed and we were unable to recover it. 00:37:45.174 [2024-11-18 12:06:10.858215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.174 [2024-11-18 12:06:10.858272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.174 qpair failed and we were unable to recover it. 
00:37:45.174 [2024-11-18 12:06:10.858432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.174 [2024-11-18 12:06:10.858465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.174 qpair failed and we were unable to recover it. 00:37:45.174 [2024-11-18 12:06:10.858641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.174 [2024-11-18 12:06:10.858694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.174 qpair failed and we were unable to recover it. 00:37:45.174 [2024-11-18 12:06:10.858868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.174 [2024-11-18 12:06:10.858921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.174 qpair failed and we were unable to recover it. 00:37:45.174 [2024-11-18 12:06:10.859076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.174 [2024-11-18 12:06:10.859143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.174 qpair failed and we were unable to recover it. 00:37:45.174 [2024-11-18 12:06:10.859392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.174 [2024-11-18 12:06:10.859453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.174 qpair failed and we were unable to recover it. 
00:37:45.174 [2024-11-18 12:06:10.859608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.174 [2024-11-18 12:06:10.859644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.174 qpair failed and we were unable to recover it. 00:37:45.174 [2024-11-18 12:06:10.859782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.174 [2024-11-18 12:06:10.859859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.174 qpair failed and we were unable to recover it. 00:37:45.174 [2024-11-18 12:06:10.860083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.174 [2024-11-18 12:06:10.860149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.174 qpair failed and we were unable to recover it. 00:37:45.174 [2024-11-18 12:06:10.860405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.174 [2024-11-18 12:06:10.860462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.174 qpair failed and we were unable to recover it. 00:37:45.174 [2024-11-18 12:06:10.860604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.174 [2024-11-18 12:06:10.860638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.174 qpair failed and we were unable to recover it. 
00:37:45.174 [2024-11-18 12:06:10.860749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.174 [2024-11-18 12:06:10.860801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.174 qpair failed and we were unable to recover it. 00:37:45.174 [2024-11-18 12:06:10.860928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.174 [2024-11-18 12:06:10.860965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.174 qpair failed and we were unable to recover it. 00:37:45.174 [2024-11-18 12:06:10.861138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.174 [2024-11-18 12:06:10.861175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.174 qpair failed and we were unable to recover it. 00:37:45.174 [2024-11-18 12:06:10.861295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.174 [2024-11-18 12:06:10.861333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.174 qpair failed and we were unable to recover it. 00:37:45.174 [2024-11-18 12:06:10.861479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.174 [2024-11-18 12:06:10.861543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.174 qpair failed and we were unable to recover it. 
00:37:45.174 [2024-11-18 12:06:10.861676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.174 [2024-11-18 12:06:10.861725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.174 qpair failed and we were unable to recover it. 00:37:45.174 [2024-11-18 12:06:10.861894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.174 [2024-11-18 12:06:10.861951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.174 qpair failed and we were unable to recover it. 00:37:45.174 [2024-11-18 12:06:10.862110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.174 [2024-11-18 12:06:10.862164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.174 qpair failed and we were unable to recover it. 00:37:45.174 [2024-11-18 12:06:10.862312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.174 [2024-11-18 12:06:10.862357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.174 qpair failed and we were unable to recover it. 00:37:45.174 [2024-11-18 12:06:10.862477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.174 [2024-11-18 12:06:10.862522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.174 qpair failed and we were unable to recover it. 
00:37:45.174 [2024-11-18 12:06:10.862686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.174 [2024-11-18 12:06:10.862721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.174 qpair failed and we were unable to recover it. 00:37:45.174 [2024-11-18 12:06:10.862858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.174 [2024-11-18 12:06:10.862896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.174 qpair failed and we were unable to recover it. 00:37:45.174 [2024-11-18 12:06:10.863032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.174 [2024-11-18 12:06:10.863069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.174 qpair failed and we were unable to recover it. 00:37:45.174 [2024-11-18 12:06:10.863264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.174 [2024-11-18 12:06:10.863302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.174 qpair failed and we were unable to recover it. 00:37:45.174 [2024-11-18 12:06:10.863443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.174 [2024-11-18 12:06:10.863486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.174 qpair failed and we were unable to recover it. 
00:37:45.174 [2024-11-18 12:06:10.863649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.174 [2024-11-18 12:06:10.863691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.174 qpair failed and we were unable to recover it. 00:37:45.174 [2024-11-18 12:06:10.863844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.174 [2024-11-18 12:06:10.863880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.174 qpair failed and we were unable to recover it. 00:37:45.174 [2024-11-18 12:06:10.864009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.174 [2024-11-18 12:06:10.864067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.174 qpair failed and we were unable to recover it. 00:37:45.174 [2024-11-18 12:06:10.864279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.174 [2024-11-18 12:06:10.864334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.174 qpair failed and we were unable to recover it. 00:37:45.174 [2024-11-18 12:06:10.864502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.174 [2024-11-18 12:06:10.864537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.174 qpair failed and we were unable to recover it. 
00:37:45.174 [2024-11-18 12:06:10.864690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.174 [2024-11-18 12:06:10.864743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.174 qpair failed and we were unable to recover it. 00:37:45.174 [2024-11-18 12:06:10.864867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.174 [2024-11-18 12:06:10.864912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.174 qpair failed and we were unable to recover it. 00:37:45.174 [2024-11-18 12:06:10.865071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.174 [2024-11-18 12:06:10.865136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.174 qpair failed and we were unable to recover it. 00:37:45.174 [2024-11-18 12:06:10.865289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.174 [2024-11-18 12:06:10.865336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.174 qpair failed and we were unable to recover it. 00:37:45.174 [2024-11-18 12:06:10.865452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.174 [2024-11-18 12:06:10.865503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.174 qpair failed and we were unable to recover it. 
00:37:45.175 [2024-11-18 12:06:10.865696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.175 [2024-11-18 12:06:10.865750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.175 qpair failed and we were unable to recover it. 00:37:45.175 [2024-11-18 12:06:10.865886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.175 [2024-11-18 12:06:10.865932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.175 qpair failed and we were unable to recover it. 00:37:45.175 [2024-11-18 12:06:10.866169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.175 [2024-11-18 12:06:10.866227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.175 qpair failed and we were unable to recover it. 00:37:45.175 [2024-11-18 12:06:10.866389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.175 [2024-11-18 12:06:10.866428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.175 qpair failed and we were unable to recover it. 00:37:45.175 [2024-11-18 12:06:10.866592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.175 [2024-11-18 12:06:10.866629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.175 qpair failed and we were unable to recover it. 
00:37:45.175 [2024-11-18 12:06:10.866848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.175 [2024-11-18 12:06:10.866902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.175 qpair failed and we were unable to recover it. 00:37:45.175 [2024-11-18 12:06:10.867044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.175 [2024-11-18 12:06:10.867097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.175 qpair failed and we were unable to recover it. 00:37:45.175 [2024-11-18 12:06:10.867318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.175 [2024-11-18 12:06:10.867376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.175 qpair failed and we were unable to recover it. 00:37:45.175 [2024-11-18 12:06:10.867520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.175 [2024-11-18 12:06:10.867556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.175 qpair failed and we were unable to recover it. 00:37:45.175 [2024-11-18 12:06:10.867690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.175 [2024-11-18 12:06:10.867723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.175 qpair failed and we were unable to recover it. 
00:37:45.175 [2024-11-18 12:06:10.867885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.175 [2024-11-18 12:06:10.867923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.175 qpair failed and we were unable to recover it. 00:37:45.175 [2024-11-18 12:06:10.868070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.175 [2024-11-18 12:06:10.868135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.175 qpair failed and we were unable to recover it. 00:37:45.175 [2024-11-18 12:06:10.868263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.175 [2024-11-18 12:06:10.868297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.175 qpair failed and we were unable to recover it. 00:37:45.175 [2024-11-18 12:06:10.868455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.175 [2024-11-18 12:06:10.868501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.175 qpair failed and we were unable to recover it. 00:37:45.175 [2024-11-18 12:06:10.868641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.175 [2024-11-18 12:06:10.868677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.175 qpair failed and we were unable to recover it. 
00:37:45.175 [2024-11-18 12:06:10.868881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.175 [2024-11-18 12:06:10.868934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.175 qpair failed and we were unable to recover it. 00:37:45.175 [2024-11-18 12:06:10.869063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.175 [2024-11-18 12:06:10.869112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.175 qpair failed and we were unable to recover it. 00:37:45.175 [2024-11-18 12:06:10.869263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.175 [2024-11-18 12:06:10.869303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.175 qpair failed and we were unable to recover it. 00:37:45.175 [2024-11-18 12:06:10.869452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.175 [2024-11-18 12:06:10.869486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.175 qpair failed and we were unable to recover it. 00:37:45.175 [2024-11-18 12:06:10.869598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.175 [2024-11-18 12:06:10.869632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.175 qpair failed and we were unable to recover it. 
00:37:45.175 [2024-11-18 12:06:10.869779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.175 [2024-11-18 12:06:10.869817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.175 qpair failed and we were unable to recover it. 00:37:45.175 [2024-11-18 12:06:10.869990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.175 [2024-11-18 12:06:10.870028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.175 qpair failed and we were unable to recover it. 00:37:45.175 [2024-11-18 12:06:10.870151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.175 [2024-11-18 12:06:10.870202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.175 qpair failed and we were unable to recover it. 00:37:45.175 [2024-11-18 12:06:10.870351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.175 [2024-11-18 12:06:10.870390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.175 qpair failed and we were unable to recover it. 00:37:45.175 [2024-11-18 12:06:10.870555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.175 [2024-11-18 12:06:10.870592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.175 qpair failed and we were unable to recover it. 
00:37:45.175 [2024-11-18 12:06:10.870766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.175 [2024-11-18 12:06:10.870815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.175 qpair failed and we were unable to recover it. 00:37:45.175 [2024-11-18 12:06:10.871038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.175 [2024-11-18 12:06:10.871086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.175 qpair failed and we were unable to recover it. 00:37:45.175 [2024-11-18 12:06:10.871243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.175 [2024-11-18 12:06:10.871301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.175 qpair failed and we were unable to recover it. 00:37:45.175 [2024-11-18 12:06:10.871431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.175 [2024-11-18 12:06:10.871466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.175 qpair failed and we were unable to recover it. 00:37:45.175 [2024-11-18 12:06:10.871625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.175 [2024-11-18 12:06:10.871665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.175 qpair failed and we were unable to recover it. 
00:37:45.175 [2024-11-18 12:06:10.871777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.175 [2024-11-18 12:06:10.871816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.175 qpair failed and we were unable to recover it. 00:37:45.175 [2024-11-18 12:06:10.872011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.175 [2024-11-18 12:06:10.872070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.175 qpair failed and we were unable to recover it. 00:37:45.175 [2024-11-18 12:06:10.872259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.175 [2024-11-18 12:06:10.872319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.175 qpair failed and we were unable to recover it. 00:37:45.175 [2024-11-18 12:06:10.872512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.175 [2024-11-18 12:06:10.872579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.175 qpair failed and we were unable to recover it. 00:37:45.175 [2024-11-18 12:06:10.872699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.175 [2024-11-18 12:06:10.872736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.175 qpair failed and we were unable to recover it. 
00:37:45.175 [2024-11-18 12:06:10.872856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.175 [2024-11-18 12:06:10.872910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.175 qpair failed and we were unable to recover it. 00:37:45.175 [2024-11-18 12:06:10.873081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.175 [2024-11-18 12:06:10.873139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.176 qpair failed and we were unable to recover it. 00:37:45.176 [2024-11-18 12:06:10.873279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.176 [2024-11-18 12:06:10.873313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.176 qpair failed and we were unable to recover it. 00:37:45.176 [2024-11-18 12:06:10.873454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.176 [2024-11-18 12:06:10.873509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.176 qpair failed and we were unable to recover it. 00:37:45.176 [2024-11-18 12:06:10.873677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.176 [2024-11-18 12:06:10.873726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.176 qpair failed and we were unable to recover it. 
00:37:45.176 [2024-11-18 12:06:10.873916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.176 [2024-11-18 12:06:10.873953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.176 qpair failed and we were unable to recover it. 00:37:45.176 [2024-11-18 12:06:10.874143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.176 [2024-11-18 12:06:10.874201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.176 qpair failed and we were unable to recover it. 00:37:45.176 [2024-11-18 12:06:10.874345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.176 [2024-11-18 12:06:10.874381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.176 qpair failed and we were unable to recover it. 00:37:45.176 [2024-11-18 12:06:10.874528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.176 [2024-11-18 12:06:10.874564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.176 qpair failed and we were unable to recover it. 00:37:45.176 [2024-11-18 12:06:10.874722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.176 [2024-11-18 12:06:10.874761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.176 qpair failed and we were unable to recover it. 
00:37:45.176 [2024-11-18 12:06:10.874914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.176 [2024-11-18 12:06:10.874953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.176 qpair failed and we were unable to recover it. 00:37:45.176 [2024-11-18 12:06:10.875097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.176 [2024-11-18 12:06:10.875134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.176 qpair failed and we were unable to recover it. 00:37:45.176 [2024-11-18 12:06:10.875359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.176 [2024-11-18 12:06:10.875398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.176 qpair failed and we were unable to recover it. 00:37:45.176 [2024-11-18 12:06:10.875575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.176 [2024-11-18 12:06:10.875610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.176 qpair failed and we were unable to recover it. 00:37:45.176 [2024-11-18 12:06:10.875724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.176 [2024-11-18 12:06:10.875758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.176 qpair failed and we were unable to recover it. 
00:37:45.176 [2024-11-18 12:06:10.875920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.176 [2024-11-18 12:06:10.875958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.176 qpair failed and we were unable to recover it. 00:37:45.176 [2024-11-18 12:06:10.876146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.176 [2024-11-18 12:06:10.876184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.176 qpair failed and we were unable to recover it. 00:37:45.176 [2024-11-18 12:06:10.876305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.176 [2024-11-18 12:06:10.876343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.176 qpair failed and we were unable to recover it. 00:37:45.176 [2024-11-18 12:06:10.876519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.176 [2024-11-18 12:06:10.876556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.176 qpair failed and we were unable to recover it. 00:37:45.176 [2024-11-18 12:06:10.876687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.176 [2024-11-18 12:06:10.876742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.176 qpair failed and we were unable to recover it. 
00:37:45.176 [2024-11-18 12:06:10.876897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.176 [2024-11-18 12:06:10.876938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.176 qpair failed and we were unable to recover it. 00:37:45.176 [2024-11-18 12:06:10.877115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.176 [2024-11-18 12:06:10.877154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.176 qpair failed and we were unable to recover it. 00:37:45.176 [2024-11-18 12:06:10.877357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.176 [2024-11-18 12:06:10.877395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.176 qpair failed and we were unable to recover it. 00:37:45.176 [2024-11-18 12:06:10.877567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.176 [2024-11-18 12:06:10.877616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.176 qpair failed and we were unable to recover it. 00:37:45.176 [2024-11-18 12:06:10.877733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.176 [2024-11-18 12:06:10.877768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.176 qpair failed and we were unable to recover it. 
00:37:45.176 [2024-11-18 12:06:10.877883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.176 [2024-11-18 12:06:10.877921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.176 qpair failed and we were unable to recover it. 00:37:45.176 [2024-11-18 12:06:10.878111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.176 [2024-11-18 12:06:10.878171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.176 qpair failed and we were unable to recover it. 00:37:45.176 [2024-11-18 12:06:10.878359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.176 [2024-11-18 12:06:10.878422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.176 qpair failed and we were unable to recover it. 00:37:45.176 [2024-11-18 12:06:10.878606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.176 [2024-11-18 12:06:10.878654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.176 qpair failed and we were unable to recover it. 00:37:45.176 [2024-11-18 12:06:10.878775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.176 [2024-11-18 12:06:10.878811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.176 qpair failed and we were unable to recover it. 
00:37:45.176 [2024-11-18 12:06:10.878923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.176 [2024-11-18 12:06:10.878957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.176 qpair failed and we were unable to recover it. 00:37:45.176 [2024-11-18 12:06:10.879155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.176 [2024-11-18 12:06:10.879220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.176 qpair failed and we were unable to recover it. 00:37:45.176 [2024-11-18 12:06:10.879365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.176 [2024-11-18 12:06:10.879402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.176 qpair failed and we were unable to recover it. 00:37:45.176 [2024-11-18 12:06:10.879562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.176 [2024-11-18 12:06:10.879596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.176 qpair failed and we were unable to recover it. 00:37:45.176 [2024-11-18 12:06:10.879731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.176 [2024-11-18 12:06:10.879770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.176 qpair failed and we were unable to recover it. 
00:37:45.176 [2024-11-18 12:06:10.879982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.176 [2024-11-18 12:06:10.880020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.176 qpair failed and we were unable to recover it. 00:37:45.176 [2024-11-18 12:06:10.880166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.176 [2024-11-18 12:06:10.880206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.176 qpair failed and we were unable to recover it. 00:37:45.176 [2024-11-18 12:06:10.880387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.176 [2024-11-18 12:06:10.880435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.176 qpair failed and we were unable to recover it. 00:37:45.176 [2024-11-18 12:06:10.880602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.176 [2024-11-18 12:06:10.880652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.176 qpair failed and we were unable to recover it. 00:37:45.176 [2024-11-18 12:06:10.880782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.177 [2024-11-18 12:06:10.880822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.177 qpair failed and we were unable to recover it. 
00:37:45.177 [2024-11-18 12:06:10.881002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.177 [2024-11-18 12:06:10.881059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.177 qpair failed and we were unable to recover it. 00:37:45.177 [2024-11-18 12:06:10.881179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.177 [2024-11-18 12:06:10.881213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.177 qpair failed and we were unable to recover it. 00:37:45.177 [2024-11-18 12:06:10.881352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.177 [2024-11-18 12:06:10.881386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.177 qpair failed and we were unable to recover it. 00:37:45.177 [2024-11-18 12:06:10.881537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.177 [2024-11-18 12:06:10.881585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.177 qpair failed and we were unable to recover it. 00:37:45.177 [2024-11-18 12:06:10.881722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.177 [2024-11-18 12:06:10.881765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.177 qpair failed and we were unable to recover it. 
00:37:45.177 [2024-11-18 12:06:10.881912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.177 [2024-11-18 12:06:10.881953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.177 qpair failed and we were unable to recover it. 00:37:45.177 [2024-11-18 12:06:10.882101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.177 [2024-11-18 12:06:10.882160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.177 qpair failed and we were unable to recover it. 00:37:45.177 [2024-11-18 12:06:10.882277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.177 [2024-11-18 12:06:10.882316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.177 qpair failed and we were unable to recover it. 00:37:45.177 [2024-11-18 12:06:10.882457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.177 [2024-11-18 12:06:10.882502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.177 qpair failed and we were unable to recover it. 00:37:45.177 [2024-11-18 12:06:10.882622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.177 [2024-11-18 12:06:10.882657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.177 qpair failed and we were unable to recover it. 
00:37:45.177 [2024-11-18 12:06:10.882793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.177 [2024-11-18 12:06:10.882828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.177 qpair failed and we were unable to recover it. 00:37:45.177 [2024-11-18 12:06:10.882983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.177 [2024-11-18 12:06:10.883022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.177 qpair failed and we were unable to recover it. 00:37:45.177 [2024-11-18 12:06:10.883175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.177 [2024-11-18 12:06:10.883214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.177 qpair failed and we were unable to recover it. 00:37:45.177 [2024-11-18 12:06:10.883356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.177 [2024-11-18 12:06:10.883406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.177 qpair failed and we were unable to recover it. 00:37:45.177 [2024-11-18 12:06:10.883559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.177 [2024-11-18 12:06:10.883596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.177 qpair failed and we were unable to recover it. 
00:37:45.177 [2024-11-18 12:06:10.883715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.177 [2024-11-18 12:06:10.883783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.177 qpair failed and we were unable to recover it. 00:37:45.177 [2024-11-18 12:06:10.883963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.177 [2024-11-18 12:06:10.884017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.177 qpair failed and we were unable to recover it. 00:37:45.177 [2024-11-18 12:06:10.884203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.177 [2024-11-18 12:06:10.884258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.177 qpair failed and we were unable to recover it. 00:37:45.177 [2024-11-18 12:06:10.884379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.177 [2024-11-18 12:06:10.884414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.177 qpair failed and we were unable to recover it. 00:37:45.177 [2024-11-18 12:06:10.884563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.177 [2024-11-18 12:06:10.884599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.177 qpair failed and we were unable to recover it. 
00:37:45.177 [2024-11-18 12:06:10.884718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.177 [2024-11-18 12:06:10.884773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.177 qpair failed and we were unable to recover it. 00:37:45.177 [2024-11-18 12:06:10.884929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.177 [2024-11-18 12:06:10.884964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.177 qpair failed and we were unable to recover it. 00:37:45.177 [2024-11-18 12:06:10.885156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.177 [2024-11-18 12:06:10.885215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.177 qpair failed and we were unable to recover it. 00:37:45.177 [2024-11-18 12:06:10.885331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.177 [2024-11-18 12:06:10.885366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.177 qpair failed and we were unable to recover it. 00:37:45.177 [2024-11-18 12:06:10.885503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.177 [2024-11-18 12:06:10.885538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.177 qpair failed and we were unable to recover it. 
00:37:45.177 [2024-11-18 12:06:10.885658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.177 [2024-11-18 12:06:10.885697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.177 qpair failed and we were unable to recover it. 00:37:45.177 [2024-11-18 12:06:10.885839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.177 [2024-11-18 12:06:10.885887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.177 qpair failed and we were unable to recover it. 00:37:45.177 [2024-11-18 12:06:10.886034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.177 [2024-11-18 12:06:10.886071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.177 qpair failed and we were unable to recover it. 00:37:45.177 [2024-11-18 12:06:10.886216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.177 [2024-11-18 12:06:10.886251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.177 qpair failed and we were unable to recover it. 00:37:45.177 [2024-11-18 12:06:10.886365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.177 [2024-11-18 12:06:10.886398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.177 qpair failed and we were unable to recover it. 
00:37:45.177 [2024-11-18 12:06:10.886537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.177 [2024-11-18 12:06:10.886573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.177 qpair failed and we were unable to recover it. 00:37:45.177 [2024-11-18 12:06:10.886699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.177 [2024-11-18 12:06:10.886737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.177 qpair failed and we were unable to recover it. 00:37:45.177 [2024-11-18 12:06:10.886886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.178 [2024-11-18 12:06:10.886923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.178 qpair failed and we were unable to recover it. 00:37:45.178 [2024-11-18 12:06:10.887067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.178 [2024-11-18 12:06:10.887105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.178 qpair failed and we were unable to recover it. 00:37:45.178 [2024-11-18 12:06:10.887276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.178 [2024-11-18 12:06:10.887336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.178 qpair failed and we were unable to recover it. 
00:37:45.178 [2024-11-18 12:06:10.887450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.178 [2024-11-18 12:06:10.887487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.178 qpair failed and we were unable to recover it. 00:37:45.178 [2024-11-18 12:06:10.887622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.178 [2024-11-18 12:06:10.887671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.178 qpair failed and we were unable to recover it. 00:37:45.178 [2024-11-18 12:06:10.887818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.178 [2024-11-18 12:06:10.887856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.178 qpair failed and we were unable to recover it. 00:37:45.178 [2024-11-18 12:06:10.888029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.178 [2024-11-18 12:06:10.888086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.178 qpair failed and we were unable to recover it. 00:37:45.178 [2024-11-18 12:06:10.888236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.178 [2024-11-18 12:06:10.888274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.178 qpair failed and we were unable to recover it. 
00:37:45.178 [2024-11-18 12:06:10.888417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.178 [2024-11-18 12:06:10.888454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.178 qpair failed and we were unable to recover it. 00:37:45.178 [2024-11-18 12:06:10.888626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.178 [2024-11-18 12:06:10.888663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.178 qpair failed and we were unable to recover it. 00:37:45.178 [2024-11-18 12:06:10.888827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.178 [2024-11-18 12:06:10.888880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.178 qpair failed and we were unable to recover it. 00:37:45.178 [2024-11-18 12:06:10.889031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.178 [2024-11-18 12:06:10.889084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.178 qpair failed and we were unable to recover it. 00:37:45.178 [2024-11-18 12:06:10.889291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.178 [2024-11-18 12:06:10.889348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.178 qpair failed and we were unable to recover it. 
00:37:45.178 [2024-11-18 12:06:10.889457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.178 [2024-11-18 12:06:10.889503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.178 qpair failed and we were unable to recover it. 00:37:45.178 [2024-11-18 12:06:10.889642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.178 [2024-11-18 12:06:10.889694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.178 qpair failed and we were unable to recover it. 00:37:45.178 [2024-11-18 12:06:10.889908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.178 [2024-11-18 12:06:10.889964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.178 qpair failed and we were unable to recover it. 00:37:45.178 [2024-11-18 12:06:10.890172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.178 [2024-11-18 12:06:10.890232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.178 qpair failed and we were unable to recover it. 00:37:45.178 [2024-11-18 12:06:10.890384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.178 [2024-11-18 12:06:10.890422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.178 qpair failed and we were unable to recover it. 
00:37:45.178 [2024-11-18 12:06:10.890570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.178 [2024-11-18 12:06:10.890605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.178 qpair failed and we were unable to recover it. 00:37:45.178 [2024-11-18 12:06:10.890758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.178 [2024-11-18 12:06:10.890813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.178 qpair failed and we were unable to recover it. 00:37:45.178 [2024-11-18 12:06:10.890955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.178 [2024-11-18 12:06:10.890990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.178 qpair failed and we were unable to recover it. 00:37:45.178 [2024-11-18 12:06:10.891102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.178 [2024-11-18 12:06:10.891136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.178 qpair failed and we were unable to recover it. 00:37:45.178 [2024-11-18 12:06:10.891316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.178 [2024-11-18 12:06:10.891364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.178 qpair failed and we were unable to recover it. 
00:37:45.178 [2024-11-18 12:06:10.891506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.178 [2024-11-18 12:06:10.891544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.178 qpair failed and we were unable to recover it. 00:37:45.178 [2024-11-18 12:06:10.891680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.178 [2024-11-18 12:06:10.891716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.178 qpair failed and we were unable to recover it. 00:37:45.178 [2024-11-18 12:06:10.891883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.178 [2024-11-18 12:06:10.891922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.178 qpair failed and we were unable to recover it. 00:37:45.178 [2024-11-18 12:06:10.892070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.178 [2024-11-18 12:06:10.892107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.178 qpair failed and we were unable to recover it. 00:37:45.178 [2024-11-18 12:06:10.892270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.178 [2024-11-18 12:06:10.892324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.178 qpair failed and we were unable to recover it. 
00:37:45.178 [2024-11-18 12:06:10.892469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.178 [2024-11-18 12:06:10.892511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.178 qpair failed and we were unable to recover it. 00:37:45.178 [2024-11-18 12:06:10.892681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.178 [2024-11-18 12:06:10.892730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.178 qpair failed and we were unable to recover it. 00:37:45.178 [2024-11-18 12:06:10.892873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.178 [2024-11-18 12:06:10.892911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.178 qpair failed and we were unable to recover it. 00:37:45.178 [2024-11-18 12:06:10.893046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.178 [2024-11-18 12:06:10.893099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.178 qpair failed and we were unable to recover it. 00:37:45.178 [2024-11-18 12:06:10.893256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.178 [2024-11-18 12:06:10.893294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.178 qpair failed and we were unable to recover it. 
00:37:45.178 [2024-11-18 12:06:10.893453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.178 [2024-11-18 12:06:10.893488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.178 qpair failed and we were unable to recover it.
00:37:45.178 [2024-11-18 12:06:10.893659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.178 [2024-11-18 12:06:10.893707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.178 qpair failed and we were unable to recover it.
00:37:45.178 [2024-11-18 12:06:10.893875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.178 [2024-11-18 12:06:10.893916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.178 qpair failed and we were unable to recover it.
00:37:45.178 [2024-11-18 12:06:10.894025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.178 [2024-11-18 12:06:10.894064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.178 qpair failed and we were unable to recover it.
00:37:45.178 [2024-11-18 12:06:10.894238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.178 [2024-11-18 12:06:10.894295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.178 qpair failed and we were unable to recover it.
00:37:45.179 [2024-11-18 12:06:10.894433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.179 [2024-11-18 12:06:10.894467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.179 qpair failed and we were unable to recover it.
00:37:45.179 [2024-11-18 12:06:10.894619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.179 [2024-11-18 12:06:10.894653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.179 qpair failed and we were unable to recover it.
00:37:45.179 [2024-11-18 12:06:10.894816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.179 [2024-11-18 12:06:10.894854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.179 qpair failed and we were unable to recover it.
00:37:45.179 [2024-11-18 12:06:10.894994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.179 [2024-11-18 12:06:10.895047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.179 qpair failed and we were unable to recover it.
00:37:45.179 [2024-11-18 12:06:10.895198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.179 [2024-11-18 12:06:10.895242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.179 qpair failed and we were unable to recover it.
00:37:45.179 [2024-11-18 12:06:10.895406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.179 [2024-11-18 12:06:10.895442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.179 qpair failed and we were unable to recover it.
00:37:45.179 [2024-11-18 12:06:10.895598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.179 [2024-11-18 12:06:10.895646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.179 qpair failed and we were unable to recover it.
00:37:45.179 [2024-11-18 12:06:10.895854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.179 [2024-11-18 12:06:10.895903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.179 qpair failed and we were unable to recover it.
00:37:45.179 [2024-11-18 12:06:10.896013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.179 [2024-11-18 12:06:10.896050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.179 qpair failed and we were unable to recover it.
00:37:45.179 [2024-11-18 12:06:10.896204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.179 [2024-11-18 12:06:10.896243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.179 qpair failed and we were unable to recover it.
00:37:45.179 [2024-11-18 12:06:10.896365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.179 [2024-11-18 12:06:10.896411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.179 qpair failed and we were unable to recover it.
00:37:45.179 [2024-11-18 12:06:10.896603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.179 [2024-11-18 12:06:10.896651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.179 qpair failed and we were unable to recover it.
00:37:45.179 [2024-11-18 12:06:10.896810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.179 [2024-11-18 12:06:10.896854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.179 qpair failed and we were unable to recover it.
00:37:45.179 [2024-11-18 12:06:10.897025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.179 [2024-11-18 12:06:10.897085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.179 qpair failed and we were unable to recover it.
00:37:45.179 [2024-11-18 12:06:10.897255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.179 [2024-11-18 12:06:10.897311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.179 qpair failed and we were unable to recover it.
00:37:45.179 [2024-11-18 12:06:10.897460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.179 [2024-11-18 12:06:10.897522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.179 qpair failed and we were unable to recover it.
00:37:45.179 [2024-11-18 12:06:10.897645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.179 [2024-11-18 12:06:10.897678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.179 qpair failed and we were unable to recover it.
00:37:45.179 [2024-11-18 12:06:10.897840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.179 [2024-11-18 12:06:10.897874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.179 qpair failed and we were unable to recover it.
00:37:45.179 [2024-11-18 12:06:10.898042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.179 [2024-11-18 12:06:10.898079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.179 qpair failed and we were unable to recover it.
00:37:45.179 [2024-11-18 12:06:10.898211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.179 [2024-11-18 12:06:10.898264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.179 qpair failed and we were unable to recover it.
00:37:45.179 [2024-11-18 12:06:10.898424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.179 [2024-11-18 12:06:10.898464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.179 qpair failed and we were unable to recover it.
00:37:45.179 [2024-11-18 12:06:10.898665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.179 [2024-11-18 12:06:10.898713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.179 qpair failed and we were unable to recover it.
00:37:45.179 [2024-11-18 12:06:10.898865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.179 [2024-11-18 12:06:10.898901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.179 qpair failed and we were unable to recover it.
00:37:45.179 [2024-11-18 12:06:10.899010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.179 [2024-11-18 12:06:10.899065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.179 qpair failed and we were unable to recover it.
00:37:45.179 [2024-11-18 12:06:10.899248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.179 [2024-11-18 12:06:10.899286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.179 qpair failed and we were unable to recover it.
00:37:45.179 [2024-11-18 12:06:10.899418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.179 [2024-11-18 12:06:10.899452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.179 qpair failed and we were unable to recover it.
00:37:45.179 [2024-11-18 12:06:10.899627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.179 [2024-11-18 12:06:10.899663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.179 qpair failed and we were unable to recover it.
00:37:45.179 [2024-11-18 12:06:10.899813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.179 [2024-11-18 12:06:10.899867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.179 qpair failed and we were unable to recover it.
00:37:45.179 [2024-11-18 12:06:10.900030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.179 [2024-11-18 12:06:10.900067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.179 qpair failed and we were unable to recover it.
00:37:45.179 [2024-11-18 12:06:10.900209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.179 [2024-11-18 12:06:10.900245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.179 qpair failed and we were unable to recover it.
00:37:45.179 [2024-11-18 12:06:10.900360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.179 [2024-11-18 12:06:10.900396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.179 qpair failed and we were unable to recover it.
00:37:45.179 [2024-11-18 12:06:10.900544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.179 [2024-11-18 12:06:10.900593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.179 qpair failed and we were unable to recover it.
00:37:45.179 [2024-11-18 12:06:10.900735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.179 [2024-11-18 12:06:10.900793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.179 qpair failed and we were unable to recover it.
00:37:45.179 [2024-11-18 12:06:10.900952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.179 [2024-11-18 12:06:10.900991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.179 qpair failed and we were unable to recover it.
00:37:45.179 [2024-11-18 12:06:10.901165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.179 [2024-11-18 12:06:10.901202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.179 qpair failed and we were unable to recover it.
00:37:45.179 [2024-11-18 12:06:10.901400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.179 [2024-11-18 12:06:10.901437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.179 qpair failed and we were unable to recover it.
00:37:45.179 [2024-11-18 12:06:10.901590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.180 [2024-11-18 12:06:10.901625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.180 qpair failed and we were unable to recover it.
00:37:45.180 [2024-11-18 12:06:10.901771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.180 [2024-11-18 12:06:10.901814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.180 qpair failed and we were unable to recover it.
00:37:45.180 [2024-11-18 12:06:10.902045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.180 [2024-11-18 12:06:10.902099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.180 qpair failed and we were unable to recover it.
00:37:45.180 [2024-11-18 12:06:10.902212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.180 [2024-11-18 12:06:10.902249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.180 qpair failed and we were unable to recover it.
00:37:45.180 [2024-11-18 12:06:10.902374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.180 [2024-11-18 12:06:10.902414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.180 qpair failed and we were unable to recover it.
00:37:45.180 [2024-11-18 12:06:10.902597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.180 [2024-11-18 12:06:10.902645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.180 qpair failed and we were unable to recover it.
00:37:45.180 [2024-11-18 12:06:10.902841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.180 [2024-11-18 12:06:10.902889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.180 qpair failed and we were unable to recover it.
00:37:45.180 [2024-11-18 12:06:10.903124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.180 [2024-11-18 12:06:10.903190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.180 qpair failed and we were unable to recover it.
00:37:45.180 [2024-11-18 12:06:10.903377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.180 [2024-11-18 12:06:10.903421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.180 qpair failed and we were unable to recover it.
00:37:45.180 [2024-11-18 12:06:10.903569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.180 [2024-11-18 12:06:10.903603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.180 qpair failed and we were unable to recover it.
00:37:45.180 [2024-11-18 12:06:10.903707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.180 [2024-11-18 12:06:10.903743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.180 qpair failed and we were unable to recover it.
00:37:45.180 [2024-11-18 12:06:10.903953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.180 [2024-11-18 12:06:10.903993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.180 qpair failed and we were unable to recover it.
00:37:45.180 [2024-11-18 12:06:10.904179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.180 [2024-11-18 12:06:10.904268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.180 qpair failed and we were unable to recover it.
00:37:45.180 [2024-11-18 12:06:10.904428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.180 [2024-11-18 12:06:10.904467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.180 qpair failed and we were unable to recover it.
00:37:45.180 [2024-11-18 12:06:10.904616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.180 [2024-11-18 12:06:10.904650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.180 qpair failed and we were unable to recover it.
00:37:45.180 [2024-11-18 12:06:10.904811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.180 [2024-11-18 12:06:10.904852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.180 qpair failed and we were unable to recover it.
00:37:45.180 [2024-11-18 12:06:10.905002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.180 [2024-11-18 12:06:10.905040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.180 qpair failed and we were unable to recover it.
00:37:45.180 [2024-11-18 12:06:10.905238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.180 [2024-11-18 12:06:10.905275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.180 qpair failed and we were unable to recover it.
00:37:45.180 [2024-11-18 12:06:10.905407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.180 [2024-11-18 12:06:10.905441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.180 qpair failed and we were unable to recover it.
00:37:45.180 [2024-11-18 12:06:10.905561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.180 [2024-11-18 12:06:10.905597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.180 qpair failed and we were unable to recover it.
00:37:45.180 [2024-11-18 12:06:10.905707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.180 [2024-11-18 12:06:10.905744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.180 qpair failed and we were unable to recover it.
00:37:45.180 [2024-11-18 12:06:10.905907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.180 [2024-11-18 12:06:10.905941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.180 qpair failed and we were unable to recover it.
00:37:45.180 [2024-11-18 12:06:10.906084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.180 [2024-11-18 12:06:10.906119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.180 qpair failed and we were unable to recover it.
00:37:45.180 [2024-11-18 12:06:10.906259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.180 [2024-11-18 12:06:10.906294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.180 qpair failed and we were unable to recover it.
00:37:45.180 [2024-11-18 12:06:10.906444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.180 [2024-11-18 12:06:10.906499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.180 qpair failed and we were unable to recover it.
00:37:45.180 [2024-11-18 12:06:10.906615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.180 [2024-11-18 12:06:10.906651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.180 qpair failed and we were unable to recover it.
00:37:45.180 [2024-11-18 12:06:10.906799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.180 [2024-11-18 12:06:10.906833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.180 qpair failed and we were unable to recover it.
00:37:45.180 [2024-11-18 12:06:10.906941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.180 [2024-11-18 12:06:10.906976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.180 qpair failed and we were unable to recover it.
00:37:45.180 [2024-11-18 12:06:10.907094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.180 [2024-11-18 12:06:10.907142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.180 qpair failed and we were unable to recover it.
00:37:45.180 [2024-11-18 12:06:10.907253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.180 [2024-11-18 12:06:10.907288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.180 qpair failed and we were unable to recover it.
00:37:45.180 [2024-11-18 12:06:10.907423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.180 [2024-11-18 12:06:10.907470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.180 qpair failed and we were unable to recover it.
00:37:45.180 [2024-11-18 12:06:10.907595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.180 [2024-11-18 12:06:10.907631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.180 qpair failed and we were unable to recover it.
00:37:45.180 [2024-11-18 12:06:10.907771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.180 [2024-11-18 12:06:10.907805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.180 qpair failed and we were unable to recover it.
00:37:45.180 [2024-11-18 12:06:10.907962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.180 [2024-11-18 12:06:10.908016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.180 qpair failed and we were unable to recover it.
00:37:45.180 [2024-11-18 12:06:10.908164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.180 [2024-11-18 12:06:10.908217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.180 qpair failed and we were unable to recover it.
00:37:45.180 [2024-11-18 12:06:10.908327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.180 [2024-11-18 12:06:10.908365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.180 qpair failed and we were unable to recover it.
00:37:45.180 [2024-11-18 12:06:10.908517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.180 [2024-11-18 12:06:10.908553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.180 qpair failed and we were unable to recover it.
00:37:45.180 [2024-11-18 12:06:10.908694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.180 [2024-11-18 12:06:10.908730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.181 qpair failed and we were unable to recover it.
00:37:45.181 [2024-11-18 12:06:10.908851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.181 [2024-11-18 12:06:10.908889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.181 qpair failed and we were unable to recover it.
00:37:45.181 [2024-11-18 12:06:10.909010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.181 [2024-11-18 12:06:10.909048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.181 qpair failed and we were unable to recover it.
00:37:45.181 [2024-11-18 12:06:10.909245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.181 [2024-11-18 12:06:10.909284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.181 qpair failed and we were unable to recover it.
00:37:45.181 [2024-11-18 12:06:10.909440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.181 [2024-11-18 12:06:10.909474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.181 qpair failed and we were unable to recover it.
00:37:45.181 [2024-11-18 12:06:10.909588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.181 [2024-11-18 12:06:10.909623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.181 qpair failed and we were unable to recover it.
00:37:45.181 [2024-11-18 12:06:10.909762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.181 [2024-11-18 12:06:10.909798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.181 qpair failed and we were unable to recover it.
00:37:45.181 [2024-11-18 12:06:10.909940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.181 [2024-11-18 12:06:10.909979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.181 qpair failed and we were unable to recover it.
00:37:45.181 [2024-11-18 12:06:10.910094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.181 [2024-11-18 12:06:10.910133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.181 qpair failed and we were unable to recover it.
00:37:45.181 [2024-11-18 12:06:10.910306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.181 [2024-11-18 12:06:10.910344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.181 qpair failed and we were unable to recover it.
00:37:45.181 [2024-11-18 12:06:10.910510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.181 [2024-11-18 12:06:10.910549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.181 qpair failed and we were unable to recover it.
00:37:45.181 [2024-11-18 12:06:10.910671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.181 [2024-11-18 12:06:10.910733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.181 qpair failed and we were unable to recover it.
00:37:45.181 [2024-11-18 12:06:10.910867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.181 [2024-11-18 12:06:10.910920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.181 qpair failed and we were unable to recover it.
00:37:45.181 [2024-11-18 12:06:10.911070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.181 [2024-11-18 12:06:10.911122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.181 qpair failed and we were unable to recover it.
00:37:45.181 [2024-11-18 12:06:10.911251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.181 [2024-11-18 12:06:10.911299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.181 qpair failed and we were unable to recover it.
00:37:45.181 [2024-11-18 12:06:10.911479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.181 [2024-11-18 12:06:10.911539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.181 qpair failed and we were unable to recover it.
00:37:45.181 [2024-11-18 12:06:10.911691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.181 [2024-11-18 12:06:10.911730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.181 qpair failed and we were unable to recover it.
00:37:45.181 [2024-11-18 12:06:10.911872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.181 [2024-11-18 12:06:10.911910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.181 qpair failed and we were unable to recover it.
00:37:45.181 [2024-11-18 12:06:10.912043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.181 [2024-11-18 12:06:10.912093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.181 qpair failed and we were unable to recover it.
00:37:45.181 [2024-11-18 12:06:10.912222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.181 [2024-11-18 12:06:10.912263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.181 qpair failed and we were unable to recover it.
00:37:45.181 [2024-11-18 12:06:10.912447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.181 [2024-11-18 12:06:10.912481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.181 qpair failed and we were unable to recover it.
00:37:45.181 [2024-11-18 12:06:10.912608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.181 [2024-11-18 12:06:10.912643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.181 qpair failed and we were unable to recover it.
00:37:45.181 [2024-11-18 12:06:10.912748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.181 [2024-11-18 12:06:10.912782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.181 qpair failed and we were unable to recover it.
00:37:45.181 [2024-11-18 12:06:10.912936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.181 [2024-11-18 12:06:10.912993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.181 qpair failed and we were unable to recover it.
00:37:45.181 [2024-11-18 12:06:10.913182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.181 [2024-11-18 12:06:10.913243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.181 qpair failed and we were unable to recover it.
00:37:45.181 [2024-11-18 12:06:10.913415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.181 [2024-11-18 12:06:10.913449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.181 qpair failed and we were unable to recover it.
00:37:45.181 [2024-11-18 12:06:10.913617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.181 [2024-11-18 12:06:10.913656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.181 qpair failed and we were unable to recover it.
00:37:45.181 [2024-11-18 12:06:10.913824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.181 [2024-11-18 12:06:10.913877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.181 qpair failed and we were unable to recover it.
00:37:45.181 [2024-11-18 12:06:10.914073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.181 [2024-11-18 12:06:10.914134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.181 qpair failed and we were unable to recover it.
00:37:45.181 [2024-11-18 12:06:10.914339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.181 [2024-11-18 12:06:10.914377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.181 qpair failed and we were unable to recover it.
00:37:45.181 [2024-11-18 12:06:10.914572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.181 [2024-11-18 12:06:10.914607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.181 qpair failed and we were unable to recover it.
00:37:45.181 [2024-11-18 12:06:10.914723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.181 [2024-11-18 12:06:10.914758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.181 qpair failed and we were unable to recover it.
00:37:45.181 [2024-11-18 12:06:10.915064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.181 [2024-11-18 12:06:10.915101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.181 qpair failed and we were unable to recover it.
00:37:45.181 [2024-11-18 12:06:10.915265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.181 [2024-11-18 12:06:10.915328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.181 qpair failed and we were unable to recover it.
00:37:45.181 [2024-11-18 12:06:10.915450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.181 [2024-11-18 12:06:10.915487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.181 qpair failed and we were unable to recover it.
00:37:45.181 [2024-11-18 12:06:10.915647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.181 [2024-11-18 12:06:10.915681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.181 qpair failed and we were unable to recover it.
00:37:45.181 [2024-11-18 12:06:10.915865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.181 [2024-11-18 12:06:10.915902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.181 qpair failed and we were unable to recover it.
00:37:45.181 [2024-11-18 12:06:10.916048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.181 [2024-11-18 12:06:10.916086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.181 qpair failed and we were unable to recover it.
00:37:45.181 [2024-11-18 12:06:10.916255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.182 [2024-11-18 12:06:10.916293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.182 qpair failed and we were unable to recover it.
00:37:45.182 [2024-11-18 12:06:10.916496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.182 [2024-11-18 12:06:10.916564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.182 qpair failed and we were unable to recover it.
00:37:45.182 [2024-11-18 12:06:10.916697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.182 [2024-11-18 12:06:10.916745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.182 qpair failed and we were unable to recover it.
00:37:45.182 [2024-11-18 12:06:10.916929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.182 [2024-11-18 12:06:10.916984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.182 qpair failed and we were unable to recover it.
00:37:45.182 [2024-11-18 12:06:10.917163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.182 [2024-11-18 12:06:10.917200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.182 qpair failed and we were unable to recover it.
00:37:45.182 [2024-11-18 12:06:10.917360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.182 [2024-11-18 12:06:10.917394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.182 qpair failed and we were unable to recover it.
00:37:45.182 [2024-11-18 12:06:10.917557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.182 [2024-11-18 12:06:10.917592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.182 qpair failed and we were unable to recover it.
00:37:45.182 [2024-11-18 12:06:10.917722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.182 [2024-11-18 12:06:10.917757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.182 qpair failed and we were unable to recover it.
00:37:45.182 [2024-11-18 12:06:10.917899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.182 [2024-11-18 12:06:10.917952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.182 qpair failed and we were unable to recover it.
00:37:45.182 [2024-11-18 12:06:10.918132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.182 [2024-11-18 12:06:10.918192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.182 qpair failed and we were unable to recover it.
00:37:45.182 [2024-11-18 12:06:10.918371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.182 [2024-11-18 12:06:10.918409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.182 qpair failed and we were unable to recover it.
00:37:45.182 [2024-11-18 12:06:10.918529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.182 [2024-11-18 12:06:10.918580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.182 qpair failed and we were unable to recover it.
00:37:45.182 [2024-11-18 12:06:10.918718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.182 [2024-11-18 12:06:10.918752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.182 qpair failed and we were unable to recover it.
00:37:45.182 [2024-11-18 12:06:10.918879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.182 [2024-11-18 12:06:10.918938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.182 qpair failed and we were unable to recover it.
00:37:45.182 [2024-11-18 12:06:10.919103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.182 [2024-11-18 12:06:10.919151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.182 qpair failed and we were unable to recover it.
00:37:45.182 [2024-11-18 12:06:10.919263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.182 [2024-11-18 12:06:10.919300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.182 qpair failed and we were unable to recover it.
00:37:45.182 [2024-11-18 12:06:10.919421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.182 [2024-11-18 12:06:10.919459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.182 qpair failed and we were unable to recover it.
00:37:45.182 [2024-11-18 12:06:10.919616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.182 [2024-11-18 12:06:10.919664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.182 qpair failed and we were unable to recover it.
00:37:45.182 [2024-11-18 12:06:10.919835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.182 [2024-11-18 12:06:10.919876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.182 qpair failed and we were unable to recover it.
00:37:45.182 [2024-11-18 12:06:10.920038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.182 [2024-11-18 12:06:10.920077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.182 qpair failed and we were unable to recover it.
00:37:45.182 [2024-11-18 12:06:10.920229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.182 [2024-11-18 12:06:10.920267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.182 qpair failed and we were unable to recover it.
00:37:45.182 [2024-11-18 12:06:10.920441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.182 [2024-11-18 12:06:10.920497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.182 qpair failed and we were unable to recover it.
00:37:45.182 [2024-11-18 12:06:10.920671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.182 [2024-11-18 12:06:10.920708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.182 qpair failed and we were unable to recover it.
00:37:45.182 [2024-11-18 12:06:10.920910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.182 [2024-11-18 12:06:10.920963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.182 qpair failed and we were unable to recover it.
00:37:45.182 [2024-11-18 12:06:10.921181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.182 [2024-11-18 12:06:10.921240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.182 qpair failed and we were unable to recover it.
00:37:45.182 [2024-11-18 12:06:10.921411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.182 [2024-11-18 12:06:10.921444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.182 qpair failed and we were unable to recover it.
00:37:45.182 [2024-11-18 12:06:10.921579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.182 [2024-11-18 12:06:10.921613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.182 qpair failed and we were unable to recover it.
00:37:45.182 [2024-11-18 12:06:10.921753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.182 [2024-11-18 12:06:10.921806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.182 qpair failed and we were unable to recover it.
00:37:45.182 [2024-11-18 12:06:10.921954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.182 [2024-11-18 12:06:10.922023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.182 qpair failed and we were unable to recover it.
00:37:45.182 [2024-11-18 12:06:10.922284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.182 [2024-11-18 12:06:10.922341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.182 qpair failed and we were unable to recover it.
00:37:45.182 [2024-11-18 12:06:10.922533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.182 [2024-11-18 12:06:10.922567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.182 qpair failed and we were unable to recover it.
00:37:45.182 [2024-11-18 12:06:10.922694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.182 [2024-11-18 12:06:10.922728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.182 qpair failed and we were unable to recover it.
00:37:45.182 [2024-11-18 12:06:10.922905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.182 [2024-11-18 12:06:10.922974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.182 qpair failed and we were unable to recover it.
00:37:45.182 [2024-11-18 12:06:10.923136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.183 [2024-11-18 12:06:10.923199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.183 qpair failed and we were unable to recover it.
00:37:45.183 [2024-11-18 12:06:10.923321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.183 [2024-11-18 12:06:10.923358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.183 qpair failed and we were unable to recover it.
00:37:45.183 [2024-11-18 12:06:10.923511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.183 [2024-11-18 12:06:10.923545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.183 qpair failed and we were unable to recover it.
00:37:45.183 [2024-11-18 12:06:10.923649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.183 [2024-11-18 12:06:10.923683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.183 qpair failed and we were unable to recover it.
00:37:45.183 [2024-11-18 12:06:10.923848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.183 [2024-11-18 12:06:10.923885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.183 qpair failed and we were unable to recover it.
00:37:45.183 [2024-11-18 12:06:10.924014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.183 [2024-11-18 12:06:10.924067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.183 qpair failed and we were unable to recover it.
00:37:45.183 [2024-11-18 12:06:10.924221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.183 [2024-11-18 12:06:10.924259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.183 qpair failed and we were unable to recover it.
00:37:45.183 [2024-11-18 12:06:10.924426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.183 [2024-11-18 12:06:10.924479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.183 qpair failed and we were unable to recover it.
00:37:45.183 [2024-11-18 12:06:10.924641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.183 [2024-11-18 12:06:10.924689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.183 qpair failed and we were unable to recover it.
00:37:45.183 [2024-11-18 12:06:10.924835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.183 [2024-11-18 12:06:10.924889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.183 qpair failed and we were unable to recover it.
00:37:45.183 [2024-11-18 12:06:10.925060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.183 [2024-11-18 12:06:10.925124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.183 qpair failed and we were unable to recover it.
00:37:45.183 [2024-11-18 12:06:10.925236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.183 [2024-11-18 12:06:10.925275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.183 qpair failed and we were unable to recover it.
00:37:45.183 [2024-11-18 12:06:10.925398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.183 [2024-11-18 12:06:10.925435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.183 qpair failed and we were unable to recover it.
00:37:45.183 [2024-11-18 12:06:10.925612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.183 [2024-11-18 12:06:10.925646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.183 qpair failed and we were unable to recover it.
00:37:45.183 [2024-11-18 12:06:10.925753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.183 [2024-11-18 12:06:10.925807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.183 qpair failed and we were unable to recover it.
00:37:45.183 [2024-11-18 12:06:10.925937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.183 [2024-11-18 12:06:10.925970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.183 qpair failed and we were unable to recover it.
00:37:45.183 [2024-11-18 12:06:10.926110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.183 [2024-11-18 12:06:10.926147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.183 qpair failed and we were unable to recover it.
00:37:45.183 [2024-11-18 12:06:10.926344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.183 [2024-11-18 12:06:10.926380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.183 qpair failed and we were unable to recover it.
00:37:45.183 [2024-11-18 12:06:10.926537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.183 [2024-11-18 12:06:10.926585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.183 qpair failed and we were unable to recover it.
00:37:45.183 [2024-11-18 12:06:10.926732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.183 [2024-11-18 12:06:10.926770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.183 qpair failed and we were unable to recover it.
00:37:45.183 [2024-11-18 12:06:10.926894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.183 [2024-11-18 12:06:10.926944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.183 qpair failed and we were unable to recover it.
00:37:45.183 [2024-11-18 12:06:10.927104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.183 [2024-11-18 12:06:10.927143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.183 qpair failed and we were unable to recover it.
00:37:45.183 [2024-11-18 12:06:10.927316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.183 [2024-11-18 12:06:10.927354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.183 qpair failed and we were unable to recover it.
00:37:45.183 [2024-11-18 12:06:10.927498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.183 [2024-11-18 12:06:10.927533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.183 qpair failed and we were unable to recover it.
00:37:45.183 [2024-11-18 12:06:10.927665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.183 [2024-11-18 12:06:10.927699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.183 qpair failed and we were unable to recover it.
00:37:45.183 [2024-11-18 12:06:10.927916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.183 [2024-11-18 12:06:10.927972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.183 qpair failed and we were unable to recover it.
00:37:45.183 [2024-11-18 12:06:10.928140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.183 [2024-11-18 12:06:10.928208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.183 qpair failed and we were unable to recover it.
00:37:45.183 [2024-11-18 12:06:10.928326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.183 [2024-11-18 12:06:10.928366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.183 qpair failed and we were unable to recover it.
00:37:45.183 [2024-11-18 12:06:10.928558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.183 [2024-11-18 12:06:10.928607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.183 qpair failed and we were unable to recover it.
00:37:45.183 [2024-11-18 12:06:10.928738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.183 [2024-11-18 12:06:10.928786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.183 qpair failed and we were unable to recover it.
00:37:45.183 [2024-11-18 12:06:10.928979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.183 [2024-11-18 12:06:10.929034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.183 qpair failed and we were unable to recover it.
00:37:45.183 [2024-11-18 12:06:10.929226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.183 [2024-11-18 12:06:10.929282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.183 qpair failed and we were unable to recover it.
00:37:45.183 [2024-11-18 12:06:10.929388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.183 [2024-11-18 12:06:10.929423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.183 qpair failed and we were unable to recover it.
00:37:45.183 [2024-11-18 12:06:10.929542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.183 [2024-11-18 12:06:10.929578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.183 qpair failed and we were unable to recover it.
00:37:45.183 [2024-11-18 12:06:10.929721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.183 [2024-11-18 12:06:10.929754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.183 qpair failed and we were unable to recover it.
00:37:45.183 [2024-11-18 12:06:10.929897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.183 [2024-11-18 12:06:10.929931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.183 qpair failed and we were unable to recover it.
00:37:45.183 [2024-11-18 12:06:10.930034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.183 [2024-11-18 12:06:10.930067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.183 qpair failed and we were unable to recover it.
00:37:45.183 [2024-11-18 12:06:10.930194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.184 [2024-11-18 12:06:10.930242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.184 qpair failed and we were unable to recover it.
00:37:45.184 [2024-11-18 12:06:10.930404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.184 [2024-11-18 12:06:10.930452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.184 qpair failed and we were unable to recover it.
00:37:45.184 [2024-11-18 12:06:10.930573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.184 [2024-11-18 12:06:10.930608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.184 qpair failed and we were unable to recover it.
00:37:45.184 [2024-11-18 12:06:10.930725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.184 [2024-11-18 12:06:10.930763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.184 qpair failed and we were unable to recover it.
00:37:45.184 [2024-11-18 12:06:10.930910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.184 [2024-11-18 12:06:10.930949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.184 qpair failed and we were unable to recover it. 00:37:45.184 [2024-11-18 12:06:10.931115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.184 [2024-11-18 12:06:10.931177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.184 qpair failed and we were unable to recover it. 00:37:45.184 [2024-11-18 12:06:10.931310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.184 [2024-11-18 12:06:10.931344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.184 qpair failed and we were unable to recover it. 00:37:45.184 [2024-11-18 12:06:10.931508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.184 [2024-11-18 12:06:10.931544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.184 qpair failed and we were unable to recover it. 00:37:45.184 [2024-11-18 12:06:10.931696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.184 [2024-11-18 12:06:10.931745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.184 qpair failed and we were unable to recover it. 
00:37:45.184 [2024-11-18 12:06:10.931894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.184 [2024-11-18 12:06:10.931931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.184 qpair failed and we were unable to recover it. 00:37:45.184 [2024-11-18 12:06:10.932047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.184 [2024-11-18 12:06:10.932082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.184 qpair failed and we were unable to recover it. 00:37:45.184 [2024-11-18 12:06:10.932303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.184 [2024-11-18 12:06:10.932343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.184 qpair failed and we were unable to recover it. 00:37:45.184 [2024-11-18 12:06:10.932514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.184 [2024-11-18 12:06:10.932550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.184 qpair failed and we were unable to recover it. 00:37:45.184 [2024-11-18 12:06:10.932679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.184 [2024-11-18 12:06:10.932713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.184 qpair failed and we were unable to recover it. 
00:37:45.184 [2024-11-18 12:06:10.932865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.184 [2024-11-18 12:06:10.932903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.184 qpair failed and we were unable to recover it. 00:37:45.184 [2024-11-18 12:06:10.933043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.184 [2024-11-18 12:06:10.933097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.184 qpair failed and we were unable to recover it. 00:37:45.184 [2024-11-18 12:06:10.933215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.184 [2024-11-18 12:06:10.933254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.184 qpair failed and we were unable to recover it. 00:37:45.184 [2024-11-18 12:06:10.933413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.184 [2024-11-18 12:06:10.933447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.184 qpair failed and we were unable to recover it. 00:37:45.184 [2024-11-18 12:06:10.933578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.184 [2024-11-18 12:06:10.933626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.184 qpair failed and we were unable to recover it. 
00:37:45.184 [2024-11-18 12:06:10.933784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.184 [2024-11-18 12:06:10.933851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.184 qpair failed and we were unable to recover it. 00:37:45.184 [2024-11-18 12:06:10.934090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.184 [2024-11-18 12:06:10.934149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.184 qpair failed and we were unable to recover it. 00:37:45.184 [2024-11-18 12:06:10.934322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.184 [2024-11-18 12:06:10.934360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.184 qpair failed and we were unable to recover it. 00:37:45.184 [2024-11-18 12:06:10.934467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.184 [2024-11-18 12:06:10.934537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.184 qpair failed and we were unable to recover it. 00:37:45.184 [2024-11-18 12:06:10.934692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.184 [2024-11-18 12:06:10.934730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.184 qpair failed and we were unable to recover it. 
00:37:45.184 [2024-11-18 12:06:10.934847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.184 [2024-11-18 12:06:10.934898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.184 qpair failed and we were unable to recover it. 00:37:45.184 [2024-11-18 12:06:10.935073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.184 [2024-11-18 12:06:10.935132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.184 qpair failed and we were unable to recover it. 00:37:45.184 [2024-11-18 12:06:10.935329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.184 [2024-11-18 12:06:10.935426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.184 qpair failed and we were unable to recover it. 00:37:45.184 [2024-11-18 12:06:10.935612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.184 [2024-11-18 12:06:10.935646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.184 qpair failed and we were unable to recover it. 00:37:45.184 [2024-11-18 12:06:10.935774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.184 [2024-11-18 12:06:10.935822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.184 qpair failed and we were unable to recover it. 
00:37:45.184 [2024-11-18 12:06:10.936169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.184 [2024-11-18 12:06:10.936210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.184 qpair failed and we were unable to recover it. 00:37:45.184 [2024-11-18 12:06:10.936358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.184 [2024-11-18 12:06:10.936397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.184 qpair failed and we were unable to recover it. 00:37:45.184 [2024-11-18 12:06:10.936582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.184 [2024-11-18 12:06:10.936616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.184 qpair failed and we were unable to recover it. 00:37:45.184 [2024-11-18 12:06:10.936742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.184 [2024-11-18 12:06:10.936779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.184 qpair failed and we were unable to recover it. 00:37:45.184 [2024-11-18 12:06:10.936920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.184 [2024-11-18 12:06:10.936959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.184 qpair failed and we were unable to recover it. 
00:37:45.184 [2024-11-18 12:06:10.937105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.184 [2024-11-18 12:06:10.937144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.184 qpair failed and we were unable to recover it. 00:37:45.184 [2024-11-18 12:06:10.937321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.184 [2024-11-18 12:06:10.937359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.184 qpair failed and we were unable to recover it. 00:37:45.184 [2024-11-18 12:06:10.937543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.184 [2024-11-18 12:06:10.937591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.184 qpair failed and we were unable to recover it. 00:37:45.184 [2024-11-18 12:06:10.937765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.184 [2024-11-18 12:06:10.937820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.184 qpair failed and we were unable to recover it. 00:37:45.184 [2024-11-18 12:06:10.938027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.185 [2024-11-18 12:06:10.938064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.185 qpair failed and we were unable to recover it. 
00:37:45.185 [2024-11-18 12:06:10.938198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.185 [2024-11-18 12:06:10.938231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.185 qpair failed and we were unable to recover it. 00:37:45.185 [2024-11-18 12:06:10.938429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.185 [2024-11-18 12:06:10.938466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.185 qpair failed and we were unable to recover it. 00:37:45.185 [2024-11-18 12:06:10.938608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.185 [2024-11-18 12:06:10.938642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.185 qpair failed and we were unable to recover it. 00:37:45.185 [2024-11-18 12:06:10.938821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.185 [2024-11-18 12:06:10.938858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.185 qpair failed and we were unable to recover it. 00:37:45.185 [2024-11-18 12:06:10.938973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.185 [2024-11-18 12:06:10.939011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.185 qpair failed and we were unable to recover it. 
00:37:45.185 [2024-11-18 12:06:10.939190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.185 [2024-11-18 12:06:10.939228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.185 qpair failed and we were unable to recover it. 00:37:45.185 [2024-11-18 12:06:10.939404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.185 [2024-11-18 12:06:10.939443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.185 qpair failed and we were unable to recover it. 00:37:45.185 [2024-11-18 12:06:10.939609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.185 [2024-11-18 12:06:10.939658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.185 qpair failed and we were unable to recover it. 00:37:45.185 [2024-11-18 12:06:10.939827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.185 [2024-11-18 12:06:10.939874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.185 qpair failed and we were unable to recover it. 00:37:45.185 [2024-11-18 12:06:10.940018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.185 [2024-11-18 12:06:10.940056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.185 qpair failed and we were unable to recover it. 
00:37:45.185 [2024-11-18 12:06:10.940263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.185 [2024-11-18 12:06:10.940320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.185 qpair failed and we were unable to recover it. 00:37:45.185 [2024-11-18 12:06:10.940464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.185 [2024-11-18 12:06:10.940516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.185 qpair failed and we were unable to recover it. 00:37:45.185 [2024-11-18 12:06:10.940653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.185 [2024-11-18 12:06:10.940687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.185 qpair failed and we were unable to recover it. 00:37:45.185 [2024-11-18 12:06:10.940823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.185 [2024-11-18 12:06:10.940876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.185 qpair failed and we were unable to recover it. 00:37:45.185 [2024-11-18 12:06:10.941019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.185 [2024-11-18 12:06:10.941057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.185 qpair failed and we were unable to recover it. 
00:37:45.185 [2024-11-18 12:06:10.941253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.185 [2024-11-18 12:06:10.941291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.185 qpair failed and we were unable to recover it. 00:37:45.185 [2024-11-18 12:06:10.941418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.185 [2024-11-18 12:06:10.941471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.185 qpair failed and we were unable to recover it. 00:37:45.185 [2024-11-18 12:06:10.941655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.185 [2024-11-18 12:06:10.941704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.185 qpair failed and we were unable to recover it. 00:37:45.185 [2024-11-18 12:06:10.941854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.185 [2024-11-18 12:06:10.941903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.185 qpair failed and we were unable to recover it. 00:37:45.185 [2024-11-18 12:06:10.942089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.185 [2024-11-18 12:06:10.942155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.185 qpair failed and we were unable to recover it. 
00:37:45.185 [2024-11-18 12:06:10.942340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.185 [2024-11-18 12:06:10.942376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.185 qpair failed and we were unable to recover it. 00:37:45.185 [2024-11-18 12:06:10.942520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.185 [2024-11-18 12:06:10.942555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.185 qpair failed and we were unable to recover it. 00:37:45.185 [2024-11-18 12:06:10.942686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.185 [2024-11-18 12:06:10.942739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.185 qpair failed and we were unable to recover it. 00:37:45.185 [2024-11-18 12:06:10.942901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.185 [2024-11-18 12:06:10.942936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.185 qpair failed and we were unable to recover it. 00:37:45.185 [2024-11-18 12:06:10.943074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.185 [2024-11-18 12:06:10.943109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.185 qpair failed and we were unable to recover it. 
00:37:45.185 [2024-11-18 12:06:10.943229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.185 [2024-11-18 12:06:10.943265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.185 qpair failed and we were unable to recover it. 00:37:45.185 [2024-11-18 12:06:10.943402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.185 [2024-11-18 12:06:10.943437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.185 qpair failed and we were unable to recover it. 00:37:45.185 [2024-11-18 12:06:10.943566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.185 [2024-11-18 12:06:10.943615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.185 qpair failed and we were unable to recover it. 00:37:45.185 [2024-11-18 12:06:10.943780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.186 [2024-11-18 12:06:10.943834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.186 qpair failed and we were unable to recover it. 00:37:45.186 [2024-11-18 12:06:10.943967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.186 [2024-11-18 12:06:10.944020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.186 qpair failed and we were unable to recover it. 
00:37:45.186 [2024-11-18 12:06:10.944235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.186 [2024-11-18 12:06:10.944292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.186 qpair failed and we were unable to recover it. 00:37:45.186 [2024-11-18 12:06:10.944422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.186 [2024-11-18 12:06:10.944456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.186 qpair failed and we were unable to recover it. 00:37:45.186 [2024-11-18 12:06:10.944599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.186 [2024-11-18 12:06:10.944654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.186 qpair failed and we were unable to recover it. 00:37:45.186 [2024-11-18 12:06:10.944781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.186 [2024-11-18 12:06:10.944833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.186 qpair failed and we were unable to recover it. 00:37:45.186 [2024-11-18 12:06:10.944996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.186 [2024-11-18 12:06:10.945030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.186 qpair failed and we were unable to recover it. 
00:37:45.186 [2024-11-18 12:06:10.945143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.186 [2024-11-18 12:06:10.945176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.186 qpair failed and we were unable to recover it. 00:37:45.186 [2024-11-18 12:06:10.945294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.186 [2024-11-18 12:06:10.945328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.186 qpair failed and we were unable to recover it. 00:37:45.186 [2024-11-18 12:06:10.945444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.186 [2024-11-18 12:06:10.945478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.186 qpair failed and we were unable to recover it. 00:37:45.186 [2024-11-18 12:06:10.945627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.186 [2024-11-18 12:06:10.945661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.186 qpair failed and we were unable to recover it. 00:37:45.186 [2024-11-18 12:06:10.945835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.186 [2024-11-18 12:06:10.945871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.186 qpair failed and we were unable to recover it. 
00:37:45.186 [2024-11-18 12:06:10.946002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.186 [2024-11-18 12:06:10.946036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.186 qpair failed and we were unable to recover it. 00:37:45.186 [2024-11-18 12:06:10.946165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.186 [2024-11-18 12:06:10.946212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.186 qpair failed and we were unable to recover it. 00:37:45.186 [2024-11-18 12:06:10.946346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.186 [2024-11-18 12:06:10.946382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.186 qpair failed and we were unable to recover it. 00:37:45.186 [2024-11-18 12:06:10.946537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.186 [2024-11-18 12:06:10.946585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.186 qpair failed and we were unable to recover it. 00:37:45.186 [2024-11-18 12:06:10.946735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.186 [2024-11-18 12:06:10.946770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.186 qpair failed and we were unable to recover it. 
00:37:45.186 [2024-11-18 12:06:10.946938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.186 [2024-11-18 12:06:10.946970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.186 qpair failed and we were unable to recover it. 00:37:45.186 [2024-11-18 12:06:10.947073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.186 [2024-11-18 12:06:10.947107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.186 qpair failed and we were unable to recover it. 00:37:45.186 [2024-11-18 12:06:10.947239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.186 [2024-11-18 12:06:10.947272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.186 qpair failed and we were unable to recover it. 00:37:45.186 [2024-11-18 12:06:10.947388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.186 [2024-11-18 12:06:10.947422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.186 qpair failed and we were unable to recover it. 00:37:45.186 [2024-11-18 12:06:10.947574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.186 [2024-11-18 12:06:10.947609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.186 qpair failed and we were unable to recover it. 
00:37:45.186 [2024-11-18 12:06:10.947737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.186 [2024-11-18 12:06:10.947790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.186 qpair failed and we were unable to recover it. 00:37:45.186 [2024-11-18 12:06:10.947920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.186 [2024-11-18 12:06:10.947982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.186 qpair failed and we were unable to recover it. 00:37:45.186 [2024-11-18 12:06:10.948145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.186 [2024-11-18 12:06:10.948179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.186 qpair failed and we were unable to recover it. 00:37:45.186 [2024-11-18 12:06:10.948297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.186 [2024-11-18 12:06:10.948332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.186 qpair failed and we were unable to recover it. 00:37:45.186 [2024-11-18 12:06:10.948441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.186 [2024-11-18 12:06:10.948475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.186 qpair failed and we were unable to recover it. 
00:37:45.186 [2024-11-18 12:06:10.948649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.186 [2024-11-18 12:06:10.948684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.186 qpair failed and we were unable to recover it. 00:37:45.186 [2024-11-18 12:06:10.948791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.186 [2024-11-18 12:06:10.948825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.186 qpair failed and we were unable to recover it. 00:37:45.186 [2024-11-18 12:06:10.948985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.186 [2024-11-18 12:06:10.949019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.186 qpair failed and we were unable to recover it. 00:37:45.186 [2024-11-18 12:06:10.949179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.186 [2024-11-18 12:06:10.949218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.186 qpair failed and we were unable to recover it. 00:37:45.186 [2024-11-18 12:06:10.949414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.186 [2024-11-18 12:06:10.949468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.186 qpair failed and we were unable to recover it. 
00:37:45.187 [2024-11-18 12:06:10.949648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.187 [2024-11-18 12:06:10.949702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.187 qpair failed and we were unable to recover it. 00:37:45.187 [2024-11-18 12:06:10.949915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.187 [2024-11-18 12:06:10.949970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.187 qpair failed and we were unable to recover it. 00:37:45.187 [2024-11-18 12:06:10.950106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.187 [2024-11-18 12:06:10.950161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.187 qpair failed and we were unable to recover it. 00:37:45.187 [2024-11-18 12:06:10.950274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.187 [2024-11-18 12:06:10.950308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.187 qpair failed and we were unable to recover it. 00:37:45.187 [2024-11-18 12:06:10.950446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.187 [2024-11-18 12:06:10.950481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.187 qpair failed and we were unable to recover it. 
00:37:45.187 [2024-11-18 12:06:10.950672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.187 [2024-11-18 12:06:10.950740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.187 qpair failed and we were unable to recover it. 00:37:45.187 [2024-11-18 12:06:10.950864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.187 [2024-11-18 12:06:10.950904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.187 qpair failed and we were unable to recover it. 00:37:45.187 [2024-11-18 12:06:10.951038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.187 [2024-11-18 12:06:10.951104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.187 qpair failed and we were unable to recover it. 00:37:45.187 [2024-11-18 12:06:10.951282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.187 [2024-11-18 12:06:10.951320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.187 qpair failed and we were unable to recover it. 00:37:45.187 [2024-11-18 12:06:10.951476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.187 [2024-11-18 12:06:10.951518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.187 qpair failed and we were unable to recover it. 
00:37:45.187 [2024-11-18 12:06:10.951697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.187 [2024-11-18 12:06:10.951744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.187 qpair failed and we were unable to recover it. 00:37:45.187 [2024-11-18 12:06:10.951942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.187 [2024-11-18 12:06:10.952008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.187 qpair failed and we were unable to recover it. 00:37:45.187 [2024-11-18 12:06:10.952187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.187 [2024-11-18 12:06:10.952257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.187 qpair failed and we were unable to recover it. 00:37:45.187 [2024-11-18 12:06:10.952387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.187 [2024-11-18 12:06:10.952425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.187 qpair failed and we were unable to recover it. 00:37:45.187 [2024-11-18 12:06:10.952606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.187 [2024-11-18 12:06:10.952655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.187 qpair failed and we were unable to recover it. 
00:37:45.187 [2024-11-18 12:06:10.952845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.187 [2024-11-18 12:06:10.952900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.187 qpair failed and we were unable to recover it. 00:37:45.187 [2024-11-18 12:06:10.953063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.187 [2024-11-18 12:06:10.953116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.187 qpair failed and we were unable to recover it. 00:37:45.187 [2024-11-18 12:06:10.953321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.187 [2024-11-18 12:06:10.953356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.187 qpair failed and we were unable to recover it. 00:37:45.187 [2024-11-18 12:06:10.953506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.187 [2024-11-18 12:06:10.953541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.187 qpair failed and we were unable to recover it. 00:37:45.187 [2024-11-18 12:06:10.953645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.187 [2024-11-18 12:06:10.953679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.187 qpair failed and we were unable to recover it. 
00:37:45.187 [2024-11-18 12:06:10.953813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.187 [2024-11-18 12:06:10.953849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.187 qpair failed and we were unable to recover it. 00:37:45.187 [2024-11-18 12:06:10.953992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.187 [2024-11-18 12:06:10.954026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.187 qpair failed and we were unable to recover it. 00:37:45.187 [2024-11-18 12:06:10.954174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.187 [2024-11-18 12:06:10.954210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.187 qpair failed and we were unable to recover it. 00:37:45.187 [2024-11-18 12:06:10.954322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.187 [2024-11-18 12:06:10.954357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.187 qpair failed and we were unable to recover it. 00:37:45.187 [2024-11-18 12:06:10.954498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.187 [2024-11-18 12:06:10.954533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.187 qpair failed and we were unable to recover it. 
00:37:45.187 [2024-11-18 12:06:10.954694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.187 [2024-11-18 12:06:10.954727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.187 qpair failed and we were unable to recover it. 00:37:45.187 [2024-11-18 12:06:10.954864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.187 [2024-11-18 12:06:10.954898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.187 qpair failed and we were unable to recover it. 00:37:45.187 [2024-11-18 12:06:10.955003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.187 [2024-11-18 12:06:10.955036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.187 qpair failed and we were unable to recover it. 00:37:45.187 [2024-11-18 12:06:10.955199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.187 [2024-11-18 12:06:10.955252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.187 qpair failed and we were unable to recover it. 00:37:45.187 [2024-11-18 12:06:10.955380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.187 [2024-11-18 12:06:10.955413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.187 qpair failed and we were unable to recover it. 
00:37:45.187 [2024-11-18 12:06:10.955530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.187 [2024-11-18 12:06:10.955565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.187 qpair failed and we were unable to recover it. 00:37:45.187 [2024-11-18 12:06:10.955674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.187 [2024-11-18 12:06:10.955716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.187 qpair failed and we were unable to recover it. 00:37:45.188 [2024-11-18 12:06:10.955817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.188 [2024-11-18 12:06:10.955851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.188 qpair failed and we were unable to recover it. 00:37:45.188 [2024-11-18 12:06:10.955953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.188 [2024-11-18 12:06:10.955986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.188 qpair failed and we were unable to recover it. 00:37:45.188 [2024-11-18 12:06:10.956166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.188 [2024-11-18 12:06:10.956200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.188 qpair failed and we were unable to recover it. 
00:37:45.188 [2024-11-18 12:06:10.956335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.188 [2024-11-18 12:06:10.956369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.188 qpair failed and we were unable to recover it. 00:37:45.188 [2024-11-18 12:06:10.956550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.188 [2024-11-18 12:06:10.956598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.188 qpair failed and we were unable to recover it. 00:37:45.188 [2024-11-18 12:06:10.956785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.188 [2024-11-18 12:06:10.956841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.188 qpair failed and we were unable to recover it. 00:37:45.188 [2024-11-18 12:06:10.957009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.188 [2024-11-18 12:06:10.957072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.188 qpair failed and we were unable to recover it. 00:37:45.188 [2024-11-18 12:06:10.957258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.188 [2024-11-18 12:06:10.957296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.188 qpair failed and we were unable to recover it. 
00:37:45.188 [2024-11-18 12:06:10.957416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.188 [2024-11-18 12:06:10.957451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.188 qpair failed and we were unable to recover it. 00:37:45.188 [2024-11-18 12:06:10.957586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.188 [2024-11-18 12:06:10.957635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.188 qpair failed and we were unable to recover it. 00:37:45.188 [2024-11-18 12:06:10.957745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.188 [2024-11-18 12:06:10.957796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.188 qpair failed and we were unable to recover it. 00:37:45.188 [2024-11-18 12:06:10.957971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.188 [2024-11-18 12:06:10.958009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.188 qpair failed and we were unable to recover it. 00:37:45.188 [2024-11-18 12:06:10.958187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.188 [2024-11-18 12:06:10.958224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.188 qpair failed and we were unable to recover it. 
00:37:45.188 [2024-11-18 12:06:10.958379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.188 [2024-11-18 12:06:10.958417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.188 qpair failed and we were unable to recover it. 00:37:45.188 [2024-11-18 12:06:10.958608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.188 [2024-11-18 12:06:10.958656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.188 qpair failed and we were unable to recover it. 00:37:45.188 [2024-11-18 12:06:10.958795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.188 [2024-11-18 12:06:10.958834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.188 qpair failed and we were unable to recover it. 00:37:45.188 [2024-11-18 12:06:10.958947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.188 [2024-11-18 12:06:10.958986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.188 qpair failed and we were unable to recover it. 00:37:45.188 [2024-11-18 12:06:10.959112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.188 [2024-11-18 12:06:10.959150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.188 qpair failed and we were unable to recover it. 
00:37:45.188 [2024-11-18 12:06:10.959254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.188 [2024-11-18 12:06:10.959291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.188 qpair failed and we were unable to recover it. 00:37:45.188 [2024-11-18 12:06:10.959405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.188 [2024-11-18 12:06:10.959442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.188 qpair failed and we were unable to recover it. 00:37:45.188 [2024-11-18 12:06:10.959574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.188 [2024-11-18 12:06:10.959609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.188 qpair failed and we were unable to recover it. 00:37:45.188 [2024-11-18 12:06:10.959767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.188 [2024-11-18 12:06:10.959805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.188 qpair failed and we were unable to recover it. 00:37:45.188 [2024-11-18 12:06:10.959920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.188 [2024-11-18 12:06:10.959958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.188 qpair failed and we were unable to recover it. 
00:37:45.188 [2024-11-18 12:06:10.960127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.188 [2024-11-18 12:06:10.960166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.188 qpair failed and we were unable to recover it. 00:37:45.188 [2024-11-18 12:06:10.960273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.188 [2024-11-18 12:06:10.960312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.188 qpair failed and we were unable to recover it. 00:37:45.188 [2024-11-18 12:06:10.960498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.188 [2024-11-18 12:06:10.960552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.188 qpair failed and we were unable to recover it. 00:37:45.188 [2024-11-18 12:06:10.960691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.188 [2024-11-18 12:06:10.960726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.188 qpair failed and we were unable to recover it. 00:37:45.188 [2024-11-18 12:06:10.960880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.188 [2024-11-18 12:06:10.960936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.188 qpair failed and we were unable to recover it. 
00:37:45.188 [2024-11-18 12:06:10.961046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.188 [2024-11-18 12:06:10.961081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.188 qpair failed and we were unable to recover it. 00:37:45.188 [2024-11-18 12:06:10.961227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.188 [2024-11-18 12:06:10.961280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.188 qpair failed and we were unable to recover it. 00:37:45.188 [2024-11-18 12:06:10.961406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.188 [2024-11-18 12:06:10.961446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.188 qpair failed and we were unable to recover it. 00:37:45.188 [2024-11-18 12:06:10.961596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.188 [2024-11-18 12:06:10.961632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.188 qpair failed and we were unable to recover it. 00:37:45.188 [2024-11-18 12:06:10.961794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.188 [2024-11-18 12:06:10.961832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.188 qpair failed and we were unable to recover it. 
00:37:45.188 [2024-11-18 12:06:10.962004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.188 [2024-11-18 12:06:10.962073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.188 qpair failed and we were unable to recover it. 00:37:45.188 [2024-11-18 12:06:10.962188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.188 [2024-11-18 12:06:10.962227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.188 qpair failed and we were unable to recover it. 00:37:45.188 [2024-11-18 12:06:10.962358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.188 [2024-11-18 12:06:10.962392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.188 qpair failed and we were unable to recover it. 00:37:45.188 [2024-11-18 12:06:10.962524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.188 [2024-11-18 12:06:10.962560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.188 qpair failed and we were unable to recover it. 00:37:45.189 [2024-11-18 12:06:10.962694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.189 [2024-11-18 12:06:10.962729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.189 qpair failed and we were unable to recover it. 
00:37:45.189 [2024-11-18 12:06:10.962845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.189 [2024-11-18 12:06:10.962881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.189 qpair failed and we were unable to recover it. 00:37:45.189 [2024-11-18 12:06:10.963055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.189 [2024-11-18 12:06:10.963114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.189 qpair failed and we were unable to recover it. 00:37:45.189 [2024-11-18 12:06:10.963256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.189 [2024-11-18 12:06:10.963293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.189 qpair failed and we were unable to recover it. 00:37:45.189 [2024-11-18 12:06:10.963415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.189 [2024-11-18 12:06:10.963453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.189 qpair failed and we were unable to recover it. 00:37:45.189 [2024-11-18 12:06:10.963616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.189 [2024-11-18 12:06:10.963649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.189 qpair failed and we were unable to recover it. 
00:37:45.189 [2024-11-18 12:06:10.963822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.189 [2024-11-18 12:06:10.963860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.189 qpair failed and we were unable to recover it. 00:37:45.189 [2024-11-18 12:06:10.964000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.189 [2024-11-18 12:06:10.964037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.189 qpair failed and we were unable to recover it. 00:37:45.189 [2024-11-18 12:06:10.964174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.189 [2024-11-18 12:06:10.964226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.189 qpair failed and we were unable to recover it. 00:37:45.189 [2024-11-18 12:06:10.964356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.189 [2024-11-18 12:06:10.964390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.189 qpair failed and we were unable to recover it. 00:37:45.189 [2024-11-18 12:06:10.964547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.189 [2024-11-18 12:06:10.964581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.189 qpair failed and we were unable to recover it. 
00:37:45.189 [2024-11-18 12:06:10.964683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.189 [2024-11-18 12:06:10.964717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.189 qpair failed and we were unable to recover it. 00:37:45.189 [2024-11-18 12:06:10.964823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.189 [2024-11-18 12:06:10.964857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.189 qpair failed and we were unable to recover it. 00:37:45.189 [2024-11-18 12:06:10.965046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.189 [2024-11-18 12:06:10.965083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.189 qpair failed and we were unable to recover it. 00:37:45.189 [2024-11-18 12:06:10.965209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.189 [2024-11-18 12:06:10.965259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.189 qpair failed and we were unable to recover it. 00:37:45.189 [2024-11-18 12:06:10.965458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.189 [2024-11-18 12:06:10.965537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.189 qpair failed and we were unable to recover it. 
00:37:45.189 [2024-11-18 12:06:10.965695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.189 [2024-11-18 12:06:10.965731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.189 qpair failed and we were unable to recover it. 00:37:45.189 [2024-11-18 12:06:10.965918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.189 [2024-11-18 12:06:10.965953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.189 qpair failed and we were unable to recover it. 00:37:45.189 [2024-11-18 12:06:10.966088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.189 [2024-11-18 12:06:10.966134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.189 qpair failed and we were unable to recover it. 00:37:45.189 [2024-11-18 12:06:10.966315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.189 [2024-11-18 12:06:10.966353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.189 qpair failed and we were unable to recover it. 00:37:45.189 [2024-11-18 12:06:10.966508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.189 [2024-11-18 12:06:10.966543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.189 qpair failed and we were unable to recover it. 
00:37:45.189 [2024-11-18 12:06:10.966704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.189 [2024-11-18 12:06:10.966739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.189 qpair failed and we were unable to recover it. 00:37:45.189 [2024-11-18 12:06:10.966935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.189 [2024-11-18 12:06:10.966978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.189 qpair failed and we were unable to recover it. 00:37:45.189 [2024-11-18 12:06:10.967183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.189 [2024-11-18 12:06:10.967221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.189 qpair failed and we were unable to recover it. 00:37:45.189 [2024-11-18 12:06:10.967369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.189 [2024-11-18 12:06:10.967418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.189 qpair failed and we were unable to recover it. 00:37:45.189 [2024-11-18 12:06:10.967554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.189 [2024-11-18 12:06:10.967588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.189 qpair failed and we were unable to recover it. 
00:37:45.189 [2024-11-18 12:06:10.967721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.189 [2024-11-18 12:06:10.967754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.189 qpair failed and we were unable to recover it.
00:37:45.189 [2024-11-18 12:06:10.967888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.189 [2024-11-18 12:06:10.967941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.189 qpair failed and we were unable to recover it.
00:37:45.189 [2024-11-18 12:06:10.968085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.189 [2024-11-18 12:06:10.968121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.189 qpair failed and we were unable to recover it.
00:37:45.189 [2024-11-18 12:06:10.968296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.189 [2024-11-18 12:06:10.968333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.189 qpair failed and we were unable to recover it.
00:37:45.189 [2024-11-18 12:06:10.968502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.189 [2024-11-18 12:06:10.968536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.189 qpair failed and we were unable to recover it.
00:37:45.189 [2024-11-18 12:06:10.968711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.189 [2024-11-18 12:06:10.968747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.189 qpair failed and we were unable to recover it.
00:37:45.189 [2024-11-18 12:06:10.968937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.189 [2024-11-18 12:06:10.968974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.189 qpair failed and we were unable to recover it.
00:37:45.189 [2024-11-18 12:06:10.969094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.189 [2024-11-18 12:06:10.969130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.189 qpair failed and we were unable to recover it.
00:37:45.189 [2024-11-18 12:06:10.969267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.189 [2024-11-18 12:06:10.969303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.189 qpair failed and we were unable to recover it.
00:37:45.189 [2024-11-18 12:06:10.969451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.189 [2024-11-18 12:06:10.969484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.189 qpair failed and we were unable to recover it.
00:37:45.189 [2024-11-18 12:06:10.969689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.189 [2024-11-18 12:06:10.969738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.189 qpair failed and we were unable to recover it.
00:37:45.189 [2024-11-18 12:06:10.969859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.189 [2024-11-18 12:06:10.969895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.190 qpair failed and we were unable to recover it.
00:37:45.190 [2024-11-18 12:06:10.970081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.190 [2024-11-18 12:06:10.970119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.190 qpair failed and we were unable to recover it.
00:37:45.190 [2024-11-18 12:06:10.970238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.190 [2024-11-18 12:06:10.970288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.190 qpair failed and we were unable to recover it.
00:37:45.190 [2024-11-18 12:06:10.970474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.190 [2024-11-18 12:06:10.970520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.190 qpair failed and we were unable to recover it.
00:37:45.190 [2024-11-18 12:06:10.970652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.190 [2024-11-18 12:06:10.970686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.190 qpair failed and we were unable to recover it.
00:37:45.190 [2024-11-18 12:06:10.970825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.190 [2024-11-18 12:06:10.970881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.190 qpair failed and we were unable to recover it.
00:37:45.190 [2024-11-18 12:06:10.971037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.190 [2024-11-18 12:06:10.971074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.190 qpair failed and we were unable to recover it.
00:37:45.190 [2024-11-18 12:06:10.971255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.190 [2024-11-18 12:06:10.971292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.190 qpair failed and we were unable to recover it.
00:37:45.190 [2024-11-18 12:06:10.971463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.190 [2024-11-18 12:06:10.971507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.190 qpair failed and we were unable to recover it.
00:37:45.190 [2024-11-18 12:06:10.971661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.190 [2024-11-18 12:06:10.971693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.190 qpair failed and we were unable to recover it.
00:37:45.190 [2024-11-18 12:06:10.971860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.190 [2024-11-18 12:06:10.971923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.190 qpair failed and we were unable to recover it.
00:37:45.190 [2024-11-18 12:06:10.972098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.190 [2024-11-18 12:06:10.972135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.190 qpair failed and we were unable to recover it.
00:37:45.190 [2024-11-18 12:06:10.972243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.190 [2024-11-18 12:06:10.972279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.190 qpair failed and we were unable to recover it.
00:37:45.190 [2024-11-18 12:06:10.972456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.190 [2024-11-18 12:06:10.972511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.190 qpair failed and we were unable to recover it.
00:37:45.190 [2024-11-18 12:06:10.972633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.190 [2024-11-18 12:06:10.972681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.190 qpair failed and we were unable to recover it.
00:37:45.190 [2024-11-18 12:06:10.972835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.190 [2024-11-18 12:06:10.972900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.190 qpair failed and we were unable to recover it.
00:37:45.190 [2024-11-18 12:06:10.973076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.190 [2024-11-18 12:06:10.973142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.190 qpair failed and we were unable to recover it.
00:37:45.190 [2024-11-18 12:06:10.973327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.190 [2024-11-18 12:06:10.973392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.190 qpair failed and we were unable to recover it.
00:37:45.190 [2024-11-18 12:06:10.973590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.190 [2024-11-18 12:06:10.973624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.190 qpair failed and we were unable to recover it.
00:37:45.190 [2024-11-18 12:06:10.973743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.190 [2024-11-18 12:06:10.973778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.190 qpair failed and we were unable to recover it.
00:37:45.190 [2024-11-18 12:06:10.973960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.190 [2024-11-18 12:06:10.974026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.190 qpair failed and we were unable to recover it.
00:37:45.190 [2024-11-18 12:06:10.974142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.190 [2024-11-18 12:06:10.974178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.190 qpair failed and we were unable to recover it.
00:37:45.190 [2024-11-18 12:06:10.974298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.190 [2024-11-18 12:06:10.974335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.190 qpair failed and we were unable to recover it.
00:37:45.190 [2024-11-18 12:06:10.974480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.190 [2024-11-18 12:06:10.974538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.190 qpair failed and we were unable to recover it.
00:37:45.190 [2024-11-18 12:06:10.974668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.190 [2024-11-18 12:06:10.974717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.190 qpair failed and we were unable to recover it.
00:37:45.190 [2024-11-18 12:06:10.974884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.190 [2024-11-18 12:06:10.974925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.190 qpair failed and we were unable to recover it.
00:37:45.190 [2024-11-18 12:06:10.975098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.190 [2024-11-18 12:06:10.975136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.190 qpair failed and we were unable to recover it.
00:37:45.190 [2024-11-18 12:06:10.975265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.190 [2024-11-18 12:06:10.975300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.190 qpair failed and we were unable to recover it.
00:37:45.190 [2024-11-18 12:06:10.975474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.190 [2024-11-18 12:06:10.975549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.190 qpair failed and we were unable to recover it.
00:37:45.190 [2024-11-18 12:06:10.975695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.190 [2024-11-18 12:06:10.975730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.190 qpair failed and we were unable to recover it.
00:37:45.190 [2024-11-18 12:06:10.975890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.190 [2024-11-18 12:06:10.975928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.190 qpair failed and we were unable to recover it.
00:37:45.190 [2024-11-18 12:06:10.976068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.190 [2024-11-18 12:06:10.976133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.190 qpair failed and we were unable to recover it.
00:37:45.190 [2024-11-18 12:06:10.976288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.190 [2024-11-18 12:06:10.976325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.190 qpair failed and we were unable to recover it.
00:37:45.190 [2024-11-18 12:06:10.976449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.190 [2024-11-18 12:06:10.976481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.190 qpair failed and we were unable to recover it.
00:37:45.190 [2024-11-18 12:06:10.976625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.190 [2024-11-18 12:06:10.976658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.190 qpair failed and we were unable to recover it.
00:37:45.190 [2024-11-18 12:06:10.976832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.190 [2024-11-18 12:06:10.976868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.190 qpair failed and we were unable to recover it.
00:37:45.190 [2024-11-18 12:06:10.976995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.190 [2024-11-18 12:06:10.977044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.190 qpair failed and we were unable to recover it.
00:37:45.190 [2024-11-18 12:06:10.977217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.190 [2024-11-18 12:06:10.977258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.190 qpair failed and we were unable to recover it.
00:37:45.190 [2024-11-18 12:06:10.977403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.191 [2024-11-18 12:06:10.977437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.191 qpair failed and we were unable to recover it.
00:37:45.191 [2024-11-18 12:06:10.977606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.191 [2024-11-18 12:06:10.977640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.191 qpair failed and we were unable to recover it.
00:37:45.191 [2024-11-18 12:06:10.977807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.191 [2024-11-18 12:06:10.977843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.191 qpair failed and we were unable to recover it.
00:37:45.191 [2024-11-18 12:06:10.978047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.191 [2024-11-18 12:06:10.978083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.191 qpair failed and we were unable to recover it.
00:37:45.191 [2024-11-18 12:06:10.978215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.191 [2024-11-18 12:06:10.978279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.191 qpair failed and we were unable to recover it.
00:37:45.191 [2024-11-18 12:06:10.978432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.191 [2024-11-18 12:06:10.978465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.191 qpair failed and we were unable to recover it.
00:37:45.191 [2024-11-18 12:06:10.978610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.191 [2024-11-18 12:06:10.978644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.191 qpair failed and we were unable to recover it.
00:37:45.191 [2024-11-18 12:06:10.978772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.191 [2024-11-18 12:06:10.978809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.191 qpair failed and we were unable to recover it.
00:37:45.191 [2024-11-18 12:06:10.978920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.191 [2024-11-18 12:06:10.978954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.191 qpair failed and we were unable to recover it.
00:37:45.191 [2024-11-18 12:06:10.979083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.191 [2024-11-18 12:06:10.979120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.191 qpair failed and we were unable to recover it.
00:37:45.191 [2024-11-18 12:06:10.979284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.191 [2024-11-18 12:06:10.979338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.191 qpair failed and we were unable to recover it.
00:37:45.191 [2024-11-18 12:06:10.979540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.191 [2024-11-18 12:06:10.979577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.191 qpair failed and we were unable to recover it.
00:37:45.191 [2024-11-18 12:06:10.979714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.191 [2024-11-18 12:06:10.979762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.191 qpair failed and we were unable to recover it.
00:37:45.191 [2024-11-18 12:06:10.979956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.191 [2024-11-18 12:06:10.979995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.191 qpair failed and we were unable to recover it.
00:37:45.191 [2024-11-18 12:06:10.980138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.191 [2024-11-18 12:06:10.980175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.191 qpair failed and we were unable to recover it.
00:37:45.191 [2024-11-18 12:06:10.980352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.191 [2024-11-18 12:06:10.980405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.191 qpair failed and we were unable to recover it.
00:37:45.191 [2024-11-18 12:06:10.980553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.191 [2024-11-18 12:06:10.980587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.191 qpair failed and we were unable to recover it.
00:37:45.191 [2024-11-18 12:06:10.980690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.191 [2024-11-18 12:06:10.980743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.191 qpair failed and we were unable to recover it.
00:37:45.191 [2024-11-18 12:06:10.980851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.191 [2024-11-18 12:06:10.980887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.191 qpair failed and we were unable to recover it.
00:37:45.191 [2024-11-18 12:06:10.981065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.191 [2024-11-18 12:06:10.981128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.191 qpair failed and we were unable to recover it.
00:37:45.191 [2024-11-18 12:06:10.981281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.191 [2024-11-18 12:06:10.981317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.191 qpair failed and we were unable to recover it.
00:37:45.191 [2024-11-18 12:06:10.981465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.191 [2024-11-18 12:06:10.981509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.191 qpair failed and we were unable to recover it.
00:37:45.191 [2024-11-18 12:06:10.981696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.191 [2024-11-18 12:06:10.981752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.191 qpair failed and we were unable to recover it.
00:37:45.191 [2024-11-18 12:06:10.981908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.191 [2024-11-18 12:06:10.981948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.191 qpair failed and we were unable to recover it.
00:37:45.191 [2024-11-18 12:06:10.982149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.191 [2024-11-18 12:06:10.982201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.191 qpair failed and we were unable to recover it.
00:37:45.191 [2024-11-18 12:06:10.982319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.191 [2024-11-18 12:06:10.982354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.191 qpair failed and we were unable to recover it.
00:37:45.191 [2024-11-18 12:06:10.982541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.191 [2024-11-18 12:06:10.982606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.191 qpair failed and we were unable to recover it.
00:37:45.191 [2024-11-18 12:06:10.982767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.191 [2024-11-18 12:06:10.982807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.191 qpair failed and we were unable to recover it.
00:37:45.191 [2024-11-18 12:06:10.983027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.191 [2024-11-18 12:06:10.983087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.191 qpair failed and we were unable to recover it.
00:37:45.191 [2024-11-18 12:06:10.983260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.191 [2024-11-18 12:06:10.983324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.191 qpair failed and we were unable to recover it.
00:37:45.191 [2024-11-18 12:06:10.983468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.191 [2024-11-18 12:06:10.983514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.191 qpair failed and we were unable to recover it.
00:37:45.191 [2024-11-18 12:06:10.983636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.191 [2024-11-18 12:06:10.983671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.191 qpair failed and we were unable to recover it.
00:37:45.191 [2024-11-18 12:06:10.983829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.192 [2024-11-18 12:06:10.983882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.192 qpair failed and we were unable to recover it.
00:37:45.192 [2024-11-18 12:06:10.984013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.192 [2024-11-18 12:06:10.984065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.192 qpair failed and we were unable to recover it.
00:37:45.192 [2024-11-18 12:06:10.984247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.192 [2024-11-18 12:06:10.984304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.192 qpair failed and we were unable to recover it.
00:37:45.192 [2024-11-18 12:06:10.984473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.192 [2024-11-18 12:06:10.984515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.192 qpair failed and we were unable to recover it.
00:37:45.192 [2024-11-18 12:06:10.984679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.192 [2024-11-18 12:06:10.984724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.192 qpair failed and we were unable to recover it.
00:37:45.192 [2024-11-18 12:06:10.984936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.192 [2024-11-18 12:06:10.984996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.192 qpair failed and we were unable to recover it.
00:37:45.192 [2024-11-18 12:06:10.985189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.192 [2024-11-18 12:06:10.985251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.192 qpair failed and we were unable to recover it.
00:37:45.192 [2024-11-18 12:06:10.985376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.192 [2024-11-18 12:06:10.985414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.192 qpair failed and we were unable to recover it.
00:37:45.192 [2024-11-18 12:06:10.985575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.192 [2024-11-18 12:06:10.985609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.192 qpair failed and we were unable to recover it.
00:37:45.192 [2024-11-18 12:06:10.985741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.192 [2024-11-18 12:06:10.985775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.192 qpair failed and we were unable to recover it.
00:37:45.192 [2024-11-18 12:06:10.985909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.192 [2024-11-18 12:06:10.985942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.192 qpair failed and we were unable to recover it.
00:37:45.192 [2024-11-18 12:06:10.986158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.192 [2024-11-18 12:06:10.986217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.192 qpair failed and we were unable to recover it.
00:37:45.192 [2024-11-18 12:06:10.986355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.192 [2024-11-18 12:06:10.986389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.192 qpair failed and we were unable to recover it.
00:37:45.192 [2024-11-18 12:06:10.986550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.192 [2024-11-18 12:06:10.986598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.192 qpair failed and we were unable to recover it.
00:37:45.192 [2024-11-18 12:06:10.986745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.192 [2024-11-18 12:06:10.986801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.192 qpair failed and we were unable to recover it. 00:37:45.192 [2024-11-18 12:06:10.986960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.192 [2024-11-18 12:06:10.987015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.192 qpair failed and we were unable to recover it. 00:37:45.192 [2024-11-18 12:06:10.987229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.192 [2024-11-18 12:06:10.987266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.192 qpair failed and we were unable to recover it. 00:37:45.192 [2024-11-18 12:06:10.987424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.192 [2024-11-18 12:06:10.987461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.192 qpair failed and we were unable to recover it. 00:37:45.192 [2024-11-18 12:06:10.987601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.192 [2024-11-18 12:06:10.987635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.192 qpair failed and we were unable to recover it. 
00:37:45.192 [2024-11-18 12:06:10.987759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.192 [2024-11-18 12:06:10.987797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.192 qpair failed and we were unable to recover it. 00:37:45.192 [2024-11-18 12:06:10.987939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.192 [2024-11-18 12:06:10.987976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.192 qpair failed and we were unable to recover it. 00:37:45.192 [2024-11-18 12:06:10.988138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.192 [2024-11-18 12:06:10.988191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.192 qpair failed and we were unable to recover it. 00:37:45.192 [2024-11-18 12:06:10.988374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.192 [2024-11-18 12:06:10.988422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.192 qpair failed and we were unable to recover it. 00:37:45.192 [2024-11-18 12:06:10.988597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.192 [2024-11-18 12:06:10.988646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.192 qpair failed and we were unable to recover it. 
00:37:45.192 [2024-11-18 12:06:10.988757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.192 [2024-11-18 12:06:10.988807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.192 qpair failed and we were unable to recover it. 00:37:45.192 [2024-11-18 12:06:10.988955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.192 [2024-11-18 12:06:10.988993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.192 qpair failed and we were unable to recover it. 00:37:45.192 [2024-11-18 12:06:10.989169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.192 [2024-11-18 12:06:10.989206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.192 qpair failed and we were unable to recover it. 00:37:45.192 [2024-11-18 12:06:10.989349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.192 [2024-11-18 12:06:10.989398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.192 qpair failed and we were unable to recover it. 00:37:45.192 [2024-11-18 12:06:10.989537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.192 [2024-11-18 12:06:10.989575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.192 qpair failed and we were unable to recover it. 
00:37:45.192 [2024-11-18 12:06:10.989697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.192 [2024-11-18 12:06:10.989732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.192 qpair failed and we were unable to recover it. 00:37:45.192 [2024-11-18 12:06:10.989885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.192 [2024-11-18 12:06:10.989922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.192 qpair failed and we were unable to recover it. 00:37:45.192 [2024-11-18 12:06:10.990123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.192 [2024-11-18 12:06:10.990161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.192 qpair failed and we were unable to recover it. 00:37:45.192 [2024-11-18 12:06:10.990412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.192 [2024-11-18 12:06:10.990470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.192 qpair failed and we were unable to recover it. 00:37:45.192 [2024-11-18 12:06:10.990625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.192 [2024-11-18 12:06:10.990659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.192 qpair failed and we were unable to recover it. 
00:37:45.192 [2024-11-18 12:06:10.990792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.192 [2024-11-18 12:06:10.990825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.192 qpair failed and we were unable to recover it. 00:37:45.192 [2024-11-18 12:06:10.990945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.192 [2024-11-18 12:06:10.990982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.192 qpair failed and we were unable to recover it. 00:37:45.192 [2024-11-18 12:06:10.991160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.192 [2024-11-18 12:06:10.991197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.192 qpair failed and we were unable to recover it. 00:37:45.192 [2024-11-18 12:06:10.991311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.192 [2024-11-18 12:06:10.991348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.192 qpair failed and we were unable to recover it. 00:37:45.192 [2024-11-18 12:06:10.991521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.192 [2024-11-18 12:06:10.991569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.193 qpair failed and we were unable to recover it. 
00:37:45.193 [2024-11-18 12:06:10.991709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.193 [2024-11-18 12:06:10.991745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.193 qpair failed and we were unable to recover it. 00:37:45.193 [2024-11-18 12:06:10.991871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.193 [2024-11-18 12:06:10.991924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.193 qpair failed and we were unable to recover it. 00:37:45.193 [2024-11-18 12:06:10.992071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.193 [2024-11-18 12:06:10.992105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.193 qpair failed and we were unable to recover it. 00:37:45.193 [2024-11-18 12:06:10.992215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.193 [2024-11-18 12:06:10.992250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.193 qpair failed and we were unable to recover it. 00:37:45.193 [2024-11-18 12:06:10.992359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.193 [2024-11-18 12:06:10.992393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.193 qpair failed and we were unable to recover it. 
00:37:45.193 [2024-11-18 12:06:10.992499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.193 [2024-11-18 12:06:10.992535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.193 qpair failed and we were unable to recover it. 00:37:45.193 [2024-11-18 12:06:10.992646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.193 [2024-11-18 12:06:10.992680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.193 qpair failed and we were unable to recover it. 00:37:45.193 [2024-11-18 12:06:10.992796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.193 [2024-11-18 12:06:10.992832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.193 qpair failed and we were unable to recover it. 00:37:45.193 [2024-11-18 12:06:10.992940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.193 [2024-11-18 12:06:10.992975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.193 qpair failed and we were unable to recover it. 00:37:45.193 [2024-11-18 12:06:10.993085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.193 [2024-11-18 12:06:10.993118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.193 qpair failed and we were unable to recover it. 
00:37:45.193 [2024-11-18 12:06:10.993251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.193 [2024-11-18 12:06:10.993285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.193 qpair failed and we were unable to recover it. 00:37:45.193 [2024-11-18 12:06:10.993444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.193 [2024-11-18 12:06:10.993498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.193 qpair failed and we were unable to recover it. 00:37:45.193 [2024-11-18 12:06:10.993647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.193 [2024-11-18 12:06:10.993696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.193 qpair failed and we were unable to recover it. 00:37:45.193 [2024-11-18 12:06:10.993936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.193 [2024-11-18 12:06:10.993998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.193 qpair failed and we were unable to recover it. 00:37:45.193 [2024-11-18 12:06:10.994224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.193 [2024-11-18 12:06:10.994264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.193 qpair failed and we were unable to recover it. 
00:37:45.193 [2024-11-18 12:06:10.994400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.193 [2024-11-18 12:06:10.994439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.193 qpair failed and we were unable to recover it. 00:37:45.193 [2024-11-18 12:06:10.994579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.193 [2024-11-18 12:06:10.994620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.193 qpair failed and we were unable to recover it. 00:37:45.193 [2024-11-18 12:06:10.994760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.193 [2024-11-18 12:06:10.994794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.193 qpair failed and we were unable to recover it. 00:37:45.193 [2024-11-18 12:06:10.994907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.193 [2024-11-18 12:06:10.994941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.193 qpair failed and we were unable to recover it. 00:37:45.193 [2024-11-18 12:06:10.995099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.193 [2024-11-18 12:06:10.995137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.193 qpair failed and we were unable to recover it. 
00:37:45.193 [2024-11-18 12:06:10.995285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.193 [2024-11-18 12:06:10.995323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.193 qpair failed and we were unable to recover it. 00:37:45.193 [2024-11-18 12:06:10.995449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.193 [2024-11-18 12:06:10.995487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.193 qpair failed and we were unable to recover it. 00:37:45.193 [2024-11-18 12:06:10.995666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.193 [2024-11-18 12:06:10.995703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.193 qpair failed and we were unable to recover it. 00:37:45.193 [2024-11-18 12:06:10.995866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.193 [2024-11-18 12:06:10.995919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.193 qpair failed and we were unable to recover it. 00:37:45.193 [2024-11-18 12:06:10.996054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.193 [2024-11-18 12:06:10.996092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.193 qpair failed and we were unable to recover it. 
00:37:45.193 [2024-11-18 12:06:10.996241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.193 [2024-11-18 12:06:10.996279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.193 qpair failed and we were unable to recover it. 00:37:45.193 [2024-11-18 12:06:10.996399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.193 [2024-11-18 12:06:10.996437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.193 qpair failed and we were unable to recover it. 00:37:45.193 [2024-11-18 12:06:10.996621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.193 [2024-11-18 12:06:10.996669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.193 qpair failed and we were unable to recover it. 00:37:45.193 [2024-11-18 12:06:10.996823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.193 [2024-11-18 12:06:10.996880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.193 qpair failed and we were unable to recover it. 00:37:45.193 [2024-11-18 12:06:10.996983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.193 [2024-11-18 12:06:10.997018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.193 qpair failed and we were unable to recover it. 
00:37:45.193 [2024-11-18 12:06:10.997206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.193 [2024-11-18 12:06:10.997257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.193 qpair failed and we were unable to recover it. 00:37:45.193 [2024-11-18 12:06:10.997378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.193 [2024-11-18 12:06:10.997413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.193 qpair failed and we were unable to recover it. 00:37:45.193 [2024-11-18 12:06:10.997567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.193 [2024-11-18 12:06:10.997620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.193 qpair failed and we were unable to recover it. 00:37:45.193 [2024-11-18 12:06:10.997752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.193 [2024-11-18 12:06:10.997792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.193 qpair failed and we were unable to recover it. 00:37:45.193 [2024-11-18 12:06:10.997917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.501 [2024-11-18 12:06:10.997956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.501 qpair failed and we were unable to recover it. 
00:37:45.501 [2024-11-18 12:06:10.998142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.501 [2024-11-18 12:06:10.998180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.501 qpair failed and we were unable to recover it. 00:37:45.501 [2024-11-18 12:06:10.998302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.501 [2024-11-18 12:06:10.998340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.501 qpair failed and we were unable to recover it. 00:37:45.501 [2024-11-18 12:06:10.998514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.501 [2024-11-18 12:06:10.998549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.501 qpair failed and we were unable to recover it. 00:37:45.501 [2024-11-18 12:06:10.998687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.501 [2024-11-18 12:06:10.998721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.501 qpair failed and we were unable to recover it. 00:37:45.501 [2024-11-18 12:06:10.998864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.501 [2024-11-18 12:06:10.998901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.501 qpair failed and we were unable to recover it. 
00:37:45.501 [2024-11-18 12:06:10.999058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.502 [2024-11-18 12:06:10.999096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.502 qpair failed and we were unable to recover it. 00:37:45.502 [2024-11-18 12:06:10.999250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.502 [2024-11-18 12:06:10.999289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.502 qpair failed and we were unable to recover it. 00:37:45.502 [2024-11-18 12:06:10.999442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.502 [2024-11-18 12:06:10.999524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.502 qpair failed and we were unable to recover it. 00:37:45.502 [2024-11-18 12:06:10.999673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.502 [2024-11-18 12:06:10.999721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.502 qpair failed and we were unable to recover it. 00:37:45.502 [2024-11-18 12:06:10.999871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.502 [2024-11-18 12:06:10.999912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.502 qpair failed and we were unable to recover it. 
00:37:45.502 [2024-11-18 12:06:11.000032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.502 [2024-11-18 12:06:11.000071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.502 qpair failed and we were unable to recover it. 00:37:45.502 [2024-11-18 12:06:11.000255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.502 [2024-11-18 12:06:11.000316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.502 qpair failed and we were unable to recover it. 00:37:45.502 [2024-11-18 12:06:11.000467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.502 [2024-11-18 12:06:11.000511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.502 qpair failed and we were unable to recover it. 00:37:45.502 [2024-11-18 12:06:11.000645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.502 [2024-11-18 12:06:11.000680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.502 qpair failed and we were unable to recover it. 00:37:45.502 [2024-11-18 12:06:11.000784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.502 [2024-11-18 12:06:11.000818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.502 qpair failed and we were unable to recover it. 
00:37:45.502 [2024-11-18 12:06:11.000945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.502 [2024-11-18 12:06:11.000981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.502 qpair failed and we were unable to recover it. 00:37:45.502 [2024-11-18 12:06:11.001106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.502 [2024-11-18 12:06:11.001144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.502 qpair failed and we were unable to recover it. 00:37:45.502 [2024-11-18 12:06:11.001252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.502 [2024-11-18 12:06:11.001290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.502 qpair failed and we were unable to recover it. 00:37:45.502 [2024-11-18 12:06:11.001416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.502 [2024-11-18 12:06:11.001449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.502 qpair failed and we were unable to recover it. 00:37:45.502 [2024-11-18 12:06:11.001597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.502 [2024-11-18 12:06:11.001631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.502 qpair failed and we were unable to recover it. 
00:37:45.502 [2024-11-18 12:06:11.001747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.502 [2024-11-18 12:06:11.001796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.502 qpair failed and we were unable to recover it. 00:37:45.502 [2024-11-18 12:06:11.001925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.502 [2024-11-18 12:06:11.001969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.502 qpair failed and we were unable to recover it. 00:37:45.502 [2024-11-18 12:06:11.002116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.502 [2024-11-18 12:06:11.002153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.502 qpair failed and we were unable to recover it. 00:37:45.502 [2024-11-18 12:06:11.002282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.502 [2024-11-18 12:06:11.002319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.502 qpair failed and we were unable to recover it. 00:37:45.502 [2024-11-18 12:06:11.002443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.502 [2024-11-18 12:06:11.002478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.502 qpair failed and we were unable to recover it. 
00:37:45.502 [2024-11-18 12:06:11.002594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.502 [2024-11-18 12:06:11.002629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.502 qpair failed and we were unable to recover it. 
00:37:45.502 [... the same connect()-failed / qpair-failed triple repeats continuously from 12:06:11.002594 through 12:06:11.023071, cycling over tqpairs 0x6150001ffe80, 0x6150001f2f00, 0x615000210000, and 0x61500021ff00, every attempt targeting addr=10.0.0.2, port=4420 with errno = 111 ...]
00:37:45.505 [2024-11-18 12:06:11.023205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.505 [2024-11-18 12:06:11.023242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.505 qpair failed and we were unable to recover it. 00:37:45.505 [2024-11-18 12:06:11.023389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.505 [2024-11-18 12:06:11.023431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.505 qpair failed and we were unable to recover it. 00:37:45.505 [2024-11-18 12:06:11.023568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.505 [2024-11-18 12:06:11.023603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.505 qpair failed and we were unable to recover it. 00:37:45.505 [2024-11-18 12:06:11.023733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.505 [2024-11-18 12:06:11.023765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.505 qpair failed and we were unable to recover it. 00:37:45.505 [2024-11-18 12:06:11.023891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.505 [2024-11-18 12:06:11.023928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.505 qpair failed and we were unable to recover it. 
00:37:45.505 [2024-11-18 12:06:11.024068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.505 [2024-11-18 12:06:11.024106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.505 qpair failed and we were unable to recover it. 00:37:45.505 [2024-11-18 12:06:11.024255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.505 [2024-11-18 12:06:11.024307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.505 qpair failed and we were unable to recover it. 00:37:45.505 [2024-11-18 12:06:11.024460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.505 [2024-11-18 12:06:11.024517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.505 qpair failed and we were unable to recover it. 00:37:45.505 [2024-11-18 12:06:11.024674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.505 [2024-11-18 12:06:11.024712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.505 qpair failed and we were unable to recover it. 00:37:45.505 [2024-11-18 12:06:11.024840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.505 [2024-11-18 12:06:11.024879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.505 qpair failed and we were unable to recover it. 
00:37:45.505 [2024-11-18 12:06:11.025075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.505 [2024-11-18 12:06:11.025113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.505 qpair failed and we were unable to recover it. 00:37:45.505 [2024-11-18 12:06:11.025321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.505 [2024-11-18 12:06:11.025359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.505 qpair failed and we were unable to recover it. 00:37:45.505 [2024-11-18 12:06:11.025517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.505 [2024-11-18 12:06:11.025569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.505 qpair failed and we were unable to recover it. 00:37:45.505 [2024-11-18 12:06:11.025707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.505 [2024-11-18 12:06:11.025740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.505 qpair failed and we were unable to recover it. 00:37:45.505 [2024-11-18 12:06:11.025944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.506 [2024-11-18 12:06:11.026004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.506 qpair failed and we were unable to recover it. 
00:37:45.506 [2024-11-18 12:06:11.026153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.506 [2024-11-18 12:06:11.026190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.506 qpair failed and we were unable to recover it. 00:37:45.506 [2024-11-18 12:06:11.026321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.506 [2024-11-18 12:06:11.026354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.506 qpair failed and we were unable to recover it. 00:37:45.506 [2024-11-18 12:06:11.026563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.506 [2024-11-18 12:06:11.026597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.506 qpair failed and we were unable to recover it. 00:37:45.506 [2024-11-18 12:06:11.026708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.506 [2024-11-18 12:06:11.026742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.506 qpair failed and we were unable to recover it. 00:37:45.506 [2024-11-18 12:06:11.026852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.506 [2024-11-18 12:06:11.026885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.506 qpair failed and we were unable to recover it. 
00:37:45.506 [2024-11-18 12:06:11.027042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.506 [2024-11-18 12:06:11.027079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.506 qpair failed and we were unable to recover it. 00:37:45.506 [2024-11-18 12:06:11.027263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.506 [2024-11-18 12:06:11.027299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.506 qpair failed and we were unable to recover it. 00:37:45.506 [2024-11-18 12:06:11.027418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.506 [2024-11-18 12:06:11.027455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.506 qpair failed and we were unable to recover it. 00:37:45.506 [2024-11-18 12:06:11.027633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.506 [2024-11-18 12:06:11.027681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.506 qpair failed and we were unable to recover it. 00:37:45.506 [2024-11-18 12:06:11.027808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.506 [2024-11-18 12:06:11.027844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.506 qpair failed and we were unable to recover it. 
00:37:45.506 [2024-11-18 12:06:11.027978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.506 [2024-11-18 12:06:11.028032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.506 qpair failed and we were unable to recover it. 00:37:45.506 [2024-11-18 12:06:11.028153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.506 [2024-11-18 12:06:11.028192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.506 qpair failed and we were unable to recover it. 00:37:45.506 [2024-11-18 12:06:11.028319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.506 [2024-11-18 12:06:11.028371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.506 qpair failed and we were unable to recover it. 00:37:45.506 [2024-11-18 12:06:11.028582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.506 [2024-11-18 12:06:11.028631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.506 qpair failed and we were unable to recover it. 00:37:45.506 [2024-11-18 12:06:11.028758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.506 [2024-11-18 12:06:11.028811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.506 qpair failed and we were unable to recover it. 
00:37:45.506 [2024-11-18 12:06:11.028949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.506 [2024-11-18 12:06:11.029018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.506 qpair failed and we were unable to recover it. 00:37:45.506 [2024-11-18 12:06:11.029174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.506 [2024-11-18 12:06:11.029237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.506 qpair failed and we were unable to recover it. 00:37:45.506 [2024-11-18 12:06:11.029355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.506 [2024-11-18 12:06:11.029392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.506 qpair failed and we were unable to recover it. 00:37:45.506 [2024-11-18 12:06:11.029544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.506 [2024-11-18 12:06:11.029578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.506 qpair failed and we were unable to recover it. 00:37:45.506 [2024-11-18 12:06:11.029689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.506 [2024-11-18 12:06:11.029723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.506 qpair failed and we were unable to recover it. 
00:37:45.506 [2024-11-18 12:06:11.029883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.506 [2024-11-18 12:06:11.029921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.506 qpair failed and we were unable to recover it. 00:37:45.506 [2024-11-18 12:06:11.030071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.506 [2024-11-18 12:06:11.030109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.506 qpair failed and we were unable to recover it. 00:37:45.506 [2024-11-18 12:06:11.030230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.506 [2024-11-18 12:06:11.030267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.506 qpair failed and we were unable to recover it. 00:37:45.506 [2024-11-18 12:06:11.030409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.506 [2024-11-18 12:06:11.030447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.506 qpair failed and we were unable to recover it. 00:37:45.506 [2024-11-18 12:06:11.030588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.506 [2024-11-18 12:06:11.030622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.506 qpair failed and we were unable to recover it. 
00:37:45.506 [2024-11-18 12:06:11.030717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.506 [2024-11-18 12:06:11.030770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.506 qpair failed and we were unable to recover it. 00:37:45.506 [2024-11-18 12:06:11.030907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.506 [2024-11-18 12:06:11.030953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.506 qpair failed and we were unable to recover it. 00:37:45.506 [2024-11-18 12:06:11.031064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.506 [2024-11-18 12:06:11.031101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.506 qpair failed and we were unable to recover it. 00:37:45.506 [2024-11-18 12:06:11.031238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.506 [2024-11-18 12:06:11.031275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.506 qpair failed and we were unable to recover it. 00:37:45.506 [2024-11-18 12:06:11.031415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.506 [2024-11-18 12:06:11.031452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.506 qpair failed and we were unable to recover it. 
00:37:45.506 [2024-11-18 12:06:11.031588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.507 [2024-11-18 12:06:11.031622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.507 qpair failed and we were unable to recover it. 00:37:45.507 [2024-11-18 12:06:11.031756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.507 [2024-11-18 12:06:11.031789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.507 qpair failed and we were unable to recover it. 00:37:45.507 [2024-11-18 12:06:11.031969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.507 [2024-11-18 12:06:11.032023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.507 qpair failed and we were unable to recover it. 00:37:45.507 [2024-11-18 12:06:11.032265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.507 [2024-11-18 12:06:11.032309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.507 qpair failed and we were unable to recover it. 00:37:45.507 [2024-11-18 12:06:11.032503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.507 [2024-11-18 12:06:11.032550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.507 qpair failed and we were unable to recover it. 
00:37:45.507 [2024-11-18 12:06:11.032692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.507 [2024-11-18 12:06:11.032727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.507 qpair failed and we were unable to recover it. 00:37:45.507 [2024-11-18 12:06:11.032832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.507 [2024-11-18 12:06:11.032865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.507 qpair failed and we were unable to recover it. 00:37:45.507 [2024-11-18 12:06:11.032997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.507 [2024-11-18 12:06:11.033048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.507 qpair failed and we were unable to recover it. 00:37:45.507 [2024-11-18 12:06:11.033192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.507 [2024-11-18 12:06:11.033229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.507 qpair failed and we were unable to recover it. 00:37:45.507 [2024-11-18 12:06:11.033354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.507 [2024-11-18 12:06:11.033407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.507 qpair failed and we were unable to recover it. 
00:37:45.507 [2024-11-18 12:06:11.033540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.507 [2024-11-18 12:06:11.033592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.507 qpair failed and we were unable to recover it. 00:37:45.507 [2024-11-18 12:06:11.033773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.507 [2024-11-18 12:06:11.033810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.507 qpair failed and we were unable to recover it. 00:37:45.507 [2024-11-18 12:06:11.033959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.507 [2024-11-18 12:06:11.034012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.507 qpair failed and we were unable to recover it. 00:37:45.507 [2024-11-18 12:06:11.034187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.507 [2024-11-18 12:06:11.034225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.507 qpair failed and we were unable to recover it. 00:37:45.507 [2024-11-18 12:06:11.034365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.507 [2024-11-18 12:06:11.034399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.507 qpair failed and we were unable to recover it. 
00:37:45.507 [2024-11-18 12:06:11.034561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.507 [2024-11-18 12:06:11.034595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.507 qpair failed and we were unable to recover it. 00:37:45.507 [2024-11-18 12:06:11.034775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.507 [2024-11-18 12:06:11.034813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.507 qpair failed and we were unable to recover it. 00:37:45.507 [2024-11-18 12:06:11.034928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.507 [2024-11-18 12:06:11.034967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.507 qpair failed and we were unable to recover it. 00:37:45.507 [2024-11-18 12:06:11.035142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.507 [2024-11-18 12:06:11.035179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.507 qpair failed and we were unable to recover it. 00:37:45.507 [2024-11-18 12:06:11.035298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.507 [2024-11-18 12:06:11.035335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.507 qpair failed and we were unable to recover it. 
00:37:45.507 [2024-11-18 12:06:11.035473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.507 [2024-11-18 12:06:11.035514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.507 qpair failed and we were unable to recover it. 00:37:45.507 [2024-11-18 12:06:11.035620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.507 [2024-11-18 12:06:11.035653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.507 qpair failed and we were unable to recover it. 00:37:45.507 [2024-11-18 12:06:11.035795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.507 [2024-11-18 12:06:11.035847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.507 qpair failed and we were unable to recover it. 00:37:45.507 [2024-11-18 12:06:11.035967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.507 [2024-11-18 12:06:11.036009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.507 qpair failed and we were unable to recover it. 00:37:45.507 [2024-11-18 12:06:11.036215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.507 [2024-11-18 12:06:11.036252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.507 qpair failed and we were unable to recover it. 
00:37:45.507 [2024-11-18 12:06:11.036365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.507 [2024-11-18 12:06:11.036402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.507 qpair failed and we were unable to recover it. 00:37:45.507 [2024-11-18 12:06:11.036550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.507 [2024-11-18 12:06:11.036603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.507 qpair failed and we were unable to recover it. 00:37:45.507 [2024-11-18 12:06:11.036723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.507 [2024-11-18 12:06:11.036757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.507 qpair failed and we were unable to recover it. 00:37:45.507 [2024-11-18 12:06:11.036936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.507 [2024-11-18 12:06:11.036974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.507 qpair failed and we were unable to recover it. 00:37:45.507 [2024-11-18 12:06:11.037117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.507 [2024-11-18 12:06:11.037154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.507 qpair failed and we were unable to recover it. 
00:37:45.507 [2024-11-18 12:06:11.037271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.507 [2024-11-18 12:06:11.037308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.507 qpair failed and we were unable to recover it. 00:37:45.507 [2024-11-18 12:06:11.037440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.507 [2024-11-18 12:06:11.037516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.507 qpair failed and we were unable to recover it. 00:37:45.507 [2024-11-18 12:06:11.037655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.507 [2024-11-18 12:06:11.037690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.507 qpair failed and we were unable to recover it. 00:37:45.507 [2024-11-18 12:06:11.037814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.507 [2024-11-18 12:06:11.037865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.507 qpair failed and we were unable to recover it. 00:37:45.507 [2024-11-18 12:06:11.038015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.507 [2024-11-18 12:06:11.038052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.507 qpair failed and we were unable to recover it. 
00:37:45.507 [2024-11-18 12:06:11.038213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.507 [2024-11-18 12:06:11.038251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.507 qpair failed and we were unable to recover it. 00:37:45.507 [2024-11-18 12:06:11.038406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.507 [2024-11-18 12:06:11.038439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.507 qpair failed and we were unable to recover it. 00:37:45.507 [2024-11-18 12:06:11.038580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.507 [2024-11-18 12:06:11.038629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.508 qpair failed and we were unable to recover it. 00:37:45.508 [2024-11-18 12:06:11.038736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.508 [2024-11-18 12:06:11.038771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.508 qpair failed and we were unable to recover it. 00:37:45.508 [2024-11-18 12:06:11.038904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.508 [2024-11-18 12:06:11.038939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.508 qpair failed and we were unable to recover it. 
00:37:45.508 [2024-11-18 12:06:11.039041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.508 [2024-11-18 12:06:11.039075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.508 qpair failed and we were unable to recover it.
00:37:45.508 [2024-11-18 12:06:11.039215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.508 [2024-11-18 12:06:11.039249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.508 qpair failed and we were unable to recover it.
00:37:45.508 [2024-11-18 12:06:11.039381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.508 [2024-11-18 12:06:11.039415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.508 qpair failed and we were unable to recover it.
00:37:45.508 [2024-11-18 12:06:11.039555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.508 [2024-11-18 12:06:11.039590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.508 qpair failed and we were unable to recover it.
00:37:45.508 [2024-11-18 12:06:11.039709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.508 [2024-11-18 12:06:11.039757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.508 qpair failed and we were unable to recover it.
00:37:45.508 [2024-11-18 12:06:11.039896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.508 [2024-11-18 12:06:11.039933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.508 qpair failed and we were unable to recover it.
00:37:45.508 [2024-11-18 12:06:11.040049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.508 [2024-11-18 12:06:11.040084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.508 qpair failed and we were unable to recover it.
00:37:45.508 [2024-11-18 12:06:11.040223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.508 [2024-11-18 12:06:11.040258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.508 qpair failed and we were unable to recover it.
00:37:45.508 [2024-11-18 12:06:11.040391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.508 [2024-11-18 12:06:11.040425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.508 qpair failed and we were unable to recover it.
00:37:45.508 [2024-11-18 12:06:11.040537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.508 [2024-11-18 12:06:11.040572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.508 qpair failed and we were unable to recover it.
00:37:45.508 [2024-11-18 12:06:11.040688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.508 [2024-11-18 12:06:11.040723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.508 qpair failed and we were unable to recover it.
00:37:45.508 [2024-11-18 12:06:11.040863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.508 [2024-11-18 12:06:11.040898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.508 qpair failed and we were unable to recover it.
00:37:45.508 [2024-11-18 12:06:11.041031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.508 [2024-11-18 12:06:11.041066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.508 qpair failed and we were unable to recover it.
00:37:45.508 [2024-11-18 12:06:11.041240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.508 [2024-11-18 12:06:11.041277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.508 qpair failed and we were unable to recover it.
00:37:45.508 [2024-11-18 12:06:11.041449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.508 [2024-11-18 12:06:11.041483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.508 qpair failed and we were unable to recover it.
00:37:45.508 [2024-11-18 12:06:11.041653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.508 [2024-11-18 12:06:11.041697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.508 qpair failed and we were unable to recover it.
00:37:45.508 [2024-11-18 12:06:11.041860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.508 [2024-11-18 12:06:11.041899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.508 qpair failed and we were unable to recover it.
00:37:45.508 [2024-11-18 12:06:11.042055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.508 [2024-11-18 12:06:11.042089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.508 qpair failed and we were unable to recover it.
00:37:45.508 [2024-11-18 12:06:11.042198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.508 [2024-11-18 12:06:11.042234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.508 qpair failed and we were unable to recover it.
00:37:45.508 [2024-11-18 12:06:11.042413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.508 [2024-11-18 12:06:11.042450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.508 qpair failed and we were unable to recover it.
00:37:45.508 [2024-11-18 12:06:11.042622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.508 [2024-11-18 12:06:11.042656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.508 qpair failed and we were unable to recover it.
00:37:45.508 [2024-11-18 12:06:11.042760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.508 [2024-11-18 12:06:11.042794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.508 qpair failed and we were unable to recover it.
00:37:45.508 [2024-11-18 12:06:11.042925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.508 [2024-11-18 12:06:11.042958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.508 qpair failed and we were unable to recover it.
00:37:45.508 [2024-11-18 12:06:11.043116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.508 [2024-11-18 12:06:11.043154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.508 qpair failed and we were unable to recover it.
00:37:45.508 [2024-11-18 12:06:11.043304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.508 [2024-11-18 12:06:11.043342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.508 qpair failed and we were unable to recover it.
00:37:45.508 [2024-11-18 12:06:11.043473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.508 [2024-11-18 12:06:11.043515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.508 qpair failed and we were unable to recover it.
00:37:45.508 [2024-11-18 12:06:11.043622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.508 [2024-11-18 12:06:11.043655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.508 qpair failed and we were unable to recover it.
00:37:45.508 [2024-11-18 12:06:11.043803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.508 [2024-11-18 12:06:11.043837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.508 qpair failed and we were unable to recover it.
00:37:45.508 [2024-11-18 12:06:11.043994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.508 [2024-11-18 12:06:11.044031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.508 qpair failed and we were unable to recover it.
00:37:45.508 [2024-11-18 12:06:11.044184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.508 [2024-11-18 12:06:11.044218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.508 qpair failed and we were unable to recover it.
00:37:45.508 [2024-11-18 12:06:11.044359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.508 [2024-11-18 12:06:11.044414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.508 qpair failed and we were unable to recover it.
00:37:45.508 [2024-11-18 12:06:11.044576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.508 [2024-11-18 12:06:11.044611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.508 qpair failed and we were unable to recover it.
00:37:45.508 [2024-11-18 12:06:11.044728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.508 [2024-11-18 12:06:11.044763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.508 qpair failed and we were unable to recover it.
00:37:45.508 [2024-11-18 12:06:11.044867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.508 [2024-11-18 12:06:11.044901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.508 qpair failed and we were unable to recover it.
00:37:45.508 [2024-11-18 12:06:11.045007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.508 [2024-11-18 12:06:11.045042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.508 qpair failed and we were unable to recover it.
00:37:45.508 [2024-11-18 12:06:11.045174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.509 [2024-11-18 12:06:11.045208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.509 qpair failed and we were unable to recover it.
00:37:45.509 [2024-11-18 12:06:11.045318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.509 [2024-11-18 12:06:11.045353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.509 qpair failed and we were unable to recover it.
00:37:45.509 [2024-11-18 12:06:11.045544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.509 [2024-11-18 12:06:11.045579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.509 qpair failed and we were unable to recover it.
00:37:45.509 [2024-11-18 12:06:11.045712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.509 [2024-11-18 12:06:11.045745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.509 qpair failed and we were unable to recover it.
00:37:45.509 [2024-11-18 12:06:11.045879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.509 [2024-11-18 12:06:11.045933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.509 qpair failed and we were unable to recover it.
00:37:45.509 [2024-11-18 12:06:11.046078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.509 [2024-11-18 12:06:11.046115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.509 qpair failed and we were unable to recover it.
00:37:45.509 [2024-11-18 12:06:11.046271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.509 [2024-11-18 12:06:11.046304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.509 qpair failed and we were unable to recover it.
00:37:45.509 [2024-11-18 12:06:11.046439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.509 [2024-11-18 12:06:11.046507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.509 qpair failed and we were unable to recover it.
00:37:45.509 [2024-11-18 12:06:11.046640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.509 [2024-11-18 12:06:11.046674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.509 qpair failed and we were unable to recover it.
00:37:45.509 [2024-11-18 12:06:11.046799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.509 [2024-11-18 12:06:11.046832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.509 qpair failed and we were unable to recover it.
00:37:45.509 [2024-11-18 12:06:11.046942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.509 [2024-11-18 12:06:11.046976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.509 qpair failed and we were unable to recover it.
00:37:45.509 [2024-11-18 12:06:11.047108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.509 [2024-11-18 12:06:11.047145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.509 qpair failed and we were unable to recover it.
00:37:45.509 [2024-11-18 12:06:11.047318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.509 [2024-11-18 12:06:11.047354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.509 qpair failed and we were unable to recover it.
00:37:45.509 [2024-11-18 12:06:11.047506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.509 [2024-11-18 12:06:11.047561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.509 qpair failed and we were unable to recover it.
00:37:45.509 [2024-11-18 12:06:11.047701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.509 [2024-11-18 12:06:11.047735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.509 qpair failed and we were unable to recover it.
00:37:45.509 [2024-11-18 12:06:11.047899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.509 [2024-11-18 12:06:11.047932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.509 qpair failed and we were unable to recover it.
00:37:45.509 [2024-11-18 12:06:11.048083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.509 [2024-11-18 12:06:11.048121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.509 qpair failed and we were unable to recover it.
00:37:45.509 [2024-11-18 12:06:11.048264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.509 [2024-11-18 12:06:11.048301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.509 qpair failed and we were unable to recover it.
00:37:45.509 [2024-11-18 12:06:11.048448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.509 [2024-11-18 12:06:11.048485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.509 qpair failed and we were unable to recover it.
00:37:45.509 [2024-11-18 12:06:11.048627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.509 [2024-11-18 12:06:11.048661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.509 qpair failed and we were unable to recover it.
00:37:45.509 [2024-11-18 12:06:11.048769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.509 [2024-11-18 12:06:11.048803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.509 qpair failed and we were unable to recover it.
00:37:45.509 [2024-11-18 12:06:11.048963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.509 [2024-11-18 12:06:11.048996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.509 qpair failed and we were unable to recover it.
00:37:45.509 [2024-11-18 12:06:11.049149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.509 [2024-11-18 12:06:11.049186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.509 qpair failed and we were unable to recover it.
00:37:45.509 [2024-11-18 12:06:11.049327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.509 [2024-11-18 12:06:11.049364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.509 qpair failed and we were unable to recover it.
00:37:45.509 [2024-11-18 12:06:11.049542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.509 [2024-11-18 12:06:11.049575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.509 qpair failed and we were unable to recover it.
00:37:45.509 [2024-11-18 12:06:11.049684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.509 [2024-11-18 12:06:11.049719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.509 qpair failed and we were unable to recover it.
00:37:45.509 [2024-11-18 12:06:11.049841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.509 [2024-11-18 12:06:11.049878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.509 qpair failed and we were unable to recover it.
00:37:45.509 [2024-11-18 12:06:11.050003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.509 [2024-11-18 12:06:11.050054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.509 qpair failed and we were unable to recover it.
00:37:45.509 [2024-11-18 12:06:11.050233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.509 [2024-11-18 12:06:11.050274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.509 qpair failed and we were unable to recover it.
00:37:45.509 [2024-11-18 12:06:11.050380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.509 [2024-11-18 12:06:11.050430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.509 qpair failed and we were unable to recover it.
00:37:45.509 [2024-11-18 12:06:11.050612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.509 [2024-11-18 12:06:11.050680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.509 qpair failed and we were unable to recover it.
00:37:45.509 [2024-11-18 12:06:11.050846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.509 [2024-11-18 12:06:11.050895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.509 qpair failed and we were unable to recover it.
00:37:45.509 [2024-11-18 12:06:11.051013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.509 [2024-11-18 12:06:11.051050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.509 qpair failed and we were unable to recover it.
00:37:45.509 [2024-11-18 12:06:11.051229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.509 [2024-11-18 12:06:11.051267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.509 qpair failed and we were unable to recover it.
00:37:45.509 [2024-11-18 12:06:11.051425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.509 [2024-11-18 12:06:11.051463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.509 qpair failed and we were unable to recover it.
00:37:45.509 [2024-11-18 12:06:11.051617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.509 [2024-11-18 12:06:11.051664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.509 qpair failed and we were unable to recover it.
00:37:45.509 [2024-11-18 12:06:11.051828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.509 [2024-11-18 12:06:11.051866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.509 qpair failed and we were unable to recover it.
00:37:45.509 [2024-11-18 12:06:11.052046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.509 [2024-11-18 12:06:11.052084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.510 qpair failed and we were unable to recover it.
00:37:45.510 [2024-11-18 12:06:11.052214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.510 [2024-11-18 12:06:11.052248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.510 qpair failed and we were unable to recover it.
00:37:45.510 [2024-11-18 12:06:11.052382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.510 [2024-11-18 12:06:11.052415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.510 qpair failed and we were unable to recover it.
00:37:45.510 [2024-11-18 12:06:11.052575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.510 [2024-11-18 12:06:11.052609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.510 qpair failed and we were unable to recover it.
00:37:45.510 [2024-11-18 12:06:11.052734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.510 [2024-11-18 12:06:11.052770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.510 qpair failed and we were unable to recover it.
00:37:45.510 [2024-11-18 12:06:11.052951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.510 [2024-11-18 12:06:11.052987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.510 qpair failed and we were unable to recover it.
00:37:45.510 [2024-11-18 12:06:11.053160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.510 [2024-11-18 12:06:11.053213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.510 qpair failed and we were unable to recover it.
00:37:45.510 [2024-11-18 12:06:11.053329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.510 [2024-11-18 12:06:11.053369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.510 qpair failed and we were unable to recover it.
00:37:45.510 [2024-11-18 12:06:11.053545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.510 [2024-11-18 12:06:11.053580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.510 qpair failed and we were unable to recover it.
00:37:45.510 [2024-11-18 12:06:11.053691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.510 [2024-11-18 12:06:11.053725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.510 qpair failed and we were unable to recover it.
00:37:45.510 [2024-11-18 12:06:11.053884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.510 [2024-11-18 12:06:11.053918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.510 qpair failed and we were unable to recover it.
00:37:45.510 [2024-11-18 12:06:11.054030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.510 [2024-11-18 12:06:11.054064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.510 qpair failed and we were unable to recover it.
00:37:45.510 [2024-11-18 12:06:11.054255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.510 [2024-11-18 12:06:11.054321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.510 qpair failed and we were unable to recover it.
00:37:45.510 [2024-11-18 12:06:11.054484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.510 [2024-11-18 12:06:11.054546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.510 qpair failed and we were unable to recover it.
00:37:45.510 [2024-11-18 12:06:11.054717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.510 [2024-11-18 12:06:11.054764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.510 qpair failed and we were unable to recover it.
00:37:45.510 [2024-11-18 12:06:11.054882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.510 [2024-11-18 12:06:11.054936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.510 qpair failed and we were unable to recover it.
00:37:45.510 [2024-11-18 12:06:11.055086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.510 [2024-11-18 12:06:11.055125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.510 qpair failed and we were unable to recover it.
00:37:45.510 [2024-11-18 12:06:11.055349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.510 [2024-11-18 12:06:11.055407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.510 qpair failed and we were unable to recover it.
00:37:45.510 [2024-11-18 12:06:11.055573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.510 [2024-11-18 12:06:11.055609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.510 qpair failed and we were unable to recover it.
00:37:45.510 [2024-11-18 12:06:11.055733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.510 [2024-11-18 12:06:11.055773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.510 qpair failed and we were unable to recover it.
00:37:45.510 [2024-11-18 12:06:11.055929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.510 [2024-11-18 12:06:11.055982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.510 qpair failed and we were unable to recover it.
00:37:45.510 [2024-11-18 12:06:11.056165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.510 [2024-11-18 12:06:11.056216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.510 qpair failed and we were unable to recover it.
00:37:45.510 [2024-11-18 12:06:11.056328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.510 [2024-11-18 12:06:11.056362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.510 qpair failed and we were unable to recover it.
00:37:45.510 [2024-11-18 12:06:11.056503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.510 [2024-11-18 12:06:11.056539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.510 qpair failed and we were unable to recover it. 00:37:45.510 [2024-11-18 12:06:11.056697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.510 [2024-11-18 12:06:11.056732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.510 qpair failed and we were unable to recover it. 00:37:45.510 [2024-11-18 12:06:11.056861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.510 [2024-11-18 12:06:11.056913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.510 qpair failed and we were unable to recover it. 00:37:45.510 [2024-11-18 12:06:11.057047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.510 [2024-11-18 12:06:11.057082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.510 qpair failed and we were unable to recover it. 00:37:45.510 [2024-11-18 12:06:11.057225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.510 [2024-11-18 12:06:11.057261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.510 qpair failed and we were unable to recover it. 
00:37:45.510 [2024-11-18 12:06:11.057404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.510 [2024-11-18 12:06:11.057438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.510 qpair failed and we were unable to recover it. 00:37:45.510 [2024-11-18 12:06:11.057591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.510 [2024-11-18 12:06:11.057629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.510 qpair failed and we were unable to recover it. 00:37:45.510 [2024-11-18 12:06:11.057774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.510 [2024-11-18 12:06:11.057812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.510 qpair failed and we were unable to recover it. 00:37:45.510 [2024-11-18 12:06:11.057955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.510 [2024-11-18 12:06:11.058010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.510 qpair failed and we were unable to recover it. 00:37:45.510 [2024-11-18 12:06:11.058164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.510 [2024-11-18 12:06:11.058201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.510 qpair failed and we were unable to recover it. 
00:37:45.510 [2024-11-18 12:06:11.058388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.510 [2024-11-18 12:06:11.058424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.510 qpair failed and we were unable to recover it. 00:37:45.510 [2024-11-18 12:06:11.058585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.510 [2024-11-18 12:06:11.058634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.510 qpair failed and we were unable to recover it. 00:37:45.510 [2024-11-18 12:06:11.058763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.510 [2024-11-18 12:06:11.058830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.510 qpair failed and we were unable to recover it. 00:37:45.510 [2024-11-18 12:06:11.058984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.510 [2024-11-18 12:06:11.059022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.510 qpair failed and we were unable to recover it. 00:37:45.510 [2024-11-18 12:06:11.059221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.510 [2024-11-18 12:06:11.059259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.510 qpair failed and we were unable to recover it. 
00:37:45.510 [2024-11-18 12:06:11.059430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.511 [2024-11-18 12:06:11.059468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.511 qpair failed and we were unable to recover it. 00:37:45.511 [2024-11-18 12:06:11.059634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.511 [2024-11-18 12:06:11.059668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.511 qpair failed and we were unable to recover it. 00:37:45.511 [2024-11-18 12:06:11.059838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.511 [2024-11-18 12:06:11.059905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.511 qpair failed and we were unable to recover it. 00:37:45.511 [2024-11-18 12:06:11.060135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.511 [2024-11-18 12:06:11.060196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.511 qpair failed and we were unable to recover it. 00:37:45.511 [2024-11-18 12:06:11.060301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.511 [2024-11-18 12:06:11.060336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.511 qpair failed and we were unable to recover it. 
00:37:45.511 [2024-11-18 12:06:11.060507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.511 [2024-11-18 12:06:11.060542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.511 qpair failed and we were unable to recover it. 00:37:45.511 [2024-11-18 12:06:11.060720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.511 [2024-11-18 12:06:11.060774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.511 qpair failed and we were unable to recover it. 00:37:45.511 [2024-11-18 12:06:11.060983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.511 [2024-11-18 12:06:11.061043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.511 qpair failed and we were unable to recover it. 00:37:45.511 [2024-11-18 12:06:11.061163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.511 [2024-11-18 12:06:11.061203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.511 qpair failed and we were unable to recover it. 00:37:45.511 [2024-11-18 12:06:11.061357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.511 [2024-11-18 12:06:11.061392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.511 qpair failed and we were unable to recover it. 
00:37:45.511 [2024-11-18 12:06:11.061524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.511 [2024-11-18 12:06:11.061571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.511 qpair failed and we were unable to recover it. 00:37:45.511 [2024-11-18 12:06:11.061732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.511 [2024-11-18 12:06:11.061797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.511 qpair failed and we were unable to recover it. 00:37:45.511 [2024-11-18 12:06:11.062018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.511 [2024-11-18 12:06:11.062058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.511 qpair failed and we were unable to recover it. 00:37:45.511 [2024-11-18 12:06:11.062225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.511 [2024-11-18 12:06:11.062277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.511 qpair failed and we were unable to recover it. 00:37:45.511 [2024-11-18 12:06:11.062473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.511 [2024-11-18 12:06:11.062515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.511 qpair failed and we were unable to recover it. 
00:37:45.511 [2024-11-18 12:06:11.062689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.511 [2024-11-18 12:06:11.062723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.511 qpair failed and we were unable to recover it. 00:37:45.511 [2024-11-18 12:06:11.062842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.511 [2024-11-18 12:06:11.062878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.511 qpair failed and we were unable to recover it. 00:37:45.511 [2024-11-18 12:06:11.063047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.511 [2024-11-18 12:06:11.063084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.511 qpair failed and we were unable to recover it. 00:37:45.511 [2024-11-18 12:06:11.063297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.511 [2024-11-18 12:06:11.063331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.511 qpair failed and we were unable to recover it. 00:37:45.511 [2024-11-18 12:06:11.063507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.511 [2024-11-18 12:06:11.063557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.511 qpair failed and we were unable to recover it. 
00:37:45.511 [2024-11-18 12:06:11.063692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.511 [2024-11-18 12:06:11.063730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.511 qpair failed and we were unable to recover it. 00:37:45.511 [2024-11-18 12:06:11.063861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.511 [2024-11-18 12:06:11.063898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.511 qpair failed and we were unable to recover it. 00:37:45.511 [2024-11-18 12:06:11.064081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.511 [2024-11-18 12:06:11.064148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.511 qpair failed and we were unable to recover it. 00:37:45.511 [2024-11-18 12:06:11.064371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.511 [2024-11-18 12:06:11.064406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.511 qpair failed and we were unable to recover it. 00:37:45.511 [2024-11-18 12:06:11.064534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.511 [2024-11-18 12:06:11.064568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.511 qpair failed and we were unable to recover it. 
00:37:45.511 [2024-11-18 12:06:11.064705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.511 [2024-11-18 12:06:11.064740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.511 qpair failed and we were unable to recover it. 00:37:45.511 [2024-11-18 12:06:11.064978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.511 [2024-11-18 12:06:11.065033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.511 qpair failed and we were unable to recover it. 00:37:45.511 [2024-11-18 12:06:11.065165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.511 [2024-11-18 12:06:11.065216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.511 qpair failed and we were unable to recover it. 00:37:45.511 [2024-11-18 12:06:11.065371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.511 [2024-11-18 12:06:11.065408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.511 qpair failed and we were unable to recover it. 00:37:45.511 [2024-11-18 12:06:11.065594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.511 [2024-11-18 12:06:11.065643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.511 qpair failed and we were unable to recover it. 
00:37:45.511 [2024-11-18 12:06:11.065799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.511 [2024-11-18 12:06:11.065836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.511 qpair failed and we were unable to recover it. 00:37:45.511 [2024-11-18 12:06:11.065972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.511 [2024-11-18 12:06:11.066006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.511 qpair failed and we were unable to recover it. 00:37:45.511 [2024-11-18 12:06:11.066141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.511 [2024-11-18 12:06:11.066175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.512 qpair failed and we were unable to recover it. 00:37:45.512 [2024-11-18 12:06:11.066331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.512 [2024-11-18 12:06:11.066375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.512 qpair failed and we were unable to recover it. 00:37:45.512 [2024-11-18 12:06:11.066512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.512 [2024-11-18 12:06:11.066576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.512 qpair failed and we were unable to recover it. 
00:37:45.512 [2024-11-18 12:06:11.066689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.512 [2024-11-18 12:06:11.066724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.512 qpair failed and we were unable to recover it. 00:37:45.512 [2024-11-18 12:06:11.066877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.512 [2024-11-18 12:06:11.066911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.512 qpair failed and we were unable to recover it. 00:37:45.512 [2024-11-18 12:06:11.067057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.512 [2024-11-18 12:06:11.067094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.512 qpair failed and we were unable to recover it. 00:37:45.512 [2024-11-18 12:06:11.067239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.512 [2024-11-18 12:06:11.067276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.512 qpair failed and we were unable to recover it. 00:37:45.512 [2024-11-18 12:06:11.067400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.512 [2024-11-18 12:06:11.067435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.512 qpair failed and we were unable to recover it. 
00:37:45.512 [2024-11-18 12:06:11.067580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.512 [2024-11-18 12:06:11.067617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.512 qpair failed and we were unable to recover it. 00:37:45.512 [2024-11-18 12:06:11.067763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.512 [2024-11-18 12:06:11.067800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.512 qpair failed and we were unable to recover it. 00:37:45.512 [2024-11-18 12:06:11.067930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.512 [2024-11-18 12:06:11.067968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.512 qpair failed and we were unable to recover it. 00:37:45.512 [2024-11-18 12:06:11.068115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.512 [2024-11-18 12:06:11.068153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.512 qpair failed and we were unable to recover it. 00:37:45.512 [2024-11-18 12:06:11.068327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.512 [2024-11-18 12:06:11.068364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.512 qpair failed and we were unable to recover it. 
00:37:45.512 [2024-11-18 12:06:11.068482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.512 [2024-11-18 12:06:11.068542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.512 qpair failed and we were unable to recover it. 00:37:45.512 [2024-11-18 12:06:11.068652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.512 [2024-11-18 12:06:11.068687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.512 qpair failed and we were unable to recover it. 00:37:45.512 [2024-11-18 12:06:11.068877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.512 [2024-11-18 12:06:11.068943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.512 qpair failed and we were unable to recover it. 00:37:45.512 [2024-11-18 12:06:11.069099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.512 [2024-11-18 12:06:11.069168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.512 qpair failed and we were unable to recover it. 00:37:45.512 [2024-11-18 12:06:11.069324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.512 [2024-11-18 12:06:11.069363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.512 qpair failed and we were unable to recover it. 
00:37:45.512 [2024-11-18 12:06:11.069513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.512 [2024-11-18 12:06:11.069566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.512 qpair failed and we were unable to recover it. 00:37:45.512 [2024-11-18 12:06:11.069692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.512 [2024-11-18 12:06:11.069741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.512 qpair failed and we were unable to recover it. 00:37:45.512 [2024-11-18 12:06:11.069900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.512 [2024-11-18 12:06:11.069953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.512 qpair failed and we were unable to recover it. 00:37:45.512 [2024-11-18 12:06:11.070121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.512 [2024-11-18 12:06:11.070175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.512 qpair failed and we were unable to recover it. 00:37:45.512 [2024-11-18 12:06:11.070317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.512 [2024-11-18 12:06:11.070352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.512 qpair failed and we were unable to recover it. 
00:37:45.512 [2024-11-18 12:06:11.070517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.512 [2024-11-18 12:06:11.070552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.512 qpair failed and we were unable to recover it. 00:37:45.512 [2024-11-18 12:06:11.070709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.512 [2024-11-18 12:06:11.070748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.512 qpair failed and we were unable to recover it. 00:37:45.512 [2024-11-18 12:06:11.070980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.512 [2024-11-18 12:06:11.071042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.512 qpair failed and we were unable to recover it. 00:37:45.512 [2024-11-18 12:06:11.071238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.512 [2024-11-18 12:06:11.071298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.512 qpair failed and we were unable to recover it. 00:37:45.512 [2024-11-18 12:06:11.071453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.512 [2024-11-18 12:06:11.071488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.512 qpair failed and we were unable to recover it. 
00:37:45.512 [2024-11-18 12:06:11.071617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.512 [2024-11-18 12:06:11.071652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.512 qpair failed and we were unable to recover it. 00:37:45.512 [2024-11-18 12:06:11.071789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.512 [2024-11-18 12:06:11.071822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.512 qpair failed and we were unable to recover it. 00:37:45.512 [2024-11-18 12:06:11.071990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.512 [2024-11-18 12:06:11.072028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.512 qpair failed and we were unable to recover it. 00:37:45.512 [2024-11-18 12:06:11.072225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.512 [2024-11-18 12:06:11.072286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.512 qpair failed and we were unable to recover it. 00:37:45.512 [2024-11-18 12:06:11.072487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.512 [2024-11-18 12:06:11.072567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.512 qpair failed and we were unable to recover it. 
00:37:45.512 [2024-11-18 12:06:11.072686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.512 [2024-11-18 12:06:11.072724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.512 qpair failed and we were unable to recover it. 00:37:45.512 [2024-11-18 12:06:11.072951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.512 [2024-11-18 12:06:11.073025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.512 qpair failed and we were unable to recover it. 00:37:45.512 [2024-11-18 12:06:11.073215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.512 [2024-11-18 12:06:11.073275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.512 qpair failed and we were unable to recover it. 00:37:45.512 [2024-11-18 12:06:11.073396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.512 [2024-11-18 12:06:11.073435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.512 qpair failed and we were unable to recover it. 00:37:45.512 [2024-11-18 12:06:11.073596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.512 [2024-11-18 12:06:11.073631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.512 qpair failed and we were unable to recover it. 
00:37:45.512 [2024-11-18 12:06:11.073768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.513 [2024-11-18 12:06:11.073803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.513 qpair failed and we were unable to recover it.
00:37:45.513 [2024-11-18 12:06:11.073906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.513 [2024-11-18 12:06:11.073940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.513 qpair failed and we were unable to recover it.
00:37:45.513 [2024-11-18 12:06:11.074096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.513 [2024-11-18 12:06:11.074147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.513 qpair failed and we were unable to recover it.
00:37:45.513 [2024-11-18 12:06:11.074317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.513 [2024-11-18 12:06:11.074390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.513 qpair failed and we were unable to recover it.
00:37:45.513 [2024-11-18 12:06:11.074538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.513 [2024-11-18 12:06:11.074575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.513 qpair failed and we were unable to recover it.
00:37:45.513 [2024-11-18 12:06:11.074683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.513 [2024-11-18 12:06:11.074737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.513 qpair failed and we were unable to recover it.
00:37:45.513 [2024-11-18 12:06:11.074918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.513 [2024-11-18 12:06:11.074957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.513 qpair failed and we were unable to recover it.
00:37:45.513 [2024-11-18 12:06:11.075087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.513 [2024-11-18 12:06:11.075125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.513 qpair failed and we were unable to recover it.
00:37:45.513 [2024-11-18 12:06:11.075314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.513 [2024-11-18 12:06:11.075351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.513 qpair failed and we were unable to recover it.
00:37:45.513 [2024-11-18 12:06:11.075477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.513 [2024-11-18 12:06:11.075541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.513 qpair failed and we were unable to recover it.
00:37:45.513 [2024-11-18 12:06:11.075649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.513 [2024-11-18 12:06:11.075683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.513 qpair failed and we were unable to recover it.
00:37:45.513 [2024-11-18 12:06:11.075839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.513 [2024-11-18 12:06:11.075895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.513 qpair failed and we were unable to recover it.
00:37:45.513 [2024-11-18 12:06:11.076069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.513 [2024-11-18 12:06:11.076125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.513 qpair failed and we were unable to recover it.
00:37:45.513 [2024-11-18 12:06:11.076307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.513 [2024-11-18 12:06:11.076346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.513 qpair failed and we were unable to recover it.
00:37:45.513 [2024-11-18 12:06:11.076498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.513 [2024-11-18 12:06:11.076552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.513 qpair failed and we were unable to recover it.
00:37:45.513 [2024-11-18 12:06:11.076686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.513 [2024-11-18 12:06:11.076721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.513 qpair failed and we were unable to recover it.
00:37:45.513 [2024-11-18 12:06:11.076898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.513 [2024-11-18 12:06:11.076936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.513 qpair failed and we were unable to recover it.
00:37:45.513 [2024-11-18 12:06:11.077090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.513 [2024-11-18 12:06:11.077129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.513 qpair failed and we were unable to recover it.
00:37:45.513 [2024-11-18 12:06:11.077295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.513 [2024-11-18 12:06:11.077332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.513 qpair failed and we were unable to recover it.
00:37:45.513 [2024-11-18 12:06:11.077475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.513 [2024-11-18 12:06:11.077539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.513 qpair failed and we were unable to recover it.
00:37:45.513 [2024-11-18 12:06:11.077700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.513 [2024-11-18 12:06:11.077734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.513 qpair failed and we were unable to recover it.
00:37:45.513 [2024-11-18 12:06:11.077879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.513 [2024-11-18 12:06:11.077930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.513 qpair failed and we were unable to recover it.
00:37:45.513 [2024-11-18 12:06:11.078061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.513 [2024-11-18 12:06:11.078094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.513 qpair failed and we were unable to recover it.
00:37:45.513 [2024-11-18 12:06:11.078260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.513 [2024-11-18 12:06:11.078298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.513 qpair failed and we were unable to recover it.
00:37:45.513 [2024-11-18 12:06:11.078462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.513 [2024-11-18 12:06:11.078506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.513 qpair failed and we were unable to recover it.
00:37:45.513 [2024-11-18 12:06:11.078638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.513 [2024-11-18 12:06:11.078674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.513 qpair failed and we were unable to recover it.
00:37:45.513 [2024-11-18 12:06:11.078828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.513 [2024-11-18 12:06:11.078865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.513 qpair failed and we were unable to recover it.
00:37:45.513 [2024-11-18 12:06:11.079067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.513 [2024-11-18 12:06:11.079105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.513 qpair failed and we were unable to recover it.
00:37:45.513 [2024-11-18 12:06:11.079247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.513 [2024-11-18 12:06:11.079284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.513 qpair failed and we were unable to recover it.
00:37:45.513 [2024-11-18 12:06:11.079470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.513 [2024-11-18 12:06:11.079519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.513 qpair failed and we were unable to recover it.
00:37:45.513 [2024-11-18 12:06:11.079679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.513 [2024-11-18 12:06:11.079727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.513 qpair failed and we were unable to recover it.
00:37:45.513 [2024-11-18 12:06:11.079896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.513 [2024-11-18 12:06:11.079951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.513 qpair failed and we were unable to recover it.
00:37:45.513 [2024-11-18 12:06:11.080102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.513 [2024-11-18 12:06:11.080155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.513 qpair failed and we were unable to recover it.
00:37:45.513 [2024-11-18 12:06:11.080283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.513 [2024-11-18 12:06:11.080359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.513 qpair failed and we were unable to recover it.
00:37:45.513 [2024-11-18 12:06:11.080512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.513 [2024-11-18 12:06:11.080547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.513 qpair failed and we were unable to recover it.
00:37:45.513 [2024-11-18 12:06:11.080694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.513 [2024-11-18 12:06:11.080730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.513 qpair failed and we were unable to recover it.
00:37:45.513 [2024-11-18 12:06:11.080844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.513 [2024-11-18 12:06:11.080879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.513 qpair failed and we were unable to recover it.
00:37:45.513 [2024-11-18 12:06:11.081013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.513 [2024-11-18 12:06:11.081048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.513 qpair failed and we were unable to recover it.
00:37:45.514 [2024-11-18 12:06:11.081188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.514 [2024-11-18 12:06:11.081223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.514 qpair failed and we were unable to recover it.
00:37:45.514 [2024-11-18 12:06:11.081383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.514 [2024-11-18 12:06:11.081417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.514 qpair failed and we were unable to recover it.
00:37:45.514 [2024-11-18 12:06:11.081534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.514 [2024-11-18 12:06:11.081568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.514 qpair failed and we were unable to recover it.
00:37:45.514 [2024-11-18 12:06:11.081697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.514 [2024-11-18 12:06:11.081730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.514 qpair failed and we were unable to recover it.
00:37:45.514 [2024-11-18 12:06:11.081863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.514 [2024-11-18 12:06:11.081898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.514 qpair failed and we were unable to recover it.
00:37:45.514 [2024-11-18 12:06:11.082048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.514 [2024-11-18 12:06:11.082102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.514 qpair failed and we were unable to recover it.
00:37:45.514 [2024-11-18 12:06:11.082258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.514 [2024-11-18 12:06:11.082294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.514 qpair failed and we were unable to recover it.
00:37:45.514 [2024-11-18 12:06:11.082411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.514 [2024-11-18 12:06:11.082457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.514 qpair failed and we were unable to recover it.
00:37:45.514 [2024-11-18 12:06:11.082606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.514 [2024-11-18 12:06:11.082659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.514 qpair failed and we were unable to recover it.
00:37:45.514 [2024-11-18 12:06:11.082794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.514 [2024-11-18 12:06:11.082850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.514 qpair failed and we were unable to recover it.
00:37:45.514 [2024-11-18 12:06:11.083030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.514 [2024-11-18 12:06:11.083092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.514 qpair failed and we were unable to recover it.
00:37:45.514 [2024-11-18 12:06:11.083227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.514 [2024-11-18 12:06:11.083261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.514 qpair failed and we were unable to recover it.
00:37:45.514 [2024-11-18 12:06:11.083379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.514 [2024-11-18 12:06:11.083414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.514 qpair failed and we were unable to recover it.
00:37:45.514 [2024-11-18 12:06:11.083577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.514 [2024-11-18 12:06:11.083625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.514 qpair failed and we were unable to recover it.
00:37:45.514 [2024-11-18 12:06:11.083761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.514 [2024-11-18 12:06:11.083797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.514 qpair failed and we were unable to recover it.
00:37:45.514 [2024-11-18 12:06:11.083896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.514 [2024-11-18 12:06:11.083929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.514 qpair failed and we were unable to recover it.
00:37:45.514 [2024-11-18 12:06:11.084065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.514 [2024-11-18 12:06:11.084098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.514 qpair failed and we were unable to recover it.
00:37:45.514 [2024-11-18 12:06:11.084205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.514 [2024-11-18 12:06:11.084238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.514 qpair failed and we were unable to recover it.
00:37:45.514 [2024-11-18 12:06:11.084353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.514 [2024-11-18 12:06:11.084388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.514 qpair failed and we were unable to recover it.
00:37:45.514 [2024-11-18 12:06:11.084543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.514 [2024-11-18 12:06:11.084591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.514 qpair failed and we were unable to recover it.
00:37:45.514 [2024-11-18 12:06:11.084709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.514 [2024-11-18 12:06:11.084745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.514 qpair failed and we were unable to recover it.
00:37:45.514 [2024-11-18 12:06:11.084862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.514 [2024-11-18 12:06:11.084900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.514 qpair failed and we were unable to recover it.
00:37:45.514 [2024-11-18 12:06:11.085030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.514 [2024-11-18 12:06:11.085068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.514 qpair failed and we were unable to recover it.
00:37:45.514 [2024-11-18 12:06:11.085183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.514 [2024-11-18 12:06:11.085232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.514 qpair failed and we were unable to recover it.
00:37:45.514 [2024-11-18 12:06:11.085387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.514 [2024-11-18 12:06:11.085420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.514 qpair failed and we were unable to recover it.
00:37:45.514 [2024-11-18 12:06:11.085555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.514 [2024-11-18 12:06:11.085591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.514 qpair failed and we were unable to recover it.
00:37:45.514 [2024-11-18 12:06:11.085693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.514 [2024-11-18 12:06:11.085727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.514 qpair failed and we were unable to recover it.
00:37:45.514 [2024-11-18 12:06:11.085853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.514 [2024-11-18 12:06:11.085890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.514 qpair failed and we were unable to recover it.
00:37:45.514 [2024-11-18 12:06:11.086028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.514 [2024-11-18 12:06:11.086065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.514 qpair failed and we were unable to recover it.
00:37:45.514 [2024-11-18 12:06:11.086255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.514 [2024-11-18 12:06:11.086293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.514 qpair failed and we were unable to recover it.
00:37:45.514 [2024-11-18 12:06:11.086447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.514 [2024-11-18 12:06:11.086480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.514 qpair failed and we were unable to recover it.
00:37:45.514 [2024-11-18 12:06:11.086600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.514 [2024-11-18 12:06:11.086634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.514 qpair failed and we were unable to recover it.
00:37:45.514 [2024-11-18 12:06:11.086774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.514 [2024-11-18 12:06:11.086808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.514 qpair failed and we were unable to recover it.
00:37:45.514 [2024-11-18 12:06:11.086931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.514 [2024-11-18 12:06:11.086968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.514 qpair failed and we were unable to recover it.
00:37:45.514 [2024-11-18 12:06:11.087081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.514 [2024-11-18 12:06:11.087120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.514 qpair failed and we were unable to recover it.
00:37:45.514 [2024-11-18 12:06:11.087274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.514 [2024-11-18 12:06:11.087312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.514 qpair failed and we were unable to recover it.
00:37:45.514 [2024-11-18 12:06:11.087441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.514 [2024-11-18 12:06:11.087474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.514 qpair failed and we were unable to recover it.
00:37:45.514 [2024-11-18 12:06:11.087650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.515 [2024-11-18 12:06:11.087684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.515 qpair failed and we were unable to recover it.
00:37:45.515 [2024-11-18 12:06:11.087822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.515 [2024-11-18 12:06:11.087876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.515 qpair failed and we were unable to recover it.
00:37:45.515 [2024-11-18 12:06:11.087998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.515 [2024-11-18 12:06:11.088051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.515 qpair failed and we were unable to recover it.
00:37:45.515 [2024-11-18 12:06:11.088197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.515 [2024-11-18 12:06:11.088234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.515 qpair failed and we were unable to recover it.
00:37:45.515 [2024-11-18 12:06:11.088381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.515 [2024-11-18 12:06:11.088430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.515 qpair failed and we were unable to recover it.
00:37:45.515 [2024-11-18 12:06:11.088595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.515 [2024-11-18 12:06:11.088629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.515 qpair failed and we were unable to recover it.
00:37:45.515 [2024-11-18 12:06:11.088760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.515 [2024-11-18 12:06:11.088794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.515 qpair failed and we were unable to recover it.
00:37:45.515 [2024-11-18 12:06:11.088904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.515 [2024-11-18 12:06:11.088938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.515 qpair failed and we were unable to recover it.
00:37:45.515 [2024-11-18 12:06:11.089088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.515 [2024-11-18 12:06:11.089131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.515 qpair failed and we were unable to recover it.
00:37:45.515 [2024-11-18 12:06:11.089250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.515 [2024-11-18 12:06:11.089288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.515 qpair failed and we were unable to recover it.
00:37:45.515 [2024-11-18 12:06:11.089404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.515 [2024-11-18 12:06:11.089454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.515 qpair failed and we were unable to recover it.
00:37:45.515 [2024-11-18 12:06:11.089581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.515 [2024-11-18 12:06:11.089615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.515 qpair failed and we were unable to recover it.
00:37:45.515 [2024-11-18 12:06:11.089712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.515 [2024-11-18 12:06:11.089746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.515 qpair failed and we were unable to recover it.
00:37:45.515 [2024-11-18 12:06:11.089892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.515 [2024-11-18 12:06:11.089929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.515 qpair failed and we were unable to recover it.
00:37:45.515 [2024-11-18 12:06:11.090078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.515 [2024-11-18 12:06:11.090115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.515 qpair failed and we were unable to recover it.
00:37:45.515 [2024-11-18 12:06:11.090264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.515 [2024-11-18 12:06:11.090303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.515 qpair failed and we were unable to recover it.
00:37:45.515 [2024-11-18 12:06:11.090424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.515 [2024-11-18 12:06:11.090461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.515 qpair failed and we were unable to recover it.
00:37:45.515 [2024-11-18 12:06:11.090669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.515 [2024-11-18 12:06:11.090718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.515 qpair failed and we were unable to recover it.
00:37:45.515 [2024-11-18 12:06:11.090859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.515 [2024-11-18 12:06:11.090893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.515 qpair failed and we were unable to recover it.
00:37:45.515 [2024-11-18 12:06:11.091036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.515 [2024-11-18 12:06:11.091070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.515 qpair failed and we were unable to recover it.
00:37:45.515 [2024-11-18 12:06:11.091203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.515 [2024-11-18 12:06:11.091255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.515 qpair failed and we were unable to recover it.
00:37:45.515 [2024-11-18 12:06:11.091370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.515 [2024-11-18 12:06:11.091408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.515 qpair failed and we were unable to recover it.
00:37:45.515 [2024-11-18 12:06:11.091581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.515 [2024-11-18 12:06:11.091615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.515 qpair failed and we were unable to recover it.
00:37:45.515 [2024-11-18 12:06:11.091723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.515 [2024-11-18 12:06:11.091757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.515 qpair failed and we were unable to recover it.
00:37:45.515 [2024-11-18 12:06:11.091877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.515 [2024-11-18 12:06:11.091914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.515 qpair failed and we were unable to recover it.
00:37:45.515 [2024-11-18 12:06:11.092049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.515 [2024-11-18 12:06:11.092090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.515 qpair failed and we were unable to recover it.
00:37:45.515 [2024-11-18 12:06:11.092233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.515 [2024-11-18 12:06:11.092271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.515 qpair failed and we were unable to recover it.
00:37:45.515 [2024-11-18 12:06:11.092409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.515 [2024-11-18 12:06:11.092443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.515 qpair failed and we were unable to recover it.
00:37:45.515 [2024-11-18 12:06:11.092555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.515 [2024-11-18 12:06:11.092589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.515 qpair failed and we were unable to recover it.
00:37:45.515 [2024-11-18 12:06:11.092721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.515 [2024-11-18 12:06:11.092754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.515 qpair failed and we were unable to recover it.
00:37:45.515 [2024-11-18 12:06:11.092907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.515 [2024-11-18 12:06:11.092941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.515 qpair failed and we were unable to recover it.
00:37:45.515 [2024-11-18 12:06:11.093098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.515 [2024-11-18 12:06:11.093135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.515 qpair failed and we were unable to recover it.
00:37:45.515 [2024-11-18 12:06:11.093260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.515 [2024-11-18 12:06:11.093299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.515 qpair failed and we were unable to recover it.
00:37:45.516 [2024-11-18 12:06:11.093425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.516 [2024-11-18 12:06:11.093459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.516 qpair failed and we were unable to recover it.
00:37:45.516 [2024-11-18 12:06:11.093603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.516 [2024-11-18 12:06:11.093637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.516 qpair failed and we were unable to recover it.
00:37:45.516 [2024-11-18 12:06:11.093756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.516 [2024-11-18 12:06:11.093816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.516 qpair failed and we were unable to recover it.
00:37:45.516 [2024-11-18 12:06:11.093969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.516 [2024-11-18 12:06:11.094002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.516 qpair failed and we were unable to recover it.
00:37:45.516 [2024-11-18 12:06:11.094142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.516 [2024-11-18 12:06:11.094176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.516 qpair failed and we were unable to recover it.
00:37:45.516 [2024-11-18 12:06:11.094283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.516 [2024-11-18 12:06:11.094316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.516 qpair failed and we were unable to recover it.
00:37:45.516 [2024-11-18 12:06:11.094476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.516 [2024-11-18 12:06:11.094530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.516 qpair failed and we were unable to recover it.
00:37:45.516 [2024-11-18 12:06:11.094668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.516 [2024-11-18 12:06:11.094701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.516 qpair failed and we were unable to recover it. 00:37:45.516 [2024-11-18 12:06:11.094848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.516 [2024-11-18 12:06:11.094885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.516 qpair failed and we were unable to recover it. 00:37:45.516 [2024-11-18 12:06:11.095062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.516 [2024-11-18 12:06:11.095099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.516 qpair failed and we were unable to recover it. 00:37:45.516 [2024-11-18 12:06:11.095212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.516 [2024-11-18 12:06:11.095249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.516 qpair failed and we were unable to recover it. 00:37:45.516 [2024-11-18 12:06:11.095405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.516 [2024-11-18 12:06:11.095439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.516 qpair failed and we were unable to recover it. 
00:37:45.516 [2024-11-18 12:06:11.095625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.516 [2024-11-18 12:06:11.095660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.516 qpair failed and we were unable to recover it. 00:37:45.516 [2024-11-18 12:06:11.095790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.516 [2024-11-18 12:06:11.095827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.516 qpair failed and we were unable to recover it. 00:37:45.516 [2024-11-18 12:06:11.096024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.516 [2024-11-18 12:06:11.096061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.516 qpair failed and we were unable to recover it. 00:37:45.516 [2024-11-18 12:06:11.096285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.516 [2024-11-18 12:06:11.096354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.516 qpair failed and we were unable to recover it. 00:37:45.516 [2024-11-18 12:06:11.096551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.516 [2024-11-18 12:06:11.096586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.516 qpair failed and we were unable to recover it. 
00:37:45.516 [2024-11-18 12:06:11.096722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.516 [2024-11-18 12:06:11.096755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.516 qpair failed and we were unable to recover it. 00:37:45.516 [2024-11-18 12:06:11.096892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.516 [2024-11-18 12:06:11.096926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.516 qpair failed and we were unable to recover it. 00:37:45.516 [2024-11-18 12:06:11.097056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.516 [2024-11-18 12:06:11.097109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.516 qpair failed and we were unable to recover it. 00:37:45.516 [2024-11-18 12:06:11.097294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.516 [2024-11-18 12:06:11.097332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.516 qpair failed and we were unable to recover it. 00:37:45.516 [2024-11-18 12:06:11.097476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.516 [2024-11-18 12:06:11.097538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.516 qpair failed and we were unable to recover it. 
00:37:45.516 [2024-11-18 12:06:11.097670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.516 [2024-11-18 12:06:11.097708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.516 qpair failed and we were unable to recover it. 00:37:45.516 [2024-11-18 12:06:11.097873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.516 [2024-11-18 12:06:11.097926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.516 qpair failed and we were unable to recover it. 00:37:45.516 [2024-11-18 12:06:11.098031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.516 [2024-11-18 12:06:11.098065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.516 qpair failed and we were unable to recover it. 00:37:45.516 [2024-11-18 12:06:11.098215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.516 [2024-11-18 12:06:11.098267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.516 qpair failed and we were unable to recover it. 00:37:45.516 [2024-11-18 12:06:11.098410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.516 [2024-11-18 12:06:11.098445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.516 qpair failed and we were unable to recover it. 
00:37:45.516 [2024-11-18 12:06:11.098582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.516 [2024-11-18 12:06:11.098630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.516 qpair failed and we were unable to recover it. 00:37:45.516 [2024-11-18 12:06:11.098796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.516 [2024-11-18 12:06:11.098846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.516 qpair failed and we were unable to recover it. 00:37:45.516 [2024-11-18 12:06:11.099082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.516 [2024-11-18 12:06:11.099142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.516 qpair failed and we were unable to recover it. 00:37:45.516 [2024-11-18 12:06:11.099381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.516 [2024-11-18 12:06:11.099444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.516 qpair failed and we were unable to recover it. 00:37:45.516 [2024-11-18 12:06:11.099591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.516 [2024-11-18 12:06:11.099625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.516 qpair failed and we were unable to recover it. 
00:37:45.516 [2024-11-18 12:06:11.099752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.516 [2024-11-18 12:06:11.099790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.516 qpair failed and we were unable to recover it. 00:37:45.516 [2024-11-18 12:06:11.099959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.516 [2024-11-18 12:06:11.100025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.516 qpair failed and we were unable to recover it. 00:37:45.516 [2024-11-18 12:06:11.100195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.516 [2024-11-18 12:06:11.100232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.516 qpair failed and we were unable to recover it. 00:37:45.516 [2024-11-18 12:06:11.100342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.516 [2024-11-18 12:06:11.100379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.516 qpair failed and we were unable to recover it. 00:37:45.516 [2024-11-18 12:06:11.100556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.516 [2024-11-18 12:06:11.100604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.516 qpair failed and we were unable to recover it. 
00:37:45.516 [2024-11-18 12:06:11.100766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.517 [2024-11-18 12:06:11.100820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.517 qpair failed and we were unable to recover it. 00:37:45.517 [2024-11-18 12:06:11.101069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.517 [2024-11-18 12:06:11.101125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.517 qpair failed and we were unable to recover it. 00:37:45.517 [2024-11-18 12:06:11.101268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.517 [2024-11-18 12:06:11.101302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.517 qpair failed and we were unable to recover it. 00:37:45.517 [2024-11-18 12:06:11.101433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.517 [2024-11-18 12:06:11.101467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.517 qpair failed and we were unable to recover it. 00:37:45.517 [2024-11-18 12:06:11.101621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.517 [2024-11-18 12:06:11.101674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.517 qpair failed and we were unable to recover it. 
00:37:45.517 [2024-11-18 12:06:11.101828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.517 [2024-11-18 12:06:11.101867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.517 qpair failed and we were unable to recover it. 00:37:45.517 [2024-11-18 12:06:11.101989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.517 [2024-11-18 12:06:11.102027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.517 qpair failed and we were unable to recover it. 00:37:45.517 [2024-11-18 12:06:11.102276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.517 [2024-11-18 12:06:11.102335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.517 qpair failed and we were unable to recover it. 00:37:45.517 [2024-11-18 12:06:11.102459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.517 [2024-11-18 12:06:11.102508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.517 qpair failed and we were unable to recover it. 00:37:45.517 [2024-11-18 12:06:11.102639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.517 [2024-11-18 12:06:11.102673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.517 qpair failed and we were unable to recover it. 
00:37:45.517 [2024-11-18 12:06:11.102845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.517 [2024-11-18 12:06:11.102882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.517 qpair failed and we were unable to recover it. 00:37:45.517 [2024-11-18 12:06:11.103101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.517 [2024-11-18 12:06:11.103138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.517 qpair failed and we were unable to recover it. 00:37:45.517 [2024-11-18 12:06:11.103286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.517 [2024-11-18 12:06:11.103323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.517 qpair failed and we were unable to recover it. 00:37:45.517 [2024-11-18 12:06:11.103495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.517 [2024-11-18 12:06:11.103532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.517 qpair failed and we were unable to recover it. 00:37:45.517 [2024-11-18 12:06:11.103684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.517 [2024-11-18 12:06:11.103732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.517 qpair failed and we were unable to recover it. 
00:37:45.517 [2024-11-18 12:06:11.103917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.517 [2024-11-18 12:06:11.103958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.517 qpair failed and we were unable to recover it. 00:37:45.517 [2024-11-18 12:06:11.104140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.517 [2024-11-18 12:06:11.104197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.517 qpair failed and we were unable to recover it. 00:37:45.517 [2024-11-18 12:06:11.104341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.517 [2024-11-18 12:06:11.104394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.517 qpair failed and we were unable to recover it. 00:37:45.517 [2024-11-18 12:06:11.104512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.517 [2024-11-18 12:06:11.104568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.517 qpair failed and we were unable to recover it. 00:37:45.517 [2024-11-18 12:06:11.104705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.517 [2024-11-18 12:06:11.104740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.517 qpair failed and we were unable to recover it. 
00:37:45.517 [2024-11-18 12:06:11.104966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.517 [2024-11-18 12:06:11.105057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.517 qpair failed and we were unable to recover it. 00:37:45.517 [2024-11-18 12:06:11.105182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.517 [2024-11-18 12:06:11.105218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.517 qpair failed and we were unable to recover it. 00:37:45.517 [2024-11-18 12:06:11.105390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.517 [2024-11-18 12:06:11.105426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.517 qpair failed and we were unable to recover it. 00:37:45.517 [2024-11-18 12:06:11.105590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.517 [2024-11-18 12:06:11.105625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.517 qpair failed and we were unable to recover it. 00:37:45.517 [2024-11-18 12:06:11.105736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.517 [2024-11-18 12:06:11.105769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.517 qpair failed and we were unable to recover it. 
00:37:45.517 [2024-11-18 12:06:11.105909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.517 [2024-11-18 12:06:11.105942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.517 qpair failed and we were unable to recover it. 00:37:45.517 [2024-11-18 12:06:11.106090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.517 [2024-11-18 12:06:11.106123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.517 qpair failed and we were unable to recover it. 00:37:45.517 [2024-11-18 12:06:11.106288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.517 [2024-11-18 12:06:11.106325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.517 qpair failed and we were unable to recover it. 00:37:45.517 [2024-11-18 12:06:11.106442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.517 [2024-11-18 12:06:11.106479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.517 qpair failed and we were unable to recover it. 00:37:45.517 [2024-11-18 12:06:11.106644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.517 [2024-11-18 12:06:11.106677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.517 qpair failed and we were unable to recover it. 
00:37:45.517 [2024-11-18 12:06:11.106785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.517 [2024-11-18 12:06:11.106835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.517 qpair failed and we were unable to recover it. 00:37:45.517 [2024-11-18 12:06:11.106980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.517 [2024-11-18 12:06:11.107017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.517 qpair failed and we were unable to recover it. 00:37:45.517 [2024-11-18 12:06:11.107218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.517 [2024-11-18 12:06:11.107256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.517 qpair failed and we were unable to recover it. 00:37:45.517 [2024-11-18 12:06:11.107397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.517 [2024-11-18 12:06:11.107434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.517 qpair failed and we were unable to recover it. 00:37:45.517 [2024-11-18 12:06:11.107593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.517 [2024-11-18 12:06:11.107627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.517 qpair failed and we were unable to recover it. 
00:37:45.517 [2024-11-18 12:06:11.107735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.517 [2024-11-18 12:06:11.107768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.517 qpair failed and we were unable to recover it. 00:37:45.517 [2024-11-18 12:06:11.107907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.517 [2024-11-18 12:06:11.107958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.517 qpair failed and we were unable to recover it. 00:37:45.517 [2024-11-18 12:06:11.108101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.517 [2024-11-18 12:06:11.108138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.517 qpair failed and we were unable to recover it. 00:37:45.518 [2024-11-18 12:06:11.108295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.518 [2024-11-18 12:06:11.108346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.518 qpair failed and we were unable to recover it. 00:37:45.518 [2024-11-18 12:06:11.108498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.518 [2024-11-18 12:06:11.108552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.518 qpair failed and we were unable to recover it. 
00:37:45.518 [2024-11-18 12:06:11.108686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.518 [2024-11-18 12:06:11.108719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.518 qpair failed and we were unable to recover it. 00:37:45.518 [2024-11-18 12:06:11.108901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.518 [2024-11-18 12:06:11.108956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.518 qpair failed and we were unable to recover it. 00:37:45.518 [2024-11-18 12:06:11.109069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.518 [2024-11-18 12:06:11.109106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.518 qpair failed and we were unable to recover it. 00:37:45.518 [2024-11-18 12:06:11.109228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.518 [2024-11-18 12:06:11.109265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.518 qpair failed and we were unable to recover it. 00:37:45.518 [2024-11-18 12:06:11.109441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.518 [2024-11-18 12:06:11.109479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.518 qpair failed and we were unable to recover it. 
00:37:45.518 [2024-11-18 12:06:11.109619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.518 [2024-11-18 12:06:11.109653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.518 qpair failed and we were unable to recover it. 00:37:45.518 [2024-11-18 12:06:11.109804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.518 [2024-11-18 12:06:11.109842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.518 qpair failed and we were unable to recover it. 00:37:45.518 [2024-11-18 12:06:11.109972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.518 [2024-11-18 12:06:11.110023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.518 qpair failed and we were unable to recover it. 00:37:45.518 [2024-11-18 12:06:11.110160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.518 [2024-11-18 12:06:11.110198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.518 qpair failed and we were unable to recover it. 00:37:45.518 [2024-11-18 12:06:11.110347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.518 [2024-11-18 12:06:11.110385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.518 qpair failed and we were unable to recover it. 
00:37:45.518 [2024-11-18 12:06:11.110510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.518 [2024-11-18 12:06:11.110545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.518 qpair failed and we were unable to recover it. 00:37:45.518 [2024-11-18 12:06:11.110655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.518 [2024-11-18 12:06:11.110688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.518 qpair failed and we were unable to recover it. 00:37:45.518 [2024-11-18 12:06:11.110844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.518 [2024-11-18 12:06:11.110881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.518 qpair failed and we were unable to recover it. 00:37:45.518 [2024-11-18 12:06:11.111053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.518 [2024-11-18 12:06:11.111090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.518 qpair failed and we were unable to recover it. 00:37:45.518 [2024-11-18 12:06:11.111240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.518 [2024-11-18 12:06:11.111277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.518 qpair failed and we were unable to recover it. 
00:37:45.518 [2024-11-18 12:06:11.111426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.518 [2024-11-18 12:06:11.111463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.518 qpair failed and we were unable to recover it. 00:37:45.518 [2024-11-18 12:06:11.111636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.518 [2024-11-18 12:06:11.111695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.518 qpair failed and we were unable to recover it. 00:37:45.518 [2024-11-18 12:06:11.111871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.518 [2024-11-18 12:06:11.111929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.518 qpair failed and we were unable to recover it. 00:37:45.518 [2024-11-18 12:06:11.112075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.518 [2024-11-18 12:06:11.112140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.518 qpair failed and we were unable to recover it. 00:37:45.518 [2024-11-18 12:06:11.112291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.518 [2024-11-18 12:06:11.112329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.518 qpair failed and we were unable to recover it. 
00:37:45.518 [2024-11-18 12:06:11.112454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.518 [2024-11-18 12:06:11.112512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.518 qpair failed and we were unable to recover it. 00:37:45.518 [2024-11-18 12:06:11.112645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.518 [2024-11-18 12:06:11.112683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.518 qpair failed and we were unable to recover it. 00:37:45.518 [2024-11-18 12:06:11.112830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.518 [2024-11-18 12:06:11.112866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.518 qpair failed and we were unable to recover it. 00:37:45.518 [2024-11-18 12:06:11.113003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.518 [2024-11-18 12:06:11.113037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.518 qpair failed and we were unable to recover it. 00:37:45.518 [2024-11-18 12:06:11.113272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.518 [2024-11-18 12:06:11.113327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.518 qpair failed and we were unable to recover it. 
00:37:45.518 [2024-11-18 12:06:11.113528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.518 [2024-11-18 12:06:11.113563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.518 qpair failed and we were unable to recover it. 00:37:45.518 [2024-11-18 12:06:11.113716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.518 [2024-11-18 12:06:11.113768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.518 qpair failed and we were unable to recover it. 00:37:45.518 [2024-11-18 12:06:11.113922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.518 [2024-11-18 12:06:11.113977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.518 qpair failed and we were unable to recover it. 00:37:45.518 [2024-11-18 12:06:11.114106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.518 [2024-11-18 12:06:11.114141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.518 qpair failed and we were unable to recover it. 00:37:45.518 [2024-11-18 12:06:11.114279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.518 [2024-11-18 12:06:11.114313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.518 qpair failed and we were unable to recover it. 
00:37:45.518 [2024-11-18 12:06:11.114448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.518 [2024-11-18 12:06:11.114483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.518 qpair failed and we were unable to recover it. 00:37:45.518 [2024-11-18 12:06:11.114647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.518 [2024-11-18 12:06:11.114701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.518 qpair failed and we were unable to recover it. 00:37:45.518 [2024-11-18 12:06:11.114877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.518 [2024-11-18 12:06:11.114917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.518 qpair failed and we were unable to recover it. 00:37:45.518 [2024-11-18 12:06:11.115064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.518 [2024-11-18 12:06:11.115102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.518 qpair failed and we were unable to recover it. 00:37:45.518 [2024-11-18 12:06:11.115282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.518 [2024-11-18 12:06:11.115349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.518 qpair failed and we were unable to recover it. 
00:37:45.518 [2024-11-18 12:06:11.115472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.518 [2024-11-18 12:06:11.115540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.518 qpair failed and we were unable to recover it. 00:37:45.519 [2024-11-18 12:06:11.115698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.519 [2024-11-18 12:06:11.115751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.519 qpair failed and we were unable to recover it. 00:37:45.519 [2024-11-18 12:06:11.115941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.519 [2024-11-18 12:06:11.115996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.519 qpair failed and we were unable to recover it. 00:37:45.519 [2024-11-18 12:06:11.116128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.519 [2024-11-18 12:06:11.116180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.519 qpair failed and we were unable to recover it. 00:37:45.519 [2024-11-18 12:06:11.116315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.519 [2024-11-18 12:06:11.116350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.519 qpair failed and we were unable to recover it. 
00:37:45.519 [2024-11-18 12:06:11.116498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.519 [2024-11-18 12:06:11.116546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.519 qpair failed and we were unable to recover it. 00:37:45.519 [2024-11-18 12:06:11.116662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.519 [2024-11-18 12:06:11.116716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.519 qpair failed and we were unable to recover it. 00:37:45.519 [2024-11-18 12:06:11.116882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.519 [2024-11-18 12:06:11.116921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.519 qpair failed and we were unable to recover it. 00:37:45.519 [2024-11-18 12:06:11.117069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.519 [2024-11-18 12:06:11.117107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.519 qpair failed and we were unable to recover it. 00:37:45.519 [2024-11-18 12:06:11.117250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.519 [2024-11-18 12:06:11.117287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.519 qpair failed and we were unable to recover it. 
00:37:45.519 [2024-11-18 12:06:11.117446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.519 [2024-11-18 12:06:11.117487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.519 qpair failed and we were unable to recover it. 00:37:45.519 [2024-11-18 12:06:11.117635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.519 [2024-11-18 12:06:11.117676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.519 qpair failed and we were unable to recover it. 00:37:45.519 [2024-11-18 12:06:11.117856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.519 [2024-11-18 12:06:11.117910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.519 qpair failed and we were unable to recover it. 00:37:45.519 [2024-11-18 12:06:11.118093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.519 [2024-11-18 12:06:11.118145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.519 qpair failed and we were unable to recover it. 00:37:45.519 [2024-11-18 12:06:11.118254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.519 [2024-11-18 12:06:11.118289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.519 qpair failed and we were unable to recover it. 
00:37:45.519 [2024-11-18 12:06:11.118418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.519 [2024-11-18 12:06:11.118452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.519 qpair failed and we were unable to recover it. 00:37:45.519 [2024-11-18 12:06:11.118618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.519 [2024-11-18 12:06:11.118657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.519 qpair failed and we were unable to recover it. 00:37:45.519 [2024-11-18 12:06:11.118828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.519 [2024-11-18 12:06:11.118881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.519 qpair failed and we were unable to recover it. 00:37:45.519 [2024-11-18 12:06:11.119011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.519 [2024-11-18 12:06:11.119063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.519 qpair failed and we were unable to recover it. 00:37:45.519 [2024-11-18 12:06:11.119198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.519 [2024-11-18 12:06:11.119233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.519 qpair failed and we were unable to recover it. 
00:37:45.519 [2024-11-18 12:06:11.119370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.519 [2024-11-18 12:06:11.119404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.519 qpair failed and we were unable to recover it. 00:37:45.519 [2024-11-18 12:06:11.119545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.519 [2024-11-18 12:06:11.119580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.519 qpair failed and we were unable to recover it. 00:37:45.519 [2024-11-18 12:06:11.119750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.519 [2024-11-18 12:06:11.119786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.519 qpair failed and we were unable to recover it. 00:37:45.519 [2024-11-18 12:06:11.119911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.519 [2024-11-18 12:06:11.119965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.519 qpair failed and we were unable to recover it. 00:37:45.519 [2024-11-18 12:06:11.120122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.519 [2024-11-18 12:06:11.120158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.519 qpair failed and we were unable to recover it. 
00:37:45.519 [2024-11-18 12:06:11.120299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.519 [2024-11-18 12:06:11.120333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.519 qpair failed and we were unable to recover it. 00:37:45.519 [2024-11-18 12:06:11.120466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.519 [2024-11-18 12:06:11.120509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.519 qpair failed and we were unable to recover it. 00:37:45.519 [2024-11-18 12:06:11.120663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.519 [2024-11-18 12:06:11.120701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.519 qpair failed and we were unable to recover it. 00:37:45.519 [2024-11-18 12:06:11.120874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.519 [2024-11-18 12:06:11.120926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.519 qpair failed and we were unable to recover it. 00:37:45.519 [2024-11-18 12:06:11.121132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.519 [2024-11-18 12:06:11.121194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.519 qpair failed and we were unable to recover it. 
00:37:45.519 [2024-11-18 12:06:11.121314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.519 [2024-11-18 12:06:11.121365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.519 qpair failed and we were unable to recover it. 00:37:45.519 [2024-11-18 12:06:11.121503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.519 [2024-11-18 12:06:11.121538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.519 qpair failed and we were unable to recover it. 00:37:45.519 [2024-11-18 12:06:11.121696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.519 [2024-11-18 12:06:11.121734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.519 qpair failed and we were unable to recover it. 00:37:45.519 [2024-11-18 12:06:11.121852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.519 [2024-11-18 12:06:11.121891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.519 qpair failed and we were unable to recover it. 00:37:45.519 [2024-11-18 12:06:11.122053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.519 [2024-11-18 12:06:11.122090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.519 qpair failed and we were unable to recover it. 
00:37:45.519 [2024-11-18 12:06:11.122220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.519 [2024-11-18 12:06:11.122254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.519 qpair failed and we were unable to recover it. 00:37:45.519 [2024-11-18 12:06:11.122439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.519 [2024-11-18 12:06:11.122476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.519 qpair failed and we were unable to recover it. 00:37:45.519 [2024-11-18 12:06:11.122616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.519 [2024-11-18 12:06:11.122672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.519 qpair failed and we were unable to recover it. 00:37:45.519 [2024-11-18 12:06:11.122790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.520 [2024-11-18 12:06:11.122828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.520 qpair failed and we were unable to recover it. 00:37:45.520 [2024-11-18 12:06:11.123003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.520 [2024-11-18 12:06:11.123041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.520 qpair failed and we were unable to recover it. 
00:37:45.520 [2024-11-18 12:06:11.123194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.520 [2024-11-18 12:06:11.123231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.520 qpair failed and we were unable to recover it. 00:37:45.520 [2024-11-18 12:06:11.123364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.520 [2024-11-18 12:06:11.123400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.520 qpair failed and we were unable to recover it. 00:37:45.520 [2024-11-18 12:06:11.123538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.520 [2024-11-18 12:06:11.123574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.520 qpair failed and we were unable to recover it. 00:37:45.520 [2024-11-18 12:06:11.123729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.520 [2024-11-18 12:06:11.123782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.520 qpair failed and we were unable to recover it. 00:37:45.520 [2024-11-18 12:06:11.123945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.520 [2024-11-18 12:06:11.123979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.520 qpair failed and we were unable to recover it. 
00:37:45.520 [2024-11-18 12:06:11.124099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.520 [2024-11-18 12:06:11.124138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.520 qpair failed and we were unable to recover it. 00:37:45.520 [2024-11-18 12:06:11.124324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.520 [2024-11-18 12:06:11.124359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.520 qpair failed and we were unable to recover it. 00:37:45.520 [2024-11-18 12:06:11.124541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.520 [2024-11-18 12:06:11.124589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.520 qpair failed and we were unable to recover it. 00:37:45.520 [2024-11-18 12:06:11.124760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.520 [2024-11-18 12:06:11.124797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.520 qpair failed and we were unable to recover it. 00:37:45.520 [2024-11-18 12:06:11.125003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.520 [2024-11-18 12:06:11.125063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.520 qpair failed and we were unable to recover it. 
00:37:45.520 [2024-11-18 12:06:11.125198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.520 [2024-11-18 12:06:11.125237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.520 qpair failed and we were unable to recover it. 00:37:45.520 [2024-11-18 12:06:11.125378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.520 [2024-11-18 12:06:11.125416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.520 qpair failed and we were unable to recover it. 00:37:45.520 [2024-11-18 12:06:11.125589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.520 [2024-11-18 12:06:11.125625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.520 qpair failed and we were unable to recover it. 00:37:45.520 [2024-11-18 12:06:11.125864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.520 [2024-11-18 12:06:11.125939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.520 qpair failed and we were unable to recover it. 00:37:45.520 [2024-11-18 12:06:11.126098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.520 [2024-11-18 12:06:11.126138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.520 qpair failed and we were unable to recover it. 
00:37:45.520 [2024-11-18 12:06:11.126308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.520 [2024-11-18 12:06:11.126379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.520 qpair failed and we were unable to recover it. 00:37:45.520 [2024-11-18 12:06:11.126515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.520 [2024-11-18 12:06:11.126550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.520 qpair failed and we were unable to recover it. 00:37:45.520 [2024-11-18 12:06:11.126677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.520 [2024-11-18 12:06:11.126730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.520 qpair failed and we were unable to recover it. 00:37:45.520 [2024-11-18 12:06:11.126875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.520 [2024-11-18 12:06:11.126912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.520 qpair failed and we were unable to recover it. 00:37:45.520 [2024-11-18 12:06:11.127083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.520 [2024-11-18 12:06:11.127136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.520 qpair failed and we were unable to recover it. 
00:37:45.520 [2024-11-18 12:06:11.127263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.520 [2024-11-18 12:06:11.127316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.520 qpair failed and we were unable to recover it. 00:37:45.520 [2024-11-18 12:06:11.127456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.520 [2024-11-18 12:06:11.127500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.520 qpair failed and we were unable to recover it. 00:37:45.520 [2024-11-18 12:06:11.127668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.520 [2024-11-18 12:06:11.127702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.520 qpair failed and we were unable to recover it. 00:37:45.520 [2024-11-18 12:06:11.127835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.520 [2024-11-18 12:06:11.127873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.520 qpair failed and we were unable to recover it. 00:37:45.520 [2024-11-18 12:06:11.128041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.520 [2024-11-18 12:06:11.128079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.520 qpair failed and we were unable to recover it. 
00:37:45.520 [2024-11-18 12:06:11.128232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.520 [2024-11-18 12:06:11.128270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.520 qpair failed and we were unable to recover it. 00:37:45.520 [2024-11-18 12:06:11.128438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.520 [2024-11-18 12:06:11.128476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.520 qpair failed and we were unable to recover it. 00:37:45.520 [2024-11-18 12:06:11.128645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.520 [2024-11-18 12:06:11.128690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.520 qpair failed and we were unable to recover it. 00:37:45.520 [2024-11-18 12:06:11.128842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.520 [2024-11-18 12:06:11.128879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.520 qpair failed and we were unable to recover it. 00:37:45.520 [2024-11-18 12:06:11.129048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.520 [2024-11-18 12:06:11.129086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.520 qpair failed and we were unable to recover it. 
00:37:45.520 [2024-11-18 12:06:11.129340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.520 [2024-11-18 12:06:11.129396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.520 qpair failed and we were unable to recover it. 00:37:45.521 [2024-11-18 12:06:11.129594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.521 [2024-11-18 12:06:11.129631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.521 qpair failed and we were unable to recover it. 00:37:45.521 [2024-11-18 12:06:11.129771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.521 [2024-11-18 12:06:11.129805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.521 qpair failed and we were unable to recover it. 00:37:45.521 [2024-11-18 12:06:11.129960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.521 [2024-11-18 12:06:11.129997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.521 qpair failed and we were unable to recover it. 00:37:45.521 [2024-11-18 12:06:11.130146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.521 [2024-11-18 12:06:11.130184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.521 qpair failed and we were unable to recover it. 
00:37:45.521 [2024-11-18 12:06:11.130358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.521 [2024-11-18 12:06:11.130396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.521 qpair failed and we were unable to recover it.
00:37:45.521 [2024-11-18 12:06:11.130569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.521 [2024-11-18 12:06:11.130618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.521 qpair failed and we were unable to recover it.
00:37:45.521 [2024-11-18 12:06:11.130788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.521 [2024-11-18 12:06:11.130847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.521 qpair failed and we were unable to recover it.
00:37:45.521 [2024-11-18 12:06:11.130976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.521 [2024-11-18 12:06:11.131030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.521 qpair failed and we were unable to recover it.
00:37:45.521 [2024-11-18 12:06:11.131188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.521 [2024-11-18 12:06:11.131226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.521 qpair failed and we were unable to recover it.
00:37:45.521 [2024-11-18 12:06:11.131349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.521 [2024-11-18 12:06:11.131401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.521 qpair failed and we were unable to recover it.
00:37:45.521 [2024-11-18 12:06:11.131561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.521 [2024-11-18 12:06:11.131595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.521 qpair failed and we were unable to recover it.
00:37:45.521 [2024-11-18 12:06:11.131728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.521 [2024-11-18 12:06:11.131762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.521 qpair failed and we were unable to recover it.
00:37:45.521 [2024-11-18 12:06:11.131926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.521 [2024-11-18 12:06:11.131963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.521 qpair failed and we were unable to recover it.
00:37:45.521 [2024-11-18 12:06:11.132132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.521 [2024-11-18 12:06:11.132170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.521 qpair failed and we were unable to recover it.
00:37:45.521 [2024-11-18 12:06:11.132316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.521 [2024-11-18 12:06:11.132353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.521 qpair failed and we were unable to recover it.
00:37:45.521 [2024-11-18 12:06:11.132470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.521 [2024-11-18 12:06:11.132520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.521 qpair failed and we were unable to recover it.
00:37:45.521 [2024-11-18 12:06:11.132675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.521 [2024-11-18 12:06:11.132715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.521 qpair failed and we were unable to recover it.
00:37:45.521 [2024-11-18 12:06:11.132903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.521 [2024-11-18 12:06:11.132940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.521 qpair failed and we were unable to recover it.
00:37:45.521 [2024-11-18 12:06:11.133087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.521 [2024-11-18 12:06:11.133124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.521 qpair failed and we were unable to recover it.
00:37:45.521 [2024-11-18 12:06:11.133253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.521 [2024-11-18 12:06:11.133292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.521 qpair failed and we were unable to recover it.
00:37:45.521 [2024-11-18 12:06:11.133442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.521 [2024-11-18 12:06:11.133484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.521 qpair failed and we were unable to recover it.
00:37:45.521 [2024-11-18 12:06:11.133665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.521 [2024-11-18 12:06:11.133704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.521 qpair failed and we were unable to recover it.
00:37:45.521 [2024-11-18 12:06:11.133862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.521 [2024-11-18 12:06:11.133901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.521 qpair failed and we were unable to recover it.
00:37:45.521 [2024-11-18 12:06:11.134016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.521 [2024-11-18 12:06:11.134054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.521 qpair failed and we were unable to recover it.
00:37:45.521 [2024-11-18 12:06:11.134225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.521 [2024-11-18 12:06:11.134263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.521 qpair failed and we were unable to recover it.
00:37:45.521 [2024-11-18 12:06:11.134403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.521 [2024-11-18 12:06:11.134441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.521 qpair failed and we were unable to recover it.
00:37:45.521 [2024-11-18 12:06:11.134599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.521 [2024-11-18 12:06:11.134634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.521 qpair failed and we were unable to recover it.
00:37:45.521 [2024-11-18 12:06:11.134776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.521 [2024-11-18 12:06:11.134812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.521 qpair failed and we were unable to recover it.
00:37:45.521 [2024-11-18 12:06:11.134972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.521 [2024-11-18 12:06:11.135024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.521 qpair failed and we were unable to recover it.
00:37:45.521 [2024-11-18 12:06:11.135139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.521 [2024-11-18 12:06:11.135173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.521 qpair failed and we were unable to recover it.
00:37:45.521 [2024-11-18 12:06:11.135358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.521 [2024-11-18 12:06:11.135397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.521 qpair failed and we were unable to recover it.
00:37:45.521 [2024-11-18 12:06:11.135509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.521 [2024-11-18 12:06:11.135561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.521 qpair failed and we were unable to recover it.
00:37:45.521 [2024-11-18 12:06:11.135692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.521 [2024-11-18 12:06:11.135726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.521 qpair failed and we were unable to recover it.
00:37:45.521 [2024-11-18 12:06:11.135923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.521 [2024-11-18 12:06:11.135960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.521 qpair failed and we were unable to recover it.
00:37:45.521 [2024-11-18 12:06:11.136112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.521 [2024-11-18 12:06:11.136150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.521 qpair failed and we were unable to recover it.
00:37:45.521 [2024-11-18 12:06:11.136291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.521 [2024-11-18 12:06:11.136341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.521 qpair failed and we were unable to recover it.
00:37:45.521 [2024-11-18 12:06:11.136502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.521 [2024-11-18 12:06:11.136535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.521 qpair failed and we were unable to recover it.
00:37:45.521 [2024-11-18 12:06:11.136670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.522 [2024-11-18 12:06:11.136704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.522 qpair failed and we were unable to recover it.
00:37:45.522 [2024-11-18 12:06:11.136830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.522 [2024-11-18 12:06:11.136864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.522 qpair failed and we were unable to recover it.
00:37:45.522 [2024-11-18 12:06:11.137083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.522 [2024-11-18 12:06:11.137121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.522 qpair failed and we were unable to recover it.
00:37:45.522 [2024-11-18 12:06:11.137240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.522 [2024-11-18 12:06:11.137277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.522 qpair failed and we were unable to recover it.
00:37:45.522 [2024-11-18 12:06:11.137458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.522 [2024-11-18 12:06:11.137504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.522 qpair failed and we were unable to recover it.
00:37:45.522 [2024-11-18 12:06:11.137633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.522 [2024-11-18 12:06:11.137667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.522 qpair failed and we were unable to recover it.
00:37:45.522 [2024-11-18 12:06:11.137799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.522 [2024-11-18 12:06:11.137851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.522 qpair failed and we were unable to recover it.
00:37:45.522 [2024-11-18 12:06:11.138025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.522 [2024-11-18 12:06:11.138091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.522 qpair failed and we were unable to recover it.
00:37:45.522 [2024-11-18 12:06:11.138276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.522 [2024-11-18 12:06:11.138315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.522 qpair failed and we were unable to recover it.
00:37:45.522 [2024-11-18 12:06:11.138470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.522 [2024-11-18 12:06:11.138518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.522 qpair failed and we were unable to recover it.
00:37:45.522 [2024-11-18 12:06:11.138675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.522 [2024-11-18 12:06:11.138710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.522 qpair failed and we were unable to recover it.
00:37:45.522 [2024-11-18 12:06:11.138856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.522 [2024-11-18 12:06:11.138890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.522 qpair failed and we were unable to recover it.
00:37:45.522 [2024-11-18 12:06:11.139002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.522 [2024-11-18 12:06:11.139036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.522 qpair failed and we were unable to recover it.
00:37:45.522 [2024-11-18 12:06:11.139166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.522 [2024-11-18 12:06:11.139219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.522 qpair failed and we were unable to recover it.
00:37:45.522 [2024-11-18 12:06:11.139363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.522 [2024-11-18 12:06:11.139398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.522 qpair failed and we were unable to recover it.
00:37:45.522 [2024-11-18 12:06:11.139502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.522 [2024-11-18 12:06:11.139536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.522 qpair failed and we were unable to recover it.
00:37:45.522 [2024-11-18 12:06:11.139696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.522 [2024-11-18 12:06:11.139730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.522 qpair failed and we were unable to recover it.
00:37:45.522 [2024-11-18 12:06:11.139853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.522 [2024-11-18 12:06:11.139891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.522 qpair failed and we were unable to recover it.
00:37:45.522 [2024-11-18 12:06:11.140041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.522 [2024-11-18 12:06:11.140079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.522 qpair failed and we were unable to recover it.
00:37:45.522 [2024-11-18 12:06:11.140222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.522 [2024-11-18 12:06:11.140261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.522 qpair failed and we were unable to recover it.
00:37:45.522 [2024-11-18 12:06:11.140392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.522 [2024-11-18 12:06:11.140425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.522 qpair failed and we were unable to recover it.
00:37:45.522 [2024-11-18 12:06:11.140534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.522 [2024-11-18 12:06:11.140568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.522 qpair failed and we were unable to recover it.
00:37:45.522 [2024-11-18 12:06:11.140672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.522 [2024-11-18 12:06:11.140712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.522 qpair failed and we were unable to recover it.
00:37:45.522 [2024-11-18 12:06:11.140898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.522 [2024-11-18 12:06:11.140936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.522 qpair failed and we were unable to recover it.
00:37:45.522 [2024-11-18 12:06:11.141051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.522 [2024-11-18 12:06:11.141089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.522 qpair failed and we were unable to recover it.
00:37:45.522 [2024-11-18 12:06:11.141248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.522 [2024-11-18 12:06:11.141286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.522 qpair failed and we were unable to recover it.
00:37:45.522 [2024-11-18 12:06:11.141451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.522 [2024-11-18 12:06:11.141505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.522 qpair failed and we were unable to recover it.
00:37:45.522 [2024-11-18 12:06:11.141622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.522 [2024-11-18 12:06:11.141658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.522 qpair failed and we were unable to recover it.
00:37:45.522 [2024-11-18 12:06:11.141786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.522 [2024-11-18 12:06:11.141826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.522 qpair failed and we were unable to recover it.
00:37:45.522 [2024-11-18 12:06:11.141980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.522 [2024-11-18 12:06:11.142037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.522 qpair failed and we were unable to recover it.
00:37:45.522 [2024-11-18 12:06:11.142154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.522 [2024-11-18 12:06:11.142191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.522 qpair failed and we were unable to recover it.
00:37:45.522 [2024-11-18 12:06:11.142341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.522 [2024-11-18 12:06:11.142378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.522 qpair failed and we were unable to recover it.
00:37:45.522 [2024-11-18 12:06:11.142542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.522 [2024-11-18 12:06:11.142576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.522 qpair failed and we were unable to recover it.
00:37:45.522 [2024-11-18 12:06:11.142739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.522 [2024-11-18 12:06:11.142791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.522 qpair failed and we were unable to recover it.
00:37:45.522 [2024-11-18 12:06:11.142925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.522 [2024-11-18 12:06:11.142978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.522 qpair failed and we were unable to recover it.
00:37:45.522 [2024-11-18 12:06:11.143128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.522 [2024-11-18 12:06:11.143166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.522 qpair failed and we were unable to recover it.
00:37:45.522 [2024-11-18 12:06:11.143343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.522 [2024-11-18 12:06:11.143381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.522 qpair failed and we were unable to recover it.
00:37:45.522 [2024-11-18 12:06:11.143584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.522 [2024-11-18 12:06:11.143633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.522 qpair failed and we were unable to recover it.
00:37:45.522 [2024-11-18 12:06:11.143803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.523 [2024-11-18 12:06:11.143860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.523 qpair failed and we were unable to recover it.
00:37:45.523 [2024-11-18 12:06:11.144050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.523 [2024-11-18 12:06:11.144110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.523 qpair failed and we were unable to recover it.
00:37:45.523 [2024-11-18 12:06:11.144266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.523 [2024-11-18 12:06:11.144318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.523 qpair failed and we were unable to recover it.
00:37:45.523 [2024-11-18 12:06:11.144503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.523 [2024-11-18 12:06:11.144539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.523 qpair failed and we were unable to recover it.
00:37:45.523 [2024-11-18 12:06:11.144722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.523 [2024-11-18 12:06:11.144787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.523 qpair failed and we were unable to recover it.
00:37:45.523 [2024-11-18 12:06:11.144890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.523 [2024-11-18 12:06:11.144924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.523 qpair failed and we were unable to recover it.
00:37:45.523 [2024-11-18 12:06:11.145105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.523 [2024-11-18 12:06:11.145177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.523 qpair failed and we were unable to recover it.
00:37:45.523 [2024-11-18 12:06:11.145331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.523 [2024-11-18 12:06:11.145379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.523 qpair failed and we were unable to recover it.
00:37:45.523 [2024-11-18 12:06:11.145549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.523 [2024-11-18 12:06:11.145589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.523 qpair failed and we were unable to recover it.
00:37:45.523 [2024-11-18 12:06:11.145731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.523 [2024-11-18 12:06:11.145784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.523 qpair failed and we were unable to recover it.
00:37:45.523 [2024-11-18 12:06:11.145953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.523 [2024-11-18 12:06:11.145996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.523 qpair failed and we were unable to recover it.
00:37:45.523 [2024-11-18 12:06:11.146234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.523 [2024-11-18 12:06:11.146293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.523 qpair failed and we were unable to recover it.
00:37:45.523 [2024-11-18 12:06:11.146453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.523 [2024-11-18 12:06:11.146500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.523 qpair failed and we were unable to recover it.
00:37:45.523 [2024-11-18 12:06:11.146634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.523 [2024-11-18 12:06:11.146669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.523 qpair failed and we were unable to recover it.
00:37:45.523 [2024-11-18 12:06:11.146800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.523 [2024-11-18 12:06:11.146838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.523 qpair failed and we were unable to recover it.
00:37:45.523 [2024-11-18 12:06:11.147000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.523 [2024-11-18 12:06:11.147046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.523 qpair failed and we were unable to recover it.
00:37:45.523 [2024-11-18 12:06:11.147224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.523 [2024-11-18 12:06:11.147273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.523 qpair failed and we were unable to recover it.
00:37:45.523 [2024-11-18 12:06:11.147464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.523 [2024-11-18 12:06:11.147523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.523 qpair failed and we were unable to recover it.
00:37:45.523 [2024-11-18 12:06:11.147667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.523 [2024-11-18 12:06:11.147733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.523 qpair failed and we were unable to recover it.
00:37:45.523 [2024-11-18 12:06:11.147905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.523 [2024-11-18 12:06:11.147953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.523 qpair failed and we were unable to recover it.
00:37:45.523 [2024-11-18 12:06:11.148141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.523 [2024-11-18 12:06:11.148204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.523 qpair failed and we were unable to recover it.
00:37:45.523 [2024-11-18 12:06:11.148406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.523 [2024-11-18 12:06:11.148454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.523 qpair failed and we were unable to recover it.
00:37:45.523 [2024-11-18 12:06:11.148644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.523 [2024-11-18 12:06:11.148696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.523 qpair failed and we were unable to recover it.
00:37:45.523 [2024-11-18 12:06:11.148852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.523 [2024-11-18 12:06:11.148924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.523 qpair failed and we were unable to recover it.
00:37:45.523 [2024-11-18 12:06:11.149073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.523 [2024-11-18 12:06:11.149131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.523 qpair failed and we were unable to recover it.
00:37:45.523 [2024-11-18 12:06:11.149326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.523 [2024-11-18 12:06:11.149389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.523 qpair failed and we were unable to recover it.
00:37:45.523 [2024-11-18 12:06:11.149523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.523 [2024-11-18 12:06:11.149562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.523 qpair failed and we were unable to recover it.
00:37:45.523 [2024-11-18 12:06:11.149730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.523 [2024-11-18 12:06:11.149788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.523 qpair failed and we were unable to recover it.
00:37:45.523 [2024-11-18 12:06:11.149975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.523 [2024-11-18 12:06:11.150027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.523 qpair failed and we were unable to recover it.
00:37:45.523 [2024-11-18 12:06:11.150135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.523 [2024-11-18 12:06:11.150171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.523 qpair failed and we were unable to recover it.
00:37:45.523 [2024-11-18 12:06:11.150280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.523 [2024-11-18 12:06:11.150316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.523 qpair failed and we were unable to recover it.
00:37:45.523 [2024-11-18 12:06:11.150430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.523 [2024-11-18 12:06:11.150473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.523 qpair failed and we were unable to recover it.
00:37:45.523 [2024-11-18 12:06:11.150602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.523 [2024-11-18 12:06:11.150637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.523 qpair failed and we were unable to recover it.
00:37:45.523 [2024-11-18 12:06:11.150749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.523 [2024-11-18 12:06:11.150810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.523 qpair failed and we were unable to recover it.
00:37:45.523 [2024-11-18 12:06:11.151065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.523 [2024-11-18 12:06:11.151104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.523 qpair failed and we were unable to recover it.
00:37:45.523 [2024-11-18 12:06:11.151237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.523 [2024-11-18 12:06:11.151271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.523 qpair failed and we were unable to recover it.
00:37:45.523 [2024-11-18 12:06:11.151452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.523 [2024-11-18 12:06:11.151487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.523 qpair failed and we were unable to recover it.
00:37:45.523 [2024-11-18 12:06:11.151637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.523 [2024-11-18 12:06:11.151672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.523 qpair failed and we were unable to recover it.
00:37:45.524 [2024-11-18 12:06:11.151824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.524 [2024-11-18 12:06:11.151879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.524 qpair failed and we were unable to recover it.
00:37:45.524 [2024-11-18 12:06:11.152011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.524 [2024-11-18 12:06:11.152051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.524 qpair failed and we were unable to recover it.
00:37:45.524 [2024-11-18 12:06:11.152222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.524 [2024-11-18 12:06:11.152261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.524 qpair failed and we were unable to recover it.
00:37:45.524 [2024-11-18 12:06:11.152392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.524 [2024-11-18 12:06:11.152430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.524 qpair failed and we were unable to recover it.
00:37:45.524 [2024-11-18 12:06:11.152566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.524 [2024-11-18 12:06:11.152601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.524 qpair failed and we were unable to recover it.
00:37:45.524 [2024-11-18 12:06:11.152747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.524 [2024-11-18 12:06:11.152814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.524 qpair failed and we were unable to recover it.
00:37:45.524 [2024-11-18 12:06:11.152990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.524 [2024-11-18 12:06:11.153029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.524 qpair failed and we were unable to recover it.
00:37:45.524 [2024-11-18 12:06:11.153175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.524 [2024-11-18 12:06:11.153225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.524 qpair failed and we were unable to recover it.
00:37:45.524 [2024-11-18 12:06:11.153337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.524 [2024-11-18 12:06:11.153386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.524 qpair failed and we were unable to recover it.
00:37:45.524 [2024-11-18 12:06:11.153561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.524 [2024-11-18 12:06:11.153599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.524 qpair failed and we were unable to recover it.
00:37:45.524 [2024-11-18 12:06:11.153724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.524 [2024-11-18 12:06:11.153762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.524 qpair failed and we were unable to recover it.
00:37:45.524 [2024-11-18 12:06:11.153906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.524 [2024-11-18 12:06:11.153946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.524 qpair failed and we were unable to recover it.
00:37:45.524 [2024-11-18 12:06:11.154165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.524 [2024-11-18 12:06:11.154204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.524 qpair failed and we were unable to recover it.
00:37:45.524 [2024-11-18 12:06:11.154330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.524 [2024-11-18 12:06:11.154382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.524 qpair failed and we were unable to recover it.
00:37:45.524 [2024-11-18 12:06:11.154499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.524 [2024-11-18 12:06:11.154534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.524 qpair failed and we were unable to recover it.
00:37:45.524 [2024-11-18 12:06:11.154689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.524 [2024-11-18 12:06:11.154729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.524 qpair failed and we were unable to recover it.
00:37:45.524 [2024-11-18 12:06:11.154879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.524 [2024-11-18 12:06:11.154917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.524 qpair failed and we were unable to recover it.
00:37:45.524 [2024-11-18 12:06:11.155057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.524 [2024-11-18 12:06:11.155094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.524 qpair failed and we were unable to recover it.
00:37:45.524 [2024-11-18 12:06:11.155239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.524 [2024-11-18 12:06:11.155278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.524 qpair failed and we were unable to recover it.
00:37:45.524 [2024-11-18 12:06:11.155445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.524 [2024-11-18 12:06:11.155481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.524 qpair failed and we were unable to recover it.
00:37:45.524 [2024-11-18 12:06:11.155625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.524 [2024-11-18 12:06:11.155677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.524 qpair failed and we were unable to recover it.
00:37:45.524 [2024-11-18 12:06:11.155866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.524 [2024-11-18 12:06:11.155905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.524 qpair failed and we were unable to recover it.
00:37:45.524 [2024-11-18 12:06:11.156031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.524 [2024-11-18 12:06:11.156068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.524 qpair failed and we were unable to recover it.
00:37:45.524 [2024-11-18 12:06:11.156242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.524 [2024-11-18 12:06:11.156280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.524 qpair failed and we were unable to recover it.
00:37:45.524 [2024-11-18 12:06:11.156424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.524 [2024-11-18 12:06:11.156463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.524 qpair failed and we were unable to recover it.
00:37:45.524 [2024-11-18 12:06:11.156657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.524 [2024-11-18 12:06:11.156692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.524 qpair failed and we were unable to recover it.
00:37:45.524 [2024-11-18 12:06:11.156845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.524 [2024-11-18 12:06:11.156889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.524 qpair failed and we were unable to recover it.
00:37:45.524 [2024-11-18 12:06:11.157072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.524 [2024-11-18 12:06:11.157109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.524 qpair failed and we were unable to recover it.
00:37:45.524 [2024-11-18 12:06:11.157288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.524 [2024-11-18 12:06:11.157326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.524 qpair failed and we were unable to recover it.
00:37:45.524 [2024-11-18 12:06:11.157475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.524 [2024-11-18 12:06:11.157521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.524 qpair failed and we were unable to recover it.
00:37:45.524 [2024-11-18 12:06:11.157658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.524 [2024-11-18 12:06:11.157693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.524 qpair failed and we were unable to recover it.
00:37:45.524 [2024-11-18 12:06:11.157880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.524 [2024-11-18 12:06:11.157913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.524 qpair failed and we were unable to recover it.
00:37:45.524 [2024-11-18 12:06:11.158174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.524 [2024-11-18 12:06:11.158211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.524 qpair failed and we were unable to recover it.
00:37:45.524 [2024-11-18 12:06:11.158362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.524 [2024-11-18 12:06:11.158400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.524 qpair failed and we were unable to recover it.
00:37:45.525 [2024-11-18 12:06:11.158557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.525 [2024-11-18 12:06:11.158592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.525 qpair failed and we were unable to recover it.
00:37:45.525 [2024-11-18 12:06:11.158754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.525 [2024-11-18 12:06:11.158788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.525 qpair failed and we were unable to recover it.
00:37:45.525 [2024-11-18 12:06:11.158919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.525 [2024-11-18 12:06:11.158958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.525 qpair failed and we were unable to recover it.
00:37:45.525 [2024-11-18 12:06:11.159104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.525 [2024-11-18 12:06:11.159142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.525 qpair failed and we were unable to recover it.
00:37:45.525 [2024-11-18 12:06:11.159339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.525 [2024-11-18 12:06:11.159377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.525 qpair failed and we were unable to recover it.
00:37:45.525 [2024-11-18 12:06:11.159484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.525 [2024-11-18 12:06:11.159543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.525 qpair failed and we were unable to recover it.
00:37:45.525 [2024-11-18 12:06:11.159660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.525 [2024-11-18 12:06:11.159695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.525 qpair failed and we were unable to recover it.
00:37:45.525 [2024-11-18 12:06:11.159829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.525 [2024-11-18 12:06:11.159863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.525 qpair failed and we were unable to recover it.
00:37:45.525 [2024-11-18 12:06:11.160041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.525 [2024-11-18 12:06:11.160080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.525 qpair failed and we were unable to recover it.
00:37:45.525 [2024-11-18 12:06:11.160228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.525 [2024-11-18 12:06:11.160267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.525 qpair failed and we were unable to recover it.
00:37:45.525 [2024-11-18 12:06:11.160448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.525 [2024-11-18 12:06:11.160481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.525 qpair failed and we were unable to recover it.
00:37:45.525 [2024-11-18 12:06:11.160632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.525 [2024-11-18 12:06:11.160667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.525 qpair failed and we were unable to recover it.
00:37:45.525 [2024-11-18 12:06:11.160783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.525 [2024-11-18 12:06:11.160817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.525 qpair failed and we were unable to recover it.
00:37:45.525 [2024-11-18 12:06:11.160975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.525 [2024-11-18 12:06:11.161012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.525 qpair failed and we were unable to recover it.
00:37:45.525 [2024-11-18 12:06:11.161154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.525 [2024-11-18 12:06:11.161191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.525 qpair failed and we were unable to recover it.
00:37:45.525 [2024-11-18 12:06:11.161307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.525 [2024-11-18 12:06:11.161345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.525 qpair failed and we were unable to recover it.
00:37:45.525 [2024-11-18 12:06:11.161538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.525 [2024-11-18 12:06:11.161586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.525 qpair failed and we were unable to recover it.
00:37:45.525 [2024-11-18 12:06:11.161728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.525 [2024-11-18 12:06:11.161781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.525 qpair failed and we were unable to recover it.
00:37:45.525 [2024-11-18 12:06:11.161926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.525 [2024-11-18 12:06:11.161963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.525 qpair failed and we were unable to recover it.
00:37:45.525 [2024-11-18 12:06:11.162088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.525 [2024-11-18 12:06:11.162126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.525 qpair failed and we were unable to recover it.
00:37:45.525 [2024-11-18 12:06:11.162267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.525 [2024-11-18 12:06:11.162306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.525 qpair failed and we were unable to recover it.
00:37:45.525 [2024-11-18 12:06:11.162445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.525 [2024-11-18 12:06:11.162484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.525 qpair failed and we were unable to recover it.
00:37:45.525 [2024-11-18 12:06:11.162651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.525 [2024-11-18 12:06:11.162685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.525 qpair failed and we were unable to recover it.
00:37:45.525 [2024-11-18 12:06:11.162813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.525 [2024-11-18 12:06:11.162851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.525 qpair failed and we were unable to recover it.
00:37:45.525 [2024-11-18 12:06:11.163024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.525 [2024-11-18 12:06:11.163062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.525 qpair failed and we were unable to recover it.
00:37:45.525 [2024-11-18 12:06:11.163190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.525 [2024-11-18 12:06:11.163230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.525 qpair failed and we were unable to recover it.
00:37:45.525 [2024-11-18 12:06:11.163393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.525 [2024-11-18 12:06:11.163433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.525 qpair failed and we were unable to recover it.
00:37:45.525 [2024-11-18 12:06:11.163593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.525 [2024-11-18 12:06:11.163629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.525 qpair failed and we were unable to recover it.
00:37:45.525 [2024-11-18 12:06:11.163761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.525 [2024-11-18 12:06:11.163816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.525 qpair failed and we were unable to recover it.
00:37:45.525 [2024-11-18 12:06:11.164002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.525 [2024-11-18 12:06:11.164055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.525 qpair failed and we were unable to recover it.
00:37:45.525 [2024-11-18 12:06:11.164204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.525 [2024-11-18 12:06:11.164259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.525 qpair failed and we were unable to recover it.
00:37:45.525 [2024-11-18 12:06:11.164389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.525 [2024-11-18 12:06:11.164424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.525 qpair failed and we were unable to recover it.
00:37:45.525 [2024-11-18 12:06:11.164589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.525 [2024-11-18 12:06:11.164644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.525 qpair failed and we were unable to recover it.
00:37:45.525 [2024-11-18 12:06:11.164811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.525 [2024-11-18 12:06:11.164851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.525 qpair failed and we were unable to recover it.
00:37:45.525 [2024-11-18 12:06:11.165057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.525 [2024-11-18 12:06:11.165096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.525 qpair failed and we were unable to recover it.
00:37:45.525 [2024-11-18 12:06:11.165268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.525 [2024-11-18 12:06:11.165306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.525 qpair failed and we were unable to recover it.
00:37:45.525 [2024-11-18 12:06:11.165500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.525 [2024-11-18 12:06:11.165555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.526 qpair failed and we were unable to recover it.
00:37:45.526 [2024-11-18 12:06:11.165692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.526 [2024-11-18 12:06:11.165740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.526 qpair failed and we were unable to recover it.
00:37:45.526 [2024-11-18 12:06:11.165883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.526 [2024-11-18 12:06:11.165923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.526 qpair failed and we were unable to recover it.
00:37:45.526 [2024-11-18 12:06:11.166103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.526 [2024-11-18 12:06:11.166141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.526 qpair failed and we were unable to recover it.
00:37:45.526 [2024-11-18 12:06:11.166256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.526 [2024-11-18 12:06:11.166293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.526 qpair failed and we were unable to recover it.
00:37:45.526 [2024-11-18 12:06:11.166455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.526 [2024-11-18 12:06:11.166495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.526 qpair failed and we were unable to recover it.
00:37:45.526 [2024-11-18 12:06:11.166615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.526 [2024-11-18 12:06:11.166649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.526 qpair failed and we were unable to recover it.
00:37:45.526 [2024-11-18 12:06:11.166776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.526 [2024-11-18 12:06:11.166810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.526 qpair failed and we were unable to recover it.
00:37:45.526 [2024-11-18 12:06:11.166964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.526 [2024-11-18 12:06:11.167001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.526 qpair failed and we were unable to recover it.
00:37:45.526 [2024-11-18 12:06:11.167137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.526 [2024-11-18 12:06:11.167171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.526 qpair failed and we were unable to recover it.
00:37:45.526 [2024-11-18 12:06:11.167348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.526 [2024-11-18 12:06:11.167388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.526 qpair failed and we were unable to recover it.
00:37:45.526 [2024-11-18 12:06:11.167543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.526 [2024-11-18 12:06:11.167578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.526 qpair failed and we were unable to recover it. 00:37:45.526 [2024-11-18 12:06:11.167716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.526 [2024-11-18 12:06:11.167750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.526 qpair failed and we were unable to recover it. 00:37:45.526 [2024-11-18 12:06:11.167877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.526 [2024-11-18 12:06:11.167916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.526 qpair failed and we were unable to recover it. 00:37:45.526 [2024-11-18 12:06:11.168122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.526 [2024-11-18 12:06:11.168160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.526 qpair failed and we were unable to recover it. 00:37:45.526 [2024-11-18 12:06:11.168339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.526 [2024-11-18 12:06:11.168378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.526 qpair failed and we were unable to recover it. 
00:37:45.526 [2024-11-18 12:06:11.168504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.526 [2024-11-18 12:06:11.168558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.526 qpair failed and we were unable to recover it. 00:37:45.526 [2024-11-18 12:06:11.168676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.526 [2024-11-18 12:06:11.168711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.526 qpair failed and we were unable to recover it. 00:37:45.526 [2024-11-18 12:06:11.168845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.526 [2024-11-18 12:06:11.168897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.526 qpair failed and we were unable to recover it. 00:37:45.526 [2024-11-18 12:06:11.169049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.526 [2024-11-18 12:06:11.169100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.526 qpair failed and we were unable to recover it. 00:37:45.526 [2024-11-18 12:06:11.169285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.526 [2024-11-18 12:06:11.169322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.526 qpair failed and we were unable to recover it. 
00:37:45.526 [2024-11-18 12:06:11.169469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.526 [2024-11-18 12:06:11.169534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.526 qpair failed and we were unable to recover it. 00:37:45.526 [2024-11-18 12:06:11.169649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.526 [2024-11-18 12:06:11.169684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.526 qpair failed and we were unable to recover it. 00:37:45.526 [2024-11-18 12:06:11.169835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.526 [2024-11-18 12:06:11.169874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.526 qpair failed and we were unable to recover it. 00:37:45.526 [2024-11-18 12:06:11.170050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.526 [2024-11-18 12:06:11.170088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.526 qpair failed and we were unable to recover it. 00:37:45.526 [2024-11-18 12:06:11.170202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.526 [2024-11-18 12:06:11.170240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.526 qpair failed and we were unable to recover it. 
00:37:45.526 [2024-11-18 12:06:11.170419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.526 [2024-11-18 12:06:11.170467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.526 qpair failed and we were unable to recover it. 00:37:45.526 [2024-11-18 12:06:11.170600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.526 [2024-11-18 12:06:11.170637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.526 qpair failed and we were unable to recover it. 00:37:45.526 [2024-11-18 12:06:11.170795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.526 [2024-11-18 12:06:11.170835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.526 qpair failed and we were unable to recover it. 00:37:45.526 [2024-11-18 12:06:11.171009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.526 [2024-11-18 12:06:11.171046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.526 qpair failed and we were unable to recover it. 00:37:45.526 [2024-11-18 12:06:11.171195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.526 [2024-11-18 12:06:11.171233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.526 qpair failed and we were unable to recover it. 
00:37:45.526 [2024-11-18 12:06:11.171414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.526 [2024-11-18 12:06:11.171448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.526 qpair failed and we were unable to recover it. 00:37:45.526 [2024-11-18 12:06:11.171592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.526 [2024-11-18 12:06:11.171628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.526 qpair failed and we were unable to recover it. 00:37:45.526 [2024-11-18 12:06:11.171747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.526 [2024-11-18 12:06:11.171801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.526 qpair failed and we were unable to recover it. 00:37:45.526 [2024-11-18 12:06:11.171915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.526 [2024-11-18 12:06:11.171965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.526 qpair failed and we were unable to recover it. 00:37:45.526 [2024-11-18 12:06:11.172140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.526 [2024-11-18 12:06:11.172177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.526 qpair failed and we were unable to recover it. 
00:37:45.526 [2024-11-18 12:06:11.172332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.526 [2024-11-18 12:06:11.172375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.526 qpair failed and we were unable to recover it. 00:37:45.526 [2024-11-18 12:06:11.172560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.526 [2024-11-18 12:06:11.172595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.526 qpair failed and we were unable to recover it. 00:37:45.526 [2024-11-18 12:06:11.172725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.527 [2024-11-18 12:06:11.172778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.527 qpair failed and we were unable to recover it. 00:37:45.527 [2024-11-18 12:06:11.172905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.527 [2024-11-18 12:06:11.172942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.527 qpair failed and we were unable to recover it. 00:37:45.527 [2024-11-18 12:06:11.173075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.527 [2024-11-18 12:06:11.173129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.527 qpair failed and we were unable to recover it. 
00:37:45.527 [2024-11-18 12:06:11.173240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.527 [2024-11-18 12:06:11.173278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.527 qpair failed and we were unable to recover it. 00:37:45.527 [2024-11-18 12:06:11.173425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.527 [2024-11-18 12:06:11.173476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.527 qpair failed and we were unable to recover it. 00:37:45.527 [2024-11-18 12:06:11.173618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.527 [2024-11-18 12:06:11.173652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.527 qpair failed and we were unable to recover it. 00:37:45.527 [2024-11-18 12:06:11.173808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.527 [2024-11-18 12:06:11.173846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.527 qpair failed and we were unable to recover it. 00:37:45.527 [2024-11-18 12:06:11.173990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.527 [2024-11-18 12:06:11.174028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.527 qpair failed and we were unable to recover it. 
00:37:45.527 [2024-11-18 12:06:11.174198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.527 [2024-11-18 12:06:11.174236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.527 qpair failed and we were unable to recover it. 00:37:45.527 [2024-11-18 12:06:11.174381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.527 [2024-11-18 12:06:11.174420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.527 qpair failed and we were unable to recover it. 00:37:45.527 [2024-11-18 12:06:11.174623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.527 [2024-11-18 12:06:11.174658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.527 qpair failed and we were unable to recover it. 00:37:45.527 [2024-11-18 12:06:11.174765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.527 [2024-11-18 12:06:11.174799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.527 qpair failed and we were unable to recover it. 00:37:45.527 [2024-11-18 12:06:11.174941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.527 [2024-11-18 12:06:11.174994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.527 qpair failed and we were unable to recover it. 
00:37:45.527 [2024-11-18 12:06:11.175153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.527 [2024-11-18 12:06:11.175191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.527 qpair failed and we were unable to recover it. 00:37:45.527 [2024-11-18 12:06:11.175385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.527 [2024-11-18 12:06:11.175422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.527 qpair failed and we were unable to recover it. 00:37:45.527 [2024-11-18 12:06:11.175555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.527 [2024-11-18 12:06:11.175589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.527 qpair failed and we were unable to recover it. 00:37:45.527 [2024-11-18 12:06:11.175742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.527 [2024-11-18 12:06:11.175795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.527 qpair failed and we were unable to recover it. 00:37:45.527 [2024-11-18 12:06:11.176002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.527 [2024-11-18 12:06:11.176040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.527 qpair failed and we were unable to recover it. 
00:37:45.527 [2024-11-18 12:06:11.176184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.527 [2024-11-18 12:06:11.176221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.527 qpair failed and we were unable to recover it. 00:37:45.527 [2024-11-18 12:06:11.176378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.527 [2024-11-18 12:06:11.176416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.527 qpair failed and we were unable to recover it. 00:37:45.527 [2024-11-18 12:06:11.176548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.527 [2024-11-18 12:06:11.176582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.527 qpair failed and we were unable to recover it. 00:37:45.527 [2024-11-18 12:06:11.176720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.527 [2024-11-18 12:06:11.176754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.527 qpair failed and we were unable to recover it. 00:37:45.527 [2024-11-18 12:06:11.176927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.527 [2024-11-18 12:06:11.176979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.527 qpair failed and we were unable to recover it. 
00:37:45.527 [2024-11-18 12:06:11.177190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.527 [2024-11-18 12:06:11.177228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.527 qpair failed and we were unable to recover it. 00:37:45.527 [2024-11-18 12:06:11.177375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.527 [2024-11-18 12:06:11.177413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.527 qpair failed and we were unable to recover it. 00:37:45.527 [2024-11-18 12:06:11.177591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.527 [2024-11-18 12:06:11.177626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.527 qpair failed and we were unable to recover it. 00:37:45.527 [2024-11-18 12:06:11.177784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.527 [2024-11-18 12:06:11.177818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.527 qpair failed and we were unable to recover it. 00:37:45.527 [2024-11-18 12:06:11.177968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.527 [2024-11-18 12:06:11.178006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.527 qpair failed and we were unable to recover it. 
00:37:45.527 [2024-11-18 12:06:11.178160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.527 [2024-11-18 12:06:11.178198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.527 qpair failed and we were unable to recover it. 00:37:45.527 [2024-11-18 12:06:11.178366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.527 [2024-11-18 12:06:11.178405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.527 qpair failed and we were unable to recover it. 00:37:45.527 [2024-11-18 12:06:11.178578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.527 [2024-11-18 12:06:11.178614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.527 qpair failed and we were unable to recover it. 00:37:45.527 [2024-11-18 12:06:11.178725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.527 [2024-11-18 12:06:11.178759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.527 qpair failed and we were unable to recover it. 00:37:45.527 [2024-11-18 12:06:11.178948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.527 [2024-11-18 12:06:11.178983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.527 qpair failed and we were unable to recover it. 
00:37:45.527 [2024-11-18 12:06:11.179133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.527 [2024-11-18 12:06:11.179171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.527 qpair failed and we were unable to recover it. 00:37:45.527 [2024-11-18 12:06:11.179324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.527 [2024-11-18 12:06:11.179363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.527 qpair failed and we were unable to recover it. 00:37:45.527 [2024-11-18 12:06:11.179511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.527 [2024-11-18 12:06:11.179546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.527 qpair failed and we were unable to recover it. 00:37:45.527 [2024-11-18 12:06:11.179659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.527 [2024-11-18 12:06:11.179693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.527 qpair failed and we were unable to recover it. 00:37:45.527 [2024-11-18 12:06:11.179867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.527 [2024-11-18 12:06:11.179916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.527 qpair failed and we were unable to recover it. 
00:37:45.527 [2024-11-18 12:06:11.180121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.528 [2024-11-18 12:06:11.180164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.528 qpair failed and we were unable to recover it. 00:37:45.528 [2024-11-18 12:06:11.180313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.528 [2024-11-18 12:06:11.180351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.528 qpair failed and we were unable to recover it. 00:37:45.528 [2024-11-18 12:06:11.180530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.528 [2024-11-18 12:06:11.180581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.528 qpair failed and we were unable to recover it. 00:37:45.528 [2024-11-18 12:06:11.180739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.528 [2024-11-18 12:06:11.180787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.528 qpair failed and we were unable to recover it. 00:37:45.528 [2024-11-18 12:06:11.180955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.528 [2024-11-18 12:06:11.181009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.528 qpair failed and we were unable to recover it. 
00:37:45.528 [2024-11-18 12:06:11.181177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.528 [2024-11-18 12:06:11.181231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.528 qpair failed and we were unable to recover it. 00:37:45.528 [2024-11-18 12:06:11.181373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.528 [2024-11-18 12:06:11.181409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.528 qpair failed and we were unable to recover it. 00:37:45.528 [2024-11-18 12:06:11.181544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.528 [2024-11-18 12:06:11.181579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.528 qpair failed and we were unable to recover it. 00:37:45.528 [2024-11-18 12:06:11.181708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.528 [2024-11-18 12:06:11.181766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.528 qpair failed and we were unable to recover it. 00:37:45.528 [2024-11-18 12:06:11.181903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.528 [2024-11-18 12:06:11.181938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.528 qpair failed and we were unable to recover it. 
00:37:45.528 [2024-11-18 12:06:11.182088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.528 [2024-11-18 12:06:11.182142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.528 qpair failed and we were unable to recover it. 00:37:45.528 [2024-11-18 12:06:11.182249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.528 [2024-11-18 12:06:11.182284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.528 qpair failed and we were unable to recover it. 00:37:45.528 [2024-11-18 12:06:11.182437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.528 [2024-11-18 12:06:11.182486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.528 qpair failed and we were unable to recover it. 00:37:45.528 [2024-11-18 12:06:11.182666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.528 [2024-11-18 12:06:11.182720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.528 qpair failed and we were unable to recover it. 00:37:45.528 [2024-11-18 12:06:11.182865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.528 [2024-11-18 12:06:11.182903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.528 qpair failed and we were unable to recover it. 
00:37:45.528 [2024-11-18 12:06:11.183065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.528 [2024-11-18 12:06:11.183104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.528 qpair failed and we were unable to recover it. 00:37:45.528 [2024-11-18 12:06:11.183248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.528 [2024-11-18 12:06:11.183286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.528 qpair failed and we were unable to recover it. 00:37:45.528 [2024-11-18 12:06:11.183446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.528 [2024-11-18 12:06:11.183504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.528 qpair failed and we were unable to recover it. 00:37:45.528 [2024-11-18 12:06:11.183673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.528 [2024-11-18 12:06:11.183713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.528 qpair failed and we were unable to recover it. 00:37:45.528 [2024-11-18 12:06:11.183911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.528 [2024-11-18 12:06:11.183968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.528 qpair failed and we were unable to recover it. 
00:37:45.528 [2024-11-18 12:06:11.184100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.528 [2024-11-18 12:06:11.184135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.528 qpair failed and we were unable to recover it.
00:37:45.528-00:37:45.531 [... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triplet repeats through 2024-11-18 12:06:11.206029 for tqpair values 0x615000210000, 0x6150001f2f00, and 0x6150001ffe80, all targeting addr=10.0.0.2, port=4420 ...]
00:37:45.531 [2024-11-18 12:06:11.206163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.531 [2024-11-18 12:06:11.206198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.531 qpair failed and we were unable to recover it. 00:37:45.531 [2024-11-18 12:06:11.206300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.531 [2024-11-18 12:06:11.206335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.531 qpair failed and we were unable to recover it. 00:37:45.531 [2024-11-18 12:06:11.206481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.531 [2024-11-18 12:06:11.206523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.531 qpair failed and we were unable to recover it. 00:37:45.531 [2024-11-18 12:06:11.206667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.531 [2024-11-18 12:06:11.206703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.531 qpair failed and we were unable to recover it. 00:37:45.531 [2024-11-18 12:06:11.206882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.531 [2024-11-18 12:06:11.206929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.531 qpair failed and we were unable to recover it. 
00:37:45.531 [2024-11-18 12:06:11.207073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.531 [2024-11-18 12:06:11.207109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.531 qpair failed and we were unable to recover it. 00:37:45.531 [2024-11-18 12:06:11.207251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.531 [2024-11-18 12:06:11.207286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.531 qpair failed and we were unable to recover it. 00:37:45.531 [2024-11-18 12:06:11.207442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.531 [2024-11-18 12:06:11.207476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.531 qpair failed and we were unable to recover it. 00:37:45.531 [2024-11-18 12:06:11.207623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.531 [2024-11-18 12:06:11.207679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.531 qpair failed and we were unable to recover it. 00:37:45.531 [2024-11-18 12:06:11.207853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.531 [2024-11-18 12:06:11.207890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.531 qpair failed and we were unable to recover it. 
00:37:45.531 [2024-11-18 12:06:11.208037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.531 [2024-11-18 12:06:11.208074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.531 qpair failed and we were unable to recover it. 00:37:45.531 [2024-11-18 12:06:11.208217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.532 [2024-11-18 12:06:11.208255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.532 qpair failed and we were unable to recover it. 00:37:45.532 [2024-11-18 12:06:11.208414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.532 [2024-11-18 12:06:11.208450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.532 qpair failed and we were unable to recover it. 00:37:45.532 [2024-11-18 12:06:11.208587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.532 [2024-11-18 12:06:11.208635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.532 qpair failed and we were unable to recover it. 00:37:45.532 [2024-11-18 12:06:11.208784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.532 [2024-11-18 12:06:11.208822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.532 qpair failed and we were unable to recover it. 
00:37:45.532 [2024-11-18 12:06:11.209015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.532 [2024-11-18 12:06:11.209053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.532 qpair failed and we were unable to recover it. 00:37:45.532 [2024-11-18 12:06:11.209225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.532 [2024-11-18 12:06:11.209262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.532 qpair failed and we were unable to recover it. 00:37:45.532 [2024-11-18 12:06:11.209429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.532 [2024-11-18 12:06:11.209463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.532 qpair failed and we were unable to recover it. 00:37:45.532 [2024-11-18 12:06:11.209578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.532 [2024-11-18 12:06:11.209612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.532 qpair failed and we were unable to recover it. 00:37:45.532 [2024-11-18 12:06:11.209738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.532 [2024-11-18 12:06:11.209777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.532 qpair failed and we were unable to recover it. 
00:37:45.532 [2024-11-18 12:06:11.209925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.532 [2024-11-18 12:06:11.209964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.532 qpair failed and we were unable to recover it. 00:37:45.532 [2024-11-18 12:06:11.210115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.532 [2024-11-18 12:06:11.210153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.532 qpair failed and we were unable to recover it. 00:37:45.532 [2024-11-18 12:06:11.210299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.532 [2024-11-18 12:06:11.210336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.532 qpair failed and we were unable to recover it. 00:37:45.532 [2024-11-18 12:06:11.210443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.532 [2024-11-18 12:06:11.210502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.532 qpair failed and we were unable to recover it. 00:37:45.532 [2024-11-18 12:06:11.210638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.532 [2024-11-18 12:06:11.210673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.532 qpair failed and we were unable to recover it. 
00:37:45.532 [2024-11-18 12:06:11.210792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.532 [2024-11-18 12:06:11.210838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.532 qpair failed and we were unable to recover it. 00:37:45.532 [2024-11-18 12:06:11.211016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.532 [2024-11-18 12:06:11.211054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.532 qpair failed and we were unable to recover it. 00:37:45.532 [2024-11-18 12:06:11.211234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.532 [2024-11-18 12:06:11.211275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.532 qpair failed and we were unable to recover it. 00:37:45.532 [2024-11-18 12:06:11.211414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.532 [2024-11-18 12:06:11.211463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.532 qpair failed and we were unable to recover it. 00:37:45.532 [2024-11-18 12:06:11.211621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.532 [2024-11-18 12:06:11.211659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.532 qpair failed and we were unable to recover it. 
00:37:45.532 [2024-11-18 12:06:11.211848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.532 [2024-11-18 12:06:11.211911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.532 qpair failed and we were unable to recover it. 00:37:45.532 [2024-11-18 12:06:11.212025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.532 [2024-11-18 12:06:11.212061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.532 qpair failed and we were unable to recover it. 00:37:45.532 [2024-11-18 12:06:11.212245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.532 [2024-11-18 12:06:11.212280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.532 qpair failed and we were unable to recover it. 00:37:45.532 [2024-11-18 12:06:11.212411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.532 [2024-11-18 12:06:11.212455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.532 qpair failed and we were unable to recover it. 00:37:45.532 [2024-11-18 12:06:11.212612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.532 [2024-11-18 12:06:11.212661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.532 qpair failed and we were unable to recover it. 
00:37:45.532 [2024-11-18 12:06:11.212862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.532 [2024-11-18 12:06:11.212902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.532 qpair failed and we were unable to recover it. 00:37:45.532 [2024-11-18 12:06:11.213099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.532 [2024-11-18 12:06:11.213138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.532 qpair failed and we were unable to recover it. 00:37:45.532 [2024-11-18 12:06:11.213355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.532 [2024-11-18 12:06:11.213412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.532 qpair failed and we were unable to recover it. 00:37:45.532 [2024-11-18 12:06:11.213608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.532 [2024-11-18 12:06:11.213644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.532 qpair failed and we were unable to recover it. 00:37:45.532 [2024-11-18 12:06:11.213889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.532 [2024-11-18 12:06:11.213944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.532 qpair failed and we were unable to recover it. 
00:37:45.532 [2024-11-18 12:06:11.214225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.532 [2024-11-18 12:06:11.214283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.532 qpair failed and we were unable to recover it. 00:37:45.532 [2024-11-18 12:06:11.214431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.532 [2024-11-18 12:06:11.214469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.532 qpair failed and we were unable to recover it. 00:37:45.532 [2024-11-18 12:06:11.214635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.532 [2024-11-18 12:06:11.214683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.532 qpair failed and we were unable to recover it. 00:37:45.532 [2024-11-18 12:06:11.214843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.532 [2024-11-18 12:06:11.214897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.532 qpair failed and we were unable to recover it. 00:37:45.532 [2024-11-18 12:06:11.215094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.532 [2024-11-18 12:06:11.215136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.532 qpair failed and we were unable to recover it. 
00:37:45.532 [2024-11-18 12:06:11.215256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.532 [2024-11-18 12:06:11.215295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.532 qpair failed and we were unable to recover it. 00:37:45.532 [2024-11-18 12:06:11.215430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.532 [2024-11-18 12:06:11.215464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.532 qpair failed and we were unable to recover it. 00:37:45.532 [2024-11-18 12:06:11.215623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.532 [2024-11-18 12:06:11.215657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.532 qpair failed and we were unable to recover it. 00:37:45.532 [2024-11-18 12:06:11.215812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.532 [2024-11-18 12:06:11.215865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.532 qpair failed and we were unable to recover it. 00:37:45.533 [2024-11-18 12:06:11.216023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.533 [2024-11-18 12:06:11.216062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.533 qpair failed and we were unable to recover it. 
00:37:45.533 [2024-11-18 12:06:11.216284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.533 [2024-11-18 12:06:11.216351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.533 qpair failed and we were unable to recover it. 00:37:45.533 [2024-11-18 12:06:11.216507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.533 [2024-11-18 12:06:11.216546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.533 qpair failed and we were unable to recover it. 00:37:45.533 [2024-11-18 12:06:11.216704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.533 [2024-11-18 12:06:11.216742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.533 qpair failed and we were unable to recover it. 00:37:45.533 [2024-11-18 12:06:11.216943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.533 [2024-11-18 12:06:11.216996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.533 qpair failed and we were unable to recover it. 00:37:45.533 [2024-11-18 12:06:11.217178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.533 [2024-11-18 12:06:11.217230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.533 qpair failed and we were unable to recover it. 
00:37:45.533 [2024-11-18 12:06:11.217332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.533 [2024-11-18 12:06:11.217367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.533 qpair failed and we were unable to recover it. 00:37:45.533 [2024-11-18 12:06:11.217475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.533 [2024-11-18 12:06:11.217517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.533 qpair failed and we were unable to recover it. 00:37:45.533 [2024-11-18 12:06:11.217679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.533 [2024-11-18 12:06:11.217719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.533 qpair failed and we were unable to recover it. 00:37:45.533 [2024-11-18 12:06:11.217863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.533 [2024-11-18 12:06:11.217901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.533 qpair failed and we were unable to recover it. 00:37:45.533 [2024-11-18 12:06:11.218050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.533 [2024-11-18 12:06:11.218088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.533 qpair failed and we were unable to recover it. 
00:37:45.533 [2024-11-18 12:06:11.218234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.533 [2024-11-18 12:06:11.218272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.533 qpair failed and we were unable to recover it. 00:37:45.533 [2024-11-18 12:06:11.218420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.533 [2024-11-18 12:06:11.218460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.533 qpair failed and we were unable to recover it. 00:37:45.533 [2024-11-18 12:06:11.218629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.533 [2024-11-18 12:06:11.218664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.533 qpair failed and we were unable to recover it. 00:37:45.533 [2024-11-18 12:06:11.218783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.533 [2024-11-18 12:06:11.218821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.533 qpair failed and we were unable to recover it. 00:37:45.533 [2024-11-18 12:06:11.218930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.533 [2024-11-18 12:06:11.218967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.533 qpair failed and we were unable to recover it. 
00:37:45.533 [2024-11-18 12:06:11.219109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.533 [2024-11-18 12:06:11.219153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.533 qpair failed and we were unable to recover it. 00:37:45.533 [2024-11-18 12:06:11.219339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.533 [2024-11-18 12:06:11.219393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.533 qpair failed and we were unable to recover it. 00:37:45.533 [2024-11-18 12:06:11.219579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.533 [2024-11-18 12:06:11.219629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.533 qpair failed and we were unable to recover it. 00:37:45.533 [2024-11-18 12:06:11.219772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.533 [2024-11-18 12:06:11.219828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.533 qpair failed and we were unable to recover it. 00:37:45.533 [2024-11-18 12:06:11.220003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.533 [2024-11-18 12:06:11.220041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.533 qpair failed and we were unable to recover it. 
00:37:45.533 [2024-11-18 12:06:11.220214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.533 [2024-11-18 12:06:11.220252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.533 qpair failed and we were unable to recover it. 00:37:45.533 [2024-11-18 12:06:11.220396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.533 [2024-11-18 12:06:11.220434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.533 qpair failed and we were unable to recover it. 00:37:45.533 [2024-11-18 12:06:11.220625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.533 [2024-11-18 12:06:11.220661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.533 qpair failed and we were unable to recover it. 00:37:45.533 [2024-11-18 12:06:11.220824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.533 [2024-11-18 12:06:11.220858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.533 qpair failed and we were unable to recover it. 00:37:45.533 [2024-11-18 12:06:11.221011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.533 [2024-11-18 12:06:11.221049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.533 qpair failed and we were unable to recover it. 
00:37:45.533 [2024-11-18 12:06:11.221195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.533 [2024-11-18 12:06:11.221233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.533 qpair failed and we were unable to recover it. 00:37:45.533 [2024-11-18 12:06:11.221408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.533 [2024-11-18 12:06:11.221447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.533 qpair failed and we were unable to recover it. 00:37:45.533 [2024-11-18 12:06:11.221599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.533 [2024-11-18 12:06:11.221634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.533 qpair failed and we were unable to recover it. 00:37:45.533 [2024-11-18 12:06:11.221783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.533 [2024-11-18 12:06:11.221818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.533 qpair failed and we were unable to recover it. 00:37:45.533 [2024-11-18 12:06:11.221965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.533 [2024-11-18 12:06:11.222054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.533 qpair failed and we were unable to recover it. 
00:37:45.533 [2024-11-18 12:06:11.222239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.533 [2024-11-18 12:06:11.222277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.533 qpair failed and we were unable to recover it. 00:37:45.533 [2024-11-18 12:06:11.222388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.533 [2024-11-18 12:06:11.222427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.533 qpair failed and we were unable to recover it. 00:37:45.533 [2024-11-18 12:06:11.222609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.533 [2024-11-18 12:06:11.222658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.533 qpair failed and we were unable to recover it. 00:37:45.533 [2024-11-18 12:06:11.222804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.533 [2024-11-18 12:06:11.222841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.533 qpair failed and we were unable to recover it. 00:37:45.533 [2024-11-18 12:06:11.223003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.533 [2024-11-18 12:06:11.223043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.533 qpair failed and we were unable to recover it. 
00:37:45.533 [2024-11-18 12:06:11.223220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.533 [2024-11-18 12:06:11.223257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.533 qpair failed and we were unable to recover it.
00:37:45.533 [2024-11-18 12:06:11.223399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.533 [2024-11-18 12:06:11.223436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.534 qpair failed and we were unable to recover it.
00:37:45.534 [2024-11-18 12:06:11.223577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.534 [2024-11-18 12:06:11.223612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.534 qpair failed and we were unable to recover it.
00:37:45.534 [2024-11-18 12:06:11.223790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.534 [2024-11-18 12:06:11.223829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.534 qpair failed and we were unable to recover it.
00:37:45.534 [2024-11-18 12:06:11.224034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.534 [2024-11-18 12:06:11.224101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.534 qpair failed and we were unable to recover it.
00:37:45.534 [2024-11-18 12:06:11.224219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.534 [2024-11-18 12:06:11.224257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.534 qpair failed and we were unable to recover it.
00:37:45.534 [2024-11-18 12:06:11.224406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.534 [2024-11-18 12:06:11.224458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.534 qpair failed and we were unable to recover it.
00:37:45.534 [2024-11-18 12:06:11.224577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.534 [2024-11-18 12:06:11.224611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.534 qpair failed and we were unable to recover it.
00:37:45.534 [2024-11-18 12:06:11.224753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.534 [2024-11-18 12:06:11.224790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.534 qpair failed and we were unable to recover it.
00:37:45.534 [2024-11-18 12:06:11.225011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.534 [2024-11-18 12:06:11.225050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.534 qpair failed and we were unable to recover it.
00:37:45.534 [2024-11-18 12:06:11.225308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.534 [2024-11-18 12:06:11.225364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.534 qpair failed and we were unable to recover it.
00:37:45.534 [2024-11-18 12:06:11.225554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.534 [2024-11-18 12:06:11.225589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.534 qpair failed and we were unable to recover it.
00:37:45.534 [2024-11-18 12:06:11.225704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.534 [2024-11-18 12:06:11.225737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.534 qpair failed and we were unable to recover it.
00:37:45.534 [2024-11-18 12:06:11.225871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.534 [2024-11-18 12:06:11.225924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.534 qpair failed and we were unable to recover it.
00:37:45.534 [2024-11-18 12:06:11.226053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.534 [2024-11-18 12:06:11.226091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.534 qpair failed and we were unable to recover it.
00:37:45.534 [2024-11-18 12:06:11.226318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.534 [2024-11-18 12:06:11.226358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.534 qpair failed and we were unable to recover it.
00:37:45.534 [2024-11-18 12:06:11.226481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.534 [2024-11-18 12:06:11.226547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.534 qpair failed and we were unable to recover it.
00:37:45.534 [2024-11-18 12:06:11.226658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.534 [2024-11-18 12:06:11.226693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.534 qpair failed and we were unable to recover it.
00:37:45.534 [2024-11-18 12:06:11.226847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.534 [2024-11-18 12:06:11.226885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.534 qpair failed and we were unable to recover it.
00:37:45.534 [2024-11-18 12:06:11.227003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.534 [2024-11-18 12:06:11.227041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.534 qpair failed and we were unable to recover it.
00:37:45.534 [2024-11-18 12:06:11.227191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.534 [2024-11-18 12:06:11.227237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.534 qpair failed and we were unable to recover it.
00:37:45.534 [2024-11-18 12:06:11.227415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.534 [2024-11-18 12:06:11.227463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.534 qpair failed and we were unable to recover it.
00:37:45.534 [2024-11-18 12:06:11.227626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.534 [2024-11-18 12:06:11.227665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.534 qpair failed and we were unable to recover it.
00:37:45.534 [2024-11-18 12:06:11.227819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.534 [2024-11-18 12:06:11.227871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.534 qpair failed and we were unable to recover it.
00:37:45.534 [2024-11-18 12:06:11.228054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.534 [2024-11-18 12:06:11.228120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.534 qpair failed and we were unable to recover it.
00:37:45.534 [2024-11-18 12:06:11.228374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.534 [2024-11-18 12:06:11.228430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.534 qpair failed and we were unable to recover it.
00:37:45.534 [2024-11-18 12:06:11.228545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.534 [2024-11-18 12:06:11.228580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.534 qpair failed and we were unable to recover it.
00:37:45.534 [2024-11-18 12:06:11.228688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.534 [2024-11-18 12:06:11.228722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.534 qpair failed and we were unable to recover it.
00:37:45.534 [2024-11-18 12:06:11.228939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.534 [2024-11-18 12:06:11.228996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.534 qpair failed and we were unable to recover it.
00:37:45.534 [2024-11-18 12:06:11.229113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.534 [2024-11-18 12:06:11.229151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.534 qpair failed and we were unable to recover it.
00:37:45.534 [2024-11-18 12:06:11.229337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.534 [2024-11-18 12:06:11.229377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.534 qpair failed and we were unable to recover it.
00:37:45.534 [2024-11-18 12:06:11.229517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.534 [2024-11-18 12:06:11.229568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.534 qpair failed and we were unable to recover it.
00:37:45.534 [2024-11-18 12:06:11.229694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.534 [2024-11-18 12:06:11.229733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.534 qpair failed and we were unable to recover it.
00:37:45.534 [2024-11-18 12:06:11.229925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.534 [2024-11-18 12:06:11.229984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.534 qpair failed and we were unable to recover it.
00:37:45.534 [2024-11-18 12:06:11.230219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.534 [2024-11-18 12:06:11.230277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.535 qpair failed and we were unable to recover it.
00:37:45.535 [2024-11-18 12:06:11.230436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.535 [2024-11-18 12:06:11.230508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.535 qpair failed and we were unable to recover it.
00:37:45.535 [2024-11-18 12:06:11.230692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.535 [2024-11-18 12:06:11.230727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.535 qpair failed and we were unable to recover it.
00:37:45.535 [2024-11-18 12:06:11.230900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.535 [2024-11-18 12:06:11.230961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.535 qpair failed and we were unable to recover it.
00:37:45.535 [2024-11-18 12:06:11.231140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.535 [2024-11-18 12:06:11.231200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.535 qpair failed and we were unable to recover it.
00:37:45.535 [2024-11-18 12:06:11.231406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.535 [2024-11-18 12:06:11.231444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.535 qpair failed and we were unable to recover it.
00:37:45.535 [2024-11-18 12:06:11.231653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.535 [2024-11-18 12:06:11.231688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.535 qpair failed and we were unable to recover it.
00:37:45.535 [2024-11-18 12:06:11.231823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.535 [2024-11-18 12:06:11.231860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.535 qpair failed and we were unable to recover it.
00:37:45.535 [2024-11-18 12:06:11.232054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.535 [2024-11-18 12:06:11.232091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.535 qpair failed and we were unable to recover it.
00:37:45.535 [2024-11-18 12:06:11.232232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.535 [2024-11-18 12:06:11.232283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.535 qpair failed and we were unable to recover it.
00:37:45.535 [2024-11-18 12:06:11.232457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.535 [2024-11-18 12:06:11.232502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.535 qpair failed and we were unable to recover it.
00:37:45.535 [2024-11-18 12:06:11.232622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.535 [2024-11-18 12:06:11.232656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.535 qpair failed and we were unable to recover it.
00:37:45.535 [2024-11-18 12:06:11.232799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.535 [2024-11-18 12:06:11.232833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.535 qpair failed and we were unable to recover it.
00:37:45.535 [2024-11-18 12:06:11.232972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.535 [2024-11-18 12:06:11.233009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.535 qpair failed and we were unable to recover it.
00:37:45.535 [2024-11-18 12:06:11.233230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.535 [2024-11-18 12:06:11.233269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.535 qpair failed and we were unable to recover it.
00:37:45.535 [2024-11-18 12:06:11.233394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.535 [2024-11-18 12:06:11.233447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.535 qpair failed and we were unable to recover it.
00:37:45.535 [2024-11-18 12:06:11.233566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.535 [2024-11-18 12:06:11.233600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.535 qpair failed and we were unable to recover it.
00:37:45.535 [2024-11-18 12:06:11.233736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.535 [2024-11-18 12:06:11.233769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.535 qpair failed and we were unable to recover it.
00:37:45.535 [2024-11-18 12:06:11.233875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.535 [2024-11-18 12:06:11.233929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.535 qpair failed and we were unable to recover it.
00:37:45.535 [2024-11-18 12:06:11.234082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.535 [2024-11-18 12:06:11.234119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.535 qpair failed and we were unable to recover it.
00:37:45.535 [2024-11-18 12:06:11.234235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.535 [2024-11-18 12:06:11.234285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.535 qpair failed and we were unable to recover it.
00:37:45.535 [2024-11-18 12:06:11.234433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.535 [2024-11-18 12:06:11.234471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.535 qpair failed and we were unable to recover it.
00:37:45.535 [2024-11-18 12:06:11.234663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.535 [2024-11-18 12:06:11.234711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.535 qpair failed and we were unable to recover it.
00:37:45.535 [2024-11-18 12:06:11.234852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.535 [2024-11-18 12:06:11.234888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.535 qpair failed and we were unable to recover it.
00:37:45.535 [2024-11-18 12:06:11.235045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.535 [2024-11-18 12:06:11.235084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.535 qpair failed and we were unable to recover it.
00:37:45.535 [2024-11-18 12:06:11.235204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.535 [2024-11-18 12:06:11.235242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.535 qpair failed and we were unable to recover it.
00:37:45.535 [2024-11-18 12:06:11.235396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.535 [2024-11-18 12:06:11.235430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.535 qpair failed and we were unable to recover it.
00:37:45.535 [2024-11-18 12:06:11.235551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.535 [2024-11-18 12:06:11.235586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.535 qpair failed and we were unable to recover it.
00:37:45.535 [2024-11-18 12:06:11.235723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.535 [2024-11-18 12:06:11.235759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.535 qpair failed and we were unable to recover it.
00:37:45.535 [2024-11-18 12:06:11.235901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.535 [2024-11-18 12:06:11.235968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.535 qpair failed and we were unable to recover it.
00:37:45.535 [2024-11-18 12:06:11.236162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.535 [2024-11-18 12:06:11.236218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.535 qpair failed and we were unable to recover it.
00:37:45.535 [2024-11-18 12:06:11.236364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.535 [2024-11-18 12:06:11.236399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.535 qpair failed and we were unable to recover it.
00:37:45.535 [2024-11-18 12:06:11.236507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.535 [2024-11-18 12:06:11.236542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.535 qpair failed and we were unable to recover it.
00:37:45.535 [2024-11-18 12:06:11.236691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.535 [2024-11-18 12:06:11.236746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.535 qpair failed and we were unable to recover it.
00:37:45.535 [2024-11-18 12:06:11.236886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.535 [2024-11-18 12:06:11.236921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.535 qpair failed and we were unable to recover it.
00:37:45.535 [2024-11-18 12:06:11.237055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.535 [2024-11-18 12:06:11.237090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.535 qpair failed and we were unable to recover it.
00:37:45.535 [2024-11-18 12:06:11.237253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.535 [2024-11-18 12:06:11.237287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.535 qpair failed and we were unable to recover it.
00:37:45.535 [2024-11-18 12:06:11.237451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.535 [2024-11-18 12:06:11.237484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.535 qpair failed and we were unable to recover it.
00:37:45.535 [2024-11-18 12:06:11.237601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.536 [2024-11-18 12:06:11.237635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.536 qpair failed and we were unable to recover it.
00:37:45.536 [2024-11-18 12:06:11.237767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.536 [2024-11-18 12:06:11.237821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.536 qpair failed and we were unable to recover it.
00:37:45.536 [2024-11-18 12:06:11.237962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.536 [2024-11-18 12:06:11.238017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.536 qpair failed and we were unable to recover it.
00:37:45.536 [2024-11-18 12:06:11.238185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.536 [2024-11-18 12:06:11.238239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.536 qpair failed and we were unable to recover it.
00:37:45.536 [2024-11-18 12:06:11.238401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.536 [2024-11-18 12:06:11.238441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.536 qpair failed and we were unable to recover it.
00:37:45.536 [2024-11-18 12:06:11.238620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.536 [2024-11-18 12:06:11.238656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.536 qpair failed and we were unable to recover it.
00:37:45.536 [2024-11-18 12:06:11.238819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.536 [2024-11-18 12:06:11.238857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.536 qpair failed and we were unable to recover it.
00:37:45.536 [2024-11-18 12:06:11.238983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.536 [2024-11-18 12:06:11.239021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.536 qpair failed and we were unable to recover it.
00:37:45.536 [2024-11-18 12:06:11.239172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.536 [2024-11-18 12:06:11.239210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.536 qpair failed and we were unable to recover it.
00:37:45.536 [2024-11-18 12:06:11.239402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.536 [2024-11-18 12:06:11.239436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.536 qpair failed and we were unable to recover it.
00:37:45.536 [2024-11-18 12:06:11.239585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.536 [2024-11-18 12:06:11.239620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.536 qpair failed and we were unable to recover it.
00:37:45.536 [2024-11-18 12:06:11.239769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.536 [2024-11-18 12:06:11.239804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.536 qpair failed and we were unable to recover it.
00:37:45.536 [2024-11-18 12:06:11.239959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.536 [2024-11-18 12:06:11.239998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.536 qpair failed and we were unable to recover it.
00:37:45.536 [2024-11-18 12:06:11.240121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.536 [2024-11-18 12:06:11.240159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.536 qpair failed and we were unable to recover it.
00:37:45.536 [2024-11-18 12:06:11.240297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.536 [2024-11-18 12:06:11.240347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.536 qpair failed and we were unable to recover it.
00:37:45.536 [2024-11-18 12:06:11.240443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.536 [2024-11-18 12:06:11.240481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.536 qpair failed and we were unable to recover it.
00:37:45.536 [2024-11-18 12:06:11.240592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.536 [2024-11-18 12:06:11.240626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.536 qpair failed and we were unable to recover it.
00:37:45.536 [2024-11-18 12:06:11.240778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.536 [2024-11-18 12:06:11.240814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.536 qpair failed and we were unable to recover it.
00:37:45.536 [2024-11-18 12:06:11.240963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.536 [2024-11-18 12:06:11.241003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.536 qpair failed and we were unable to recover it.
00:37:45.536 [2024-11-18 12:06:11.241132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.536 [2024-11-18 12:06:11.241171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.536 qpair failed and we were unable to recover it.
00:37:45.536 [2024-11-18 12:06:11.241365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.536 [2024-11-18 12:06:11.241413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.536 qpair failed and we were unable to recover it.
00:37:45.536 [2024-11-18 12:06:11.241572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.536 [2024-11-18 12:06:11.241611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.536 qpair failed and we were unable to recover it.
00:37:45.536 [2024-11-18 12:06:11.241752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.536 [2024-11-18 12:06:11.241806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.536 qpair failed and we were unable to recover it.
00:37:45.536 [2024-11-18 12:06:11.241918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.536 [2024-11-18 12:06:11.241953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.536 qpair failed and we were unable to recover it. 00:37:45.536 [2024-11-18 12:06:11.242108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.536 [2024-11-18 12:06:11.242148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.536 qpair failed and we were unable to recover it. 00:37:45.536 [2024-11-18 12:06:11.242301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.536 [2024-11-18 12:06:11.242338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.536 qpair failed and we were unable to recover it. 00:37:45.536 [2024-11-18 12:06:11.242508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.536 [2024-11-18 12:06:11.242574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.536 qpair failed and we were unable to recover it. 00:37:45.536 [2024-11-18 12:06:11.242682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.536 [2024-11-18 12:06:11.242718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.536 qpair failed and we were unable to recover it. 
00:37:45.536 [2024-11-18 12:06:11.242921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.536 [2024-11-18 12:06:11.242974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.536 qpair failed and we were unable to recover it. 00:37:45.536 [2024-11-18 12:06:11.243109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.536 [2024-11-18 12:06:11.243162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.536 qpair failed and we were unable to recover it. 00:37:45.536 [2024-11-18 12:06:11.243275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.536 [2024-11-18 12:06:11.243310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.536 qpair failed and we were unable to recover it. 00:37:45.536 [2024-11-18 12:06:11.243445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.536 [2024-11-18 12:06:11.243479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.536 qpair failed and we were unable to recover it. 00:37:45.536 [2024-11-18 12:06:11.243605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.536 [2024-11-18 12:06:11.243640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.536 qpair failed and we were unable to recover it. 
00:37:45.536 [2024-11-18 12:06:11.243774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.536 [2024-11-18 12:06:11.243809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.536 qpair failed and we were unable to recover it. 00:37:45.536 [2024-11-18 12:06:11.243973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.536 [2024-11-18 12:06:11.244006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.536 qpair failed and we were unable to recover it. 00:37:45.536 [2024-11-18 12:06:11.244137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.536 [2024-11-18 12:06:11.244175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.536 qpair failed and we were unable to recover it. 00:37:45.536 [2024-11-18 12:06:11.244377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.536 [2024-11-18 12:06:11.244431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.536 qpair failed and we were unable to recover it. 00:37:45.536 [2024-11-18 12:06:11.244580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.536 [2024-11-18 12:06:11.244617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.536 qpair failed and we were unable to recover it. 
00:37:45.536 [2024-11-18 12:06:11.244807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.537 [2024-11-18 12:06:11.244864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.537 qpair failed and we were unable to recover it. 00:37:45.537 [2024-11-18 12:06:11.245017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.537 [2024-11-18 12:06:11.245070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.537 qpair failed and we were unable to recover it. 00:37:45.537 [2024-11-18 12:06:11.245226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.537 [2024-11-18 12:06:11.245278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.537 qpair failed and we were unable to recover it. 00:37:45.537 [2024-11-18 12:06:11.245414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.537 [2024-11-18 12:06:11.245448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.537 qpair failed and we were unable to recover it. 00:37:45.537 [2024-11-18 12:06:11.245578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.537 [2024-11-18 12:06:11.245614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.537 qpair failed and we were unable to recover it. 
00:37:45.537 [2024-11-18 12:06:11.245721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.537 [2024-11-18 12:06:11.245756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.537 qpair failed and we were unable to recover it. 00:37:45.537 [2024-11-18 12:06:11.245903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.537 [2024-11-18 12:06:11.245955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.537 qpair failed and we were unable to recover it. 00:37:45.537 [2024-11-18 12:06:11.246125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.537 [2024-11-18 12:06:11.246164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.537 qpair failed and we were unable to recover it. 00:37:45.537 [2024-11-18 12:06:11.246345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.537 [2024-11-18 12:06:11.246383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.537 qpair failed and we were unable to recover it. 00:37:45.537 [2024-11-18 12:06:11.246559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.537 [2024-11-18 12:06:11.246612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.537 qpair failed and we were unable to recover it. 
00:37:45.537 [2024-11-18 12:06:11.246766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.537 [2024-11-18 12:06:11.246822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.537 qpair failed and we were unable to recover it. 00:37:45.537 [2024-11-18 12:06:11.247010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.537 [2024-11-18 12:06:11.247064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.537 qpair failed and we were unable to recover it. 00:37:45.537 [2024-11-18 12:06:11.247219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.537 [2024-11-18 12:06:11.247257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.537 qpair failed and we were unable to recover it. 00:37:45.537 [2024-11-18 12:06:11.247377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.537 [2024-11-18 12:06:11.247410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.537 qpair failed and we were unable to recover it. 00:37:45.537 [2024-11-18 12:06:11.247531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.537 [2024-11-18 12:06:11.247566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.537 qpair failed and we were unable to recover it. 
00:37:45.537 [2024-11-18 12:06:11.247727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.537 [2024-11-18 12:06:11.247766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.537 qpair failed and we were unable to recover it. 00:37:45.537 [2024-11-18 12:06:11.247941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.537 [2024-11-18 12:06:11.247978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.537 qpair failed and we were unable to recover it. 00:37:45.537 [2024-11-18 12:06:11.248129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.537 [2024-11-18 12:06:11.248173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.537 qpair failed and we were unable to recover it. 00:37:45.537 [2024-11-18 12:06:11.248331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.537 [2024-11-18 12:06:11.248365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.537 qpair failed and we were unable to recover it. 00:37:45.537 [2024-11-18 12:06:11.248466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.537 [2024-11-18 12:06:11.248505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.537 qpair failed and we were unable to recover it. 
00:37:45.537 [2024-11-18 12:06:11.248647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.537 [2024-11-18 12:06:11.248682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.537 qpair failed and we were unable to recover it. 00:37:45.537 [2024-11-18 12:06:11.248843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.537 [2024-11-18 12:06:11.248899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.537 qpair failed and we were unable to recover it. 00:37:45.537 [2024-11-18 12:06:11.249030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.537 [2024-11-18 12:06:11.249095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.537 qpair failed and we were unable to recover it. 00:37:45.537 [2024-11-18 12:06:11.249211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.537 [2024-11-18 12:06:11.249247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.537 qpair failed and we were unable to recover it. 00:37:45.537 [2024-11-18 12:06:11.249362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.537 [2024-11-18 12:06:11.249397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.537 qpair failed and we were unable to recover it. 
00:37:45.537 [2024-11-18 12:06:11.249587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.537 [2024-11-18 12:06:11.249641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.537 qpair failed and we were unable to recover it. 00:37:45.537 [2024-11-18 12:06:11.249783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.537 [2024-11-18 12:06:11.249823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.537 qpair failed and we were unable to recover it. 00:37:45.537 [2024-11-18 12:06:11.249964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.537 [2024-11-18 12:06:11.250003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.537 qpair failed and we were unable to recover it. 00:37:45.537 [2024-11-18 12:06:11.250192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.537 [2024-11-18 12:06:11.250231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.537 qpair failed and we were unable to recover it. 00:37:45.537 [2024-11-18 12:06:11.250388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.537 [2024-11-18 12:06:11.250427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.537 qpair failed and we were unable to recover it. 
00:37:45.537 [2024-11-18 12:06:11.250595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.537 [2024-11-18 12:06:11.250631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.537 qpair failed and we were unable to recover it. 00:37:45.537 [2024-11-18 12:06:11.250793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.537 [2024-11-18 12:06:11.250831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.537 qpair failed and we were unable to recover it. 00:37:45.537 [2024-11-18 12:06:11.251013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.537 [2024-11-18 12:06:11.251051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.537 qpair failed and we were unable to recover it. 00:37:45.537 [2024-11-18 12:06:11.251190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.537 [2024-11-18 12:06:11.251226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.537 qpair failed and we were unable to recover it. 00:37:45.537 [2024-11-18 12:06:11.251391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.537 [2024-11-18 12:06:11.251427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.537 qpair failed and we were unable to recover it. 
00:37:45.537 [2024-11-18 12:06:11.251596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.537 [2024-11-18 12:06:11.251630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.537 qpair failed and we were unable to recover it. 00:37:45.537 [2024-11-18 12:06:11.251784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.537 [2024-11-18 12:06:11.251837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.537 qpair failed and we were unable to recover it. 00:37:45.537 [2024-11-18 12:06:11.252017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.537 [2024-11-18 12:06:11.252068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.538 qpair failed and we were unable to recover it. 00:37:45.538 [2024-11-18 12:06:11.252212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.538 [2024-11-18 12:06:11.252246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.538 qpair failed and we were unable to recover it. 00:37:45.538 [2024-11-18 12:06:11.252370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.538 [2024-11-18 12:06:11.252418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.538 qpair failed and we were unable to recover it. 
00:37:45.538 [2024-11-18 12:06:11.252564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.538 [2024-11-18 12:06:11.252599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.538 qpair failed and we were unable to recover it. 00:37:45.538 [2024-11-18 12:06:11.252739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.538 [2024-11-18 12:06:11.252792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.538 qpair failed and we were unable to recover it. 00:37:45.538 [2024-11-18 12:06:11.252960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.538 [2024-11-18 12:06:11.252997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.538 qpair failed and we were unable to recover it. 00:37:45.538 [2024-11-18 12:06:11.253169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.538 [2024-11-18 12:06:11.253207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.538 qpair failed and we were unable to recover it. 00:37:45.538 [2024-11-18 12:06:11.253338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.538 [2024-11-18 12:06:11.253375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.538 qpair failed and we were unable to recover it. 
00:37:45.538 [2024-11-18 12:06:11.253511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.538 [2024-11-18 12:06:11.253546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.538 qpair failed and we were unable to recover it. 00:37:45.538 [2024-11-18 12:06:11.253654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.538 [2024-11-18 12:06:11.253689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.538 qpair failed and we were unable to recover it. 00:37:45.538 [2024-11-18 12:06:11.253830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.538 [2024-11-18 12:06:11.253882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.538 qpair failed and we were unable to recover it. 00:37:45.538 [2024-11-18 12:06:11.254106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.538 [2024-11-18 12:06:11.254144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.538 qpair failed and we were unable to recover it. 00:37:45.538 [2024-11-18 12:06:11.254259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.538 [2024-11-18 12:06:11.254297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.538 qpair failed and we were unable to recover it. 
00:37:45.538 [2024-11-18 12:06:11.254464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.538 [2024-11-18 12:06:11.254517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.538 qpair failed and we were unable to recover it. 00:37:45.538 [2024-11-18 12:06:11.254658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.538 [2024-11-18 12:06:11.254692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.538 qpair failed and we were unable to recover it. 00:37:45.538 [2024-11-18 12:06:11.254799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.538 [2024-11-18 12:06:11.254834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.538 qpair failed and we were unable to recover it. 00:37:45.538 [2024-11-18 12:06:11.255023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.538 [2024-11-18 12:06:11.255061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.538 qpair failed and we were unable to recover it. 00:37:45.538 [2024-11-18 12:06:11.255178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.538 [2024-11-18 12:06:11.255230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.538 qpair failed and we were unable to recover it. 
00:37:45.538 [2024-11-18 12:06:11.255367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.538 [2024-11-18 12:06:11.255401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.538 qpair failed and we were unable to recover it. 00:37:45.538 [2024-11-18 12:06:11.255539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.538 [2024-11-18 12:06:11.255575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.538 qpair failed and we were unable to recover it. 00:37:45.538 [2024-11-18 12:06:11.255691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.538 [2024-11-18 12:06:11.255729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.538 qpair failed and we were unable to recover it. 00:37:45.538 [2024-11-18 12:06:11.255871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.538 [2024-11-18 12:06:11.255923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.538 qpair failed and we were unable to recover it. 00:37:45.538 [2024-11-18 12:06:11.256071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.538 [2024-11-18 12:06:11.256109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.538 qpair failed and we were unable to recover it. 
00:37:45.538 [2024-11-18 12:06:11.256281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.538 [2024-11-18 12:06:11.256318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.538 qpair failed and we were unable to recover it. 00:37:45.538 [2024-11-18 12:06:11.256476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.538 [2024-11-18 12:06:11.256548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.538 qpair failed and we were unable to recover it. 00:37:45.538 [2024-11-18 12:06:11.256723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.538 [2024-11-18 12:06:11.256760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.538 qpair failed and we were unable to recover it. 00:37:45.538 [2024-11-18 12:06:11.257021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.538 [2024-11-18 12:06:11.257059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.538 qpair failed and we were unable to recover it. 00:37:45.538 [2024-11-18 12:06:11.257234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.538 [2024-11-18 12:06:11.257272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.538 qpair failed and we were unable to recover it. 
00:37:45.538 [2024-11-18 12:06:11.257422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.538 [2024-11-18 12:06:11.257460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.538 qpair failed and we were unable to recover it. 00:37:45.538 [2024-11-18 12:06:11.257666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.538 [2024-11-18 12:06:11.257714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.538 qpair failed and we were unable to recover it. 00:37:45.538 [2024-11-18 12:06:11.257903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.538 [2024-11-18 12:06:11.257956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.538 qpair failed and we were unable to recover it. 00:37:45.538 [2024-11-18 12:06:11.258094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.538 [2024-11-18 12:06:11.258149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.538 qpair failed and we were unable to recover it. 00:37:45.538 [2024-11-18 12:06:11.258292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.538 [2024-11-18 12:06:11.258327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.538 qpair failed and we were unable to recover it. 
00:37:45.538 [2024-11-18 12:06:11.258500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.538 [2024-11-18 12:06:11.258536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.538 qpair failed and we were unable to recover it. 00:37:45.539 [2024-11-18 12:06:11.258653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.539 [2024-11-18 12:06:11.258715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.539 qpair failed and we were unable to recover it. 00:37:45.539 [2024-11-18 12:06:11.258859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.539 [2024-11-18 12:06:11.258896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.539 qpair failed and we were unable to recover it. 00:37:45.539 [2024-11-18 12:06:11.259018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.539 [2024-11-18 12:06:11.259056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.539 qpair failed and we were unable to recover it. 00:37:45.539 [2024-11-18 12:06:11.259207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.539 [2024-11-18 12:06:11.259245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.539 qpair failed and we were unable to recover it. 
00:37:45.539 [2024-11-18 12:06:11.259434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.539 [2024-11-18 12:06:11.259467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.539 qpair failed and we were unable to recover it. 00:37:45.539 [2024-11-18 12:06:11.259638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.539 [2024-11-18 12:06:11.259674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.539 qpair failed and we were unable to recover it. 00:37:45.539 [2024-11-18 12:06:11.259793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.539 [2024-11-18 12:06:11.259828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.539 qpair failed and we were unable to recover it. 00:37:45.539 [2024-11-18 12:06:11.259982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.539 [2024-11-18 12:06:11.260038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.539 qpair failed and we were unable to recover it. 00:37:45.539 [2024-11-18 12:06:11.260174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.539 [2024-11-18 12:06:11.260214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.539 qpair failed and we were unable to recover it. 
00:37:45.539 [2024-11-18 12:06:11.260346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.539 [2024-11-18 12:06:11.260381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.539 qpair failed and we were unable to recover it. 00:37:45.539 [2024-11-18 12:06:11.260536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.539 [2024-11-18 12:06:11.260585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.539 qpair failed and we were unable to recover it. 00:37:45.539 [2024-11-18 12:06:11.260742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.539 [2024-11-18 12:06:11.260790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.539 qpair failed and we were unable to recover it. 00:37:45.539 [2024-11-18 12:06:11.260937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.539 [2024-11-18 12:06:11.260973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.539 qpair failed and we were unable to recover it. 00:37:45.539 [2024-11-18 12:06:11.261165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.539 [2024-11-18 12:06:11.261204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.539 qpair failed and we were unable to recover it. 
00:37:45.539 [2024-11-18 12:06:11.261349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.539 [2024-11-18 12:06:11.261387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.539 qpair failed and we were unable to recover it. 00:37:45.539 [2024-11-18 12:06:11.261567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.539 [2024-11-18 12:06:11.261603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.539 qpair failed and we were unable to recover it. 00:37:45.539 [2024-11-18 12:06:11.261754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.539 [2024-11-18 12:06:11.261792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.539 qpair failed and we were unable to recover it. 00:37:45.539 [2024-11-18 12:06:11.261904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.539 [2024-11-18 12:06:11.261942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.539 qpair failed and we were unable to recover it. 00:37:45.539 [2024-11-18 12:06:11.262131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.539 [2024-11-18 12:06:11.262169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.539 qpair failed and we were unable to recover it. 
00:37:45.539 [2024-11-18 12:06:11.262326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.539 [2024-11-18 12:06:11.262362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.539 qpair failed and we were unable to recover it. 00:37:45.539 [2024-11-18 12:06:11.262526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.539 [2024-11-18 12:06:11.262562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.539 qpair failed and we were unable to recover it. 00:37:45.539 [2024-11-18 12:06:11.262681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.539 [2024-11-18 12:06:11.262716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.539 qpair failed and we were unable to recover it. 00:37:45.539 [2024-11-18 12:06:11.262919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.539 [2024-11-18 12:06:11.262971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.539 qpair failed and we were unable to recover it. 00:37:45.539 [2024-11-18 12:06:11.263122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.539 [2024-11-18 12:06:11.263175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.539 qpair failed and we were unable to recover it. 
00:37:45.539 [2024-11-18 12:06:11.263348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.539 [2024-11-18 12:06:11.263390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.539 qpair failed and we were unable to recover it. 00:37:45.539 [2024-11-18 12:06:11.263583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.539 [2024-11-18 12:06:11.263619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.539 qpair failed and we were unable to recover it. 00:37:45.539 [2024-11-18 12:06:11.263749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.539 [2024-11-18 12:06:11.263793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.539 qpair failed and we were unable to recover it. 00:37:45.539 [2024-11-18 12:06:11.263972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.539 [2024-11-18 12:06:11.264011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.539 qpair failed and we were unable to recover it. 00:37:45.539 [2024-11-18 12:06:11.264150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.539 [2024-11-18 12:06:11.264188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.539 qpair failed and we were unable to recover it. 
00:37:45.539 [2024-11-18 12:06:11.264322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.539 [2024-11-18 12:06:11.264356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.539 qpair failed and we were unable to recover it. 00:37:45.539 [2024-11-18 12:06:11.264472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.539 [2024-11-18 12:06:11.264513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.539 qpair failed and we were unable to recover it. 00:37:45.539 [2024-11-18 12:06:11.264672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.539 [2024-11-18 12:06:11.264736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.539 qpair failed and we were unable to recover it. 00:37:45.539 [2024-11-18 12:06:11.264919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.539 [2024-11-18 12:06:11.264971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.539 qpair failed and we were unable to recover it. 00:37:45.539 [2024-11-18 12:06:11.265085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.539 [2024-11-18 12:06:11.265119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.539 qpair failed and we were unable to recover it. 
00:37:45.539 [2024-11-18 12:06:11.265271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.539 [2024-11-18 12:06:11.265318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.539 qpair failed and we were unable to recover it. 00:37:45.539 [2024-11-18 12:06:11.265471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.539 [2024-11-18 12:06:11.265518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.539 qpair failed and we were unable to recover it. 00:37:45.539 [2024-11-18 12:06:11.265655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.539 [2024-11-18 12:06:11.265689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.539 qpair failed and we were unable to recover it. 00:37:45.539 [2024-11-18 12:06:11.265820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.540 [2024-11-18 12:06:11.265853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.540 qpair failed and we were unable to recover it. 00:37:45.540 [2024-11-18 12:06:11.265961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.540 [2024-11-18 12:06:11.265996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.540 qpair failed and we were unable to recover it. 
00:37:45.540 [2024-11-18 12:06:11.266137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.540 [2024-11-18 12:06:11.266172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.540 qpair failed and we were unable to recover it. 00:37:45.540 [2024-11-18 12:06:11.266302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.540 [2024-11-18 12:06:11.266356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.540 qpair failed and we were unable to recover it. 00:37:45.540 [2024-11-18 12:06:11.266511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.540 [2024-11-18 12:06:11.266548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.540 qpair failed and we were unable to recover it. 00:37:45.540 [2024-11-18 12:06:11.266708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.540 [2024-11-18 12:06:11.266760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.540 qpair failed and we were unable to recover it. 00:37:45.540 [2024-11-18 12:06:11.266940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.540 [2024-11-18 12:06:11.266993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.540 qpair failed and we were unable to recover it. 
00:37:45.540 [2024-11-18 12:06:11.267115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.540 [2024-11-18 12:06:11.267153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.540 qpair failed and we were unable to recover it. 00:37:45.540 [2024-11-18 12:06:11.267321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.540 [2024-11-18 12:06:11.267356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.540 qpair failed and we were unable to recover it. 00:37:45.540 [2024-11-18 12:06:11.267504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.540 [2024-11-18 12:06:11.267561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.540 qpair failed and we were unable to recover it. 00:37:45.540 [2024-11-18 12:06:11.267703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.540 [2024-11-18 12:06:11.267741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.540 qpair failed and we were unable to recover it. 00:37:45.540 [2024-11-18 12:06:11.267923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.540 [2024-11-18 12:06:11.267962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.540 qpair failed and we were unable to recover it. 
00:37:45.540 [2024-11-18 12:06:11.268133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.540 [2024-11-18 12:06:11.268171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.540 qpair failed and we were unable to recover it. 00:37:45.540 [2024-11-18 12:06:11.268360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.540 [2024-11-18 12:06:11.268398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.540 qpair failed and we were unable to recover it. 00:37:45.540 [2024-11-18 12:06:11.268559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.540 [2024-11-18 12:06:11.268594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.540 qpair failed and we were unable to recover it. 00:37:45.540 [2024-11-18 12:06:11.268757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.540 [2024-11-18 12:06:11.268794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.540 qpair failed and we were unable to recover it. 00:37:45.540 [2024-11-18 12:06:11.268949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.540 [2024-11-18 12:06:11.268987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.540 qpair failed and we were unable to recover it. 
00:37:45.540 [2024-11-18 12:06:11.269144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.540 [2024-11-18 12:06:11.269182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.540 qpair failed and we were unable to recover it. 00:37:45.540 [2024-11-18 12:06:11.269337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.540 [2024-11-18 12:06:11.269373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.540 qpair failed and we were unable to recover it. 00:37:45.540 [2024-11-18 12:06:11.269528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.540 [2024-11-18 12:06:11.269577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.540 qpair failed and we were unable to recover it. 00:37:45.540 [2024-11-18 12:06:11.269725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.540 [2024-11-18 12:06:11.269780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.540 qpair failed and we were unable to recover it. 00:37:45.540 [2024-11-18 12:06:11.269954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.540 [2024-11-18 12:06:11.269992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.540 qpair failed and we were unable to recover it. 
00:37:45.540 [2024-11-18 12:06:11.270172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.540 [2024-11-18 12:06:11.270210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.540 qpair failed and we were unable to recover it. 00:37:45.540 [2024-11-18 12:06:11.270380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.540 [2024-11-18 12:06:11.270427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.540 qpair failed and we were unable to recover it. 00:37:45.540 [2024-11-18 12:06:11.270575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.540 [2024-11-18 12:06:11.270611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.540 qpair failed and we were unable to recover it. 00:37:45.540 [2024-11-18 12:06:11.270771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.540 [2024-11-18 12:06:11.270809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.540 qpair failed and we were unable to recover it. 00:37:45.540 [2024-11-18 12:06:11.270993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.540 [2024-11-18 12:06:11.271031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.540 qpair failed and we were unable to recover it. 
00:37:45.540 [2024-11-18 12:06:11.271142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.540 [2024-11-18 12:06:11.271180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.540 qpair failed and we were unable to recover it. 00:37:45.540 [2024-11-18 12:06:11.271357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.540 [2024-11-18 12:06:11.271413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.540 qpair failed and we were unable to recover it. 00:37:45.540 [2024-11-18 12:06:11.271558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.540 [2024-11-18 12:06:11.271598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.540 qpair failed and we were unable to recover it. 00:37:45.540 [2024-11-18 12:06:11.271774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.540 [2024-11-18 12:06:11.271838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.540 qpair failed and we were unable to recover it. 00:37:45.540 [2024-11-18 12:06:11.272011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.540 [2024-11-18 12:06:11.272051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.540 qpair failed and we were unable to recover it. 
00:37:45.540 [2024-11-18 12:06:11.272239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.540 [2024-11-18 12:06:11.272278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.540 qpair failed and we were unable to recover it. 00:37:45.540 [2024-11-18 12:06:11.272438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.540 [2024-11-18 12:06:11.272474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.540 qpair failed and we were unable to recover it. 00:37:45.540 [2024-11-18 12:06:11.272589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.540 [2024-11-18 12:06:11.272625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.540 qpair failed and we were unable to recover it. 00:37:45.540 [2024-11-18 12:06:11.272767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.540 [2024-11-18 12:06:11.272802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.540 qpair failed and we were unable to recover it. 00:37:45.540 [2024-11-18 12:06:11.272991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.540 [2024-11-18 12:06:11.273028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.540 qpair failed and we were unable to recover it. 
00:37:45.540 [2024-11-18 12:06:11.273168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.540 [2024-11-18 12:06:11.273206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.540 qpair failed and we were unable to recover it. 00:37:45.540 [2024-11-18 12:06:11.273357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.541 [2024-11-18 12:06:11.273395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.541 qpair failed and we were unable to recover it. 00:37:45.541 [2024-11-18 12:06:11.273556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.541 [2024-11-18 12:06:11.273605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.541 qpair failed and we were unable to recover it. 00:37:45.541 [2024-11-18 12:06:11.273772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.541 [2024-11-18 12:06:11.273811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.541 qpair failed and we were unable to recover it. 00:37:45.541 [2024-11-18 12:06:11.273974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.541 [2024-11-18 12:06:11.274013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.541 qpair failed and we were unable to recover it. 
00:37:45.541 [2024-11-18 12:06:11.274154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.541 [2024-11-18 12:06:11.274192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.541 qpair failed and we were unable to recover it. 00:37:45.541 [2024-11-18 12:06:11.274314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.541 [2024-11-18 12:06:11.274351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.541 qpair failed and we were unable to recover it. 00:37:45.541 [2024-11-18 12:06:11.274547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.541 [2024-11-18 12:06:11.274582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.541 qpair failed and we were unable to recover it. 00:37:45.541 [2024-11-18 12:06:11.274746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.541 [2024-11-18 12:06:11.274796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.541 qpair failed and we were unable to recover it. 00:37:45.541 [2024-11-18 12:06:11.274965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.541 [2024-11-18 12:06:11.275002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.541 qpair failed and we were unable to recover it. 
00:37:45.541 [2024-11-18 12:06:11.275177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.541 [2024-11-18 12:06:11.275215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.541 qpair failed and we were unable to recover it. 00:37:45.541 [2024-11-18 12:06:11.275383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.541 [2024-11-18 12:06:11.275419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.541 qpair failed and we were unable to recover it. 00:37:45.541 [2024-11-18 12:06:11.275557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.541 [2024-11-18 12:06:11.275591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.541 qpair failed and we were unable to recover it. 00:37:45.541 [2024-11-18 12:06:11.275694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.541 [2024-11-18 12:06:11.275729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.541 qpair failed and we were unable to recover it. 00:37:45.541 [2024-11-18 12:06:11.275915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.541 [2024-11-18 12:06:11.275968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.541 qpair failed and we were unable to recover it. 
00:37:45.541 [2024-11-18 12:06:11.276108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.541 [2024-11-18 12:06:11.276160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.541 qpair failed and we were unable to recover it. 00:37:45.541 [2024-11-18 12:06:11.276307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.541 [2024-11-18 12:06:11.276345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.541 qpair failed and we were unable to recover it. 00:37:45.541 [2024-11-18 12:06:11.276461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.541 [2024-11-18 12:06:11.276506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.541 qpair failed and we were unable to recover it. 00:37:45.541 [2024-11-18 12:06:11.276632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.541 [2024-11-18 12:06:11.276666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.541 qpair failed and we were unable to recover it. 00:37:45.541 [2024-11-18 12:06:11.276881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.541 [2024-11-18 12:06:11.276933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.541 qpair failed and we were unable to recover it. 
00:37:45.541 [2024-11-18 12:06:11.277093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.541 [2024-11-18 12:06:11.277130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.541 qpair failed and we were unable to recover it.
00:37:45.541 [2024-11-18 12:06:11.277252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.541 [2024-11-18 12:06:11.277305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.541 qpair failed and we were unable to recover it.
00:37:45.541 [2024-11-18 12:06:11.277436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.541 [2024-11-18 12:06:11.277472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.541 qpair failed and we were unable to recover it.
00:37:45.541 [2024-11-18 12:06:11.277665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.541 [2024-11-18 12:06:11.277699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.541 qpair failed and we were unable to recover it.
00:37:45.541 [2024-11-18 12:06:11.277854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.541 [2024-11-18 12:06:11.277891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.541 qpair failed and we were unable to recover it.
00:37:45.541 [2024-11-18 12:06:11.278070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.541 [2024-11-18 12:06:11.278107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.541 qpair failed and we were unable to recover it.
00:37:45.541 [2024-11-18 12:06:11.278244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.541 [2024-11-18 12:06:11.278281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.541 qpair failed and we were unable to recover it.
00:37:45.541 [2024-11-18 12:06:11.278451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.541 [2024-11-18 12:06:11.278509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.541 qpair failed and we were unable to recover it.
00:37:45.541 [2024-11-18 12:06:11.278688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.541 [2024-11-18 12:06:11.278726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.541 qpair failed and we were unable to recover it.
00:37:45.541 [2024-11-18 12:06:11.278879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.541 [2024-11-18 12:06:11.278932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.541 qpair failed and we were unable to recover it.
00:37:45.541 [2024-11-18 12:06:11.279127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.541 [2024-11-18 12:06:11.279180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.541 qpair failed and we were unable to recover it.
00:37:45.541 [2024-11-18 12:06:11.279321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.541 [2024-11-18 12:06:11.279357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.541 qpair failed and we were unable to recover it.
00:37:45.541 [2024-11-18 12:06:11.279519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.541 [2024-11-18 12:06:11.279560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.541 qpair failed and we were unable to recover it.
00:37:45.541 [2024-11-18 12:06:11.279690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.541 [2024-11-18 12:06:11.279725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.541 qpair failed and we were unable to recover it.
00:37:45.541 [2024-11-18 12:06:11.279834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.541 [2024-11-18 12:06:11.279868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.541 qpair failed and we were unable to recover it.
00:37:45.541 [2024-11-18 12:06:11.280006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.541 [2024-11-18 12:06:11.280041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.541 qpair failed and we were unable to recover it.
00:37:45.541 [2024-11-18 12:06:11.280194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.541 [2024-11-18 12:06:11.280231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.541 qpair failed and we were unable to recover it.
00:37:45.541 [2024-11-18 12:06:11.280375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.541 [2024-11-18 12:06:11.280413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.541 qpair failed and we were unable to recover it.
00:37:45.541 [2024-11-18 12:06:11.280565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.541 [2024-11-18 12:06:11.280601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.541 qpair failed and we were unable to recover it.
00:37:45.542 [2024-11-18 12:06:11.280758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.542 [2024-11-18 12:06:11.280812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.542 qpair failed and we were unable to recover it.
00:37:45.542 [2024-11-18 12:06:11.281004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.542 [2024-11-18 12:06:11.281056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.542 qpair failed and we were unable to recover it.
00:37:45.542 [2024-11-18 12:06:11.281230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.542 [2024-11-18 12:06:11.281283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.542 qpair failed and we were unable to recover it.
00:37:45.542 [2024-11-18 12:06:11.281437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.542 [2024-11-18 12:06:11.281475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.542 qpair failed and we were unable to recover it.
00:37:45.542 [2024-11-18 12:06:11.281611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.542 [2024-11-18 12:06:11.281646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.542 qpair failed and we were unable to recover it.
00:37:45.542 [2024-11-18 12:06:11.281806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.542 [2024-11-18 12:06:11.281844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.542 qpair failed and we were unable to recover it.
00:37:45.542 [2024-11-18 12:06:11.281987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.542 [2024-11-18 12:06:11.282024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.542 qpair failed and we were unable to recover it.
00:37:45.542 [2024-11-18 12:06:11.282203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.542 [2024-11-18 12:06:11.282241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.542 qpair failed and we were unable to recover it.
00:37:45.542 [2024-11-18 12:06:11.282429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.542 [2024-11-18 12:06:11.282465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.542 qpair failed and we were unable to recover it.
00:37:45.542 [2024-11-18 12:06:11.282588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.542 [2024-11-18 12:06:11.282636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.542 qpair failed and we were unable to recover it.
00:37:45.542 [2024-11-18 12:06:11.282787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.542 [2024-11-18 12:06:11.282823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.542 qpair failed and we were unable to recover it.
00:37:45.542 [2024-11-18 12:06:11.283002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.542 [2024-11-18 12:06:11.283036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.542 qpair failed and we were unable to recover it.
00:37:45.542 [2024-11-18 12:06:11.283193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.542 [2024-11-18 12:06:11.283231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.542 qpair failed and we were unable to recover it.
00:37:45.542 [2024-11-18 12:06:11.283382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.542 [2024-11-18 12:06:11.283420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.542 qpair failed and we were unable to recover it.
00:37:45.542 [2024-11-18 12:06:11.283567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.542 [2024-11-18 12:06:11.283602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.542 qpair failed and we were unable to recover it.
00:37:45.542 [2024-11-18 12:06:11.283748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.542 [2024-11-18 12:06:11.283787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.542 qpair failed and we were unable to recover it.
00:37:45.542 [2024-11-18 12:06:11.283946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.542 [2024-11-18 12:06:11.283986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.542 qpair failed and we were unable to recover it.
00:37:45.542 [2024-11-18 12:06:11.284133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.542 [2024-11-18 12:06:11.284172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.542 qpair failed and we were unable to recover it.
00:37:45.542 [2024-11-18 12:06:11.284343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.542 [2024-11-18 12:06:11.284381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.542 qpair failed and we were unable to recover it.
00:37:45.542 [2024-11-18 12:06:11.284509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.542 [2024-11-18 12:06:11.284562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.542 qpair failed and we were unable to recover it.
00:37:45.542 [2024-11-18 12:06:11.284700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.542 [2024-11-18 12:06:11.284736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.542 qpair failed and we were unable to recover it.
00:37:45.542 [2024-11-18 12:06:11.284887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.542 [2024-11-18 12:06:11.284942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.542 qpair failed and we were unable to recover it.
00:37:45.542 [2024-11-18 12:06:11.285061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.542 [2024-11-18 12:06:11.285098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.542 qpair failed and we were unable to recover it.
00:37:45.542 [2024-11-18 12:06:11.285216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.542 [2024-11-18 12:06:11.285254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.542 qpair failed and we were unable to recover it.
00:37:45.542 [2024-11-18 12:06:11.285370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.542 [2024-11-18 12:06:11.285408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.542 qpair failed and we were unable to recover it.
00:37:45.542 [2024-11-18 12:06:11.285545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.542 [2024-11-18 12:06:11.285579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.542 qpair failed and we were unable to recover it.
00:37:45.542 [2024-11-18 12:06:11.285727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.542 [2024-11-18 12:06:11.285762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.542 qpair failed and we were unable to recover it.
00:37:45.542 [2024-11-18 12:06:11.285924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.542 [2024-11-18 12:06:11.285962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.542 qpair failed and we were unable to recover it.
00:37:45.542 [2024-11-18 12:06:11.286110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.542 [2024-11-18 12:06:11.286147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.542 qpair failed and we were unable to recover it.
00:37:45.542 [2024-11-18 12:06:11.286295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.542 [2024-11-18 12:06:11.286334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.542 qpair failed and we were unable to recover it.
00:37:45.542 [2024-11-18 12:06:11.286508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.542 [2024-11-18 12:06:11.286566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.542 qpair failed and we were unable to recover it.
00:37:45.542 [2024-11-18 12:06:11.286694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.542 [2024-11-18 12:06:11.286732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.542 qpair failed and we were unable to recover it.
00:37:45.542 [2024-11-18 12:06:11.286887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.542 [2024-11-18 12:06:11.286941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.542 qpair failed and we were unable to recover it.
00:37:45.542 [2024-11-18 12:06:11.287092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.542 [2024-11-18 12:06:11.287150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.542 qpair failed and we were unable to recover it.
00:37:45.542 [2024-11-18 12:06:11.287326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.542 [2024-11-18 12:06:11.287362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.542 qpair failed and we were unable to recover it.
00:37:45.542 [2024-11-18 12:06:11.287504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.542 [2024-11-18 12:06:11.287540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.542 qpair failed and we were unable to recover it.
00:37:45.542 [2024-11-18 12:06:11.287646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.542 [2024-11-18 12:06:11.287680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.542 qpair failed and we were unable to recover it.
00:37:45.542 [2024-11-18 12:06:11.287784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.542 [2024-11-18 12:06:11.287837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.543 qpair failed and we were unable to recover it.
00:37:45.543 [2024-11-18 12:06:11.287982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.543 [2024-11-18 12:06:11.288019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.543 qpair failed and we were unable to recover it.
00:37:45.543 [2024-11-18 12:06:11.288193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.543 [2024-11-18 12:06:11.288231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.543 qpair failed and we were unable to recover it.
00:37:45.543 [2024-11-18 12:06:11.288356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.543 [2024-11-18 12:06:11.288394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.543 qpair failed and we were unable to recover it.
00:37:45.543 [2024-11-18 12:06:11.288559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.543 [2024-11-18 12:06:11.288596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.543 qpair failed and we were unable to recover it.
00:37:45.543 [2024-11-18 12:06:11.288744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.543 [2024-11-18 12:06:11.288798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.543 qpair failed and we were unable to recover it.
00:37:45.543 [2024-11-18 12:06:11.288962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.543 [2024-11-18 12:06:11.289002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.543 qpair failed and we were unable to recover it.
00:37:45.543 [2024-11-18 12:06:11.289113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.543 [2024-11-18 12:06:11.289151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.543 qpair failed and we were unable to recover it.
00:37:45.543 [2024-11-18 12:06:11.289362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.543 [2024-11-18 12:06:11.289415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.543 qpair failed and we were unable to recover it.
00:37:45.543 [2024-11-18 12:06:11.289553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.543 [2024-11-18 12:06:11.289589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.543 qpair failed and we were unable to recover it.
00:37:45.543 [2024-11-18 12:06:11.289774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.543 [2024-11-18 12:06:11.289827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.543 qpair failed and we were unable to recover it.
00:37:45.543 [2024-11-18 12:06:11.289984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.543 [2024-11-18 12:06:11.290037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.543 qpair failed and we were unable to recover it.
00:37:45.543 [2024-11-18 12:06:11.290228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.543 [2024-11-18 12:06:11.290280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.543 qpair failed and we were unable to recover it.
00:37:45.543 [2024-11-18 12:06:11.290386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.543 [2024-11-18 12:06:11.290421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.543 qpair failed and we were unable to recover it.
00:37:45.543 [2024-11-18 12:06:11.290554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.543 [2024-11-18 12:06:11.290609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.543 qpair failed and we were unable to recover it.
00:37:45.543 [2024-11-18 12:06:11.290724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.543 [2024-11-18 12:06:11.290762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.543 qpair failed and we were unable to recover it.
00:37:45.543 [2024-11-18 12:06:11.290939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.543 [2024-11-18 12:06:11.290977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.543 qpair failed and we were unable to recover it.
00:37:45.543 [2024-11-18 12:06:11.291114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.543 [2024-11-18 12:06:11.291152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.543 qpair failed and we were unable to recover it.
00:37:45.543 [2024-11-18 12:06:11.291293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.543 [2024-11-18 12:06:11.291344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.543 qpair failed and we were unable to recover it.
00:37:45.543 [2024-11-18 12:06:11.291448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.543 [2024-11-18 12:06:11.291481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.543 qpair failed and we were unable to recover it.
00:37:45.543 [2024-11-18 12:06:11.291625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.543 [2024-11-18 12:06:11.291660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.543 qpair failed and we were unable to recover it.
00:37:45.543 [2024-11-18 12:06:11.291781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.543 [2024-11-18 12:06:11.291819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.543 qpair failed and we were unable to recover it.
00:37:45.543 [2024-11-18 12:06:11.291935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.543 [2024-11-18 12:06:11.291972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.543 qpair failed and we were unable to recover it.
00:37:45.543 [2024-11-18 12:06:11.292124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.543 [2024-11-18 12:06:11.292164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.543 qpair failed and we were unable to recover it.
00:37:45.543 [2024-11-18 12:06:11.292309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.543 [2024-11-18 12:06:11.292347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.543 qpair failed and we were unable to recover it.
00:37:45.543 [2024-11-18 12:06:11.292509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.543 [2024-11-18 12:06:11.292546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.543 qpair failed and we were unable to recover it.
00:37:45.543 [2024-11-18 12:06:11.292727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.543 [2024-11-18 12:06:11.292783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.543 qpair failed and we were unable to recover it.
00:37:45.543 [2024-11-18 12:06:11.292942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.543 [2024-11-18 12:06:11.292998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.543 qpair failed and we were unable to recover it.
00:37:45.543 [2024-11-18 12:06:11.293154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.543 [2024-11-18 12:06:11.293206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.543 qpair failed and we were unable to recover it.
00:37:45.543 [2024-11-18 12:06:11.293365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.543 [2024-11-18 12:06:11.293399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.543 qpair failed and we were unable to recover it.
00:37:45.543 [2024-11-18 12:06:11.293554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.543 [2024-11-18 12:06:11.293607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.543 qpair failed and we were unable to recover it.
00:37:45.543 [2024-11-18 12:06:11.293740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.543 [2024-11-18 12:06:11.293775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.543 qpair failed and we were unable to recover it.
00:37:45.543 [2024-11-18 12:06:11.293914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.543 [2024-11-18 12:06:11.293950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.543 qpair failed and we were unable to recover it.
00:37:45.543 [2024-11-18 12:06:11.294062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.543 [2024-11-18 12:06:11.294096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.544 qpair failed and we were unable to recover it.
00:37:45.544 [2024-11-18 12:06:11.294246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.544 [2024-11-18 12:06:11.294293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.544 qpair failed and we were unable to recover it.
00:37:45.544 [2024-11-18 12:06:11.294412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.544 [2024-11-18 12:06:11.294448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.544 qpair failed and we were unable to recover it.
00:37:45.544 [2024-11-18 12:06:11.294622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.544 [2024-11-18 12:06:11.294663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.544 qpair failed and we were unable to recover it.
00:37:45.544 [2024-11-18 12:06:11.294821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.544 [2024-11-18 12:06:11.294859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.544 qpair failed and we were unable to recover it.
00:37:45.544 [2024-11-18 12:06:11.295001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.544 [2024-11-18 12:06:11.295038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.544 qpair failed and we were unable to recover it.
00:37:45.544 [2024-11-18 12:06:11.295233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.544 [2024-11-18 12:06:11.295272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.544 qpair failed and we were unable to recover it.
00:37:45.544 [2024-11-18 12:06:11.295415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.544 [2024-11-18 12:06:11.295454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.544 qpair failed and we were unable to recover it.
00:37:45.544 [2024-11-18 12:06:11.295622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.544 [2024-11-18 12:06:11.295656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.544 qpair failed and we were unable to recover it.
00:37:45.544 [2024-11-18 12:06:11.295807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.544 [2024-11-18 12:06:11.295845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.544 qpair failed and we were unable to recover it.
00:37:45.544 [2024-11-18 12:06:11.295994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.544 [2024-11-18 12:06:11.296032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.544 qpair failed and we were unable to recover it.
00:37:45.544 [2024-11-18 12:06:11.296173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.544 [2024-11-18 12:06:11.296210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.544 qpair failed and we were unable to recover it.
00:37:45.544 [2024-11-18 12:06:11.296360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.544 [2024-11-18 12:06:11.296398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.544 qpair failed and we were unable to recover it.
00:37:45.544 [2024-11-18 12:06:11.296594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.544 [2024-11-18 12:06:11.296630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.544 qpair failed and we were unable to recover it.
00:37:45.544 [2024-11-18 12:06:11.296814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.544 [2024-11-18 12:06:11.296878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.544 qpair failed and we were unable to recover it.
00:37:45.544 [2024-11-18 12:06:11.297046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.544 [2024-11-18 12:06:11.297111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.544 qpair failed and we were unable to recover it.
00:37:45.544 [2024-11-18 12:06:11.297228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.544 [2024-11-18 12:06:11.297267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.544 qpair failed and we were unable to recover it.
00:37:45.544 [2024-11-18 12:06:11.297429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.544 [2024-11-18 12:06:11.297464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.544 qpair failed and we were unable to recover it.
00:37:45.544 [2024-11-18 12:06:11.297632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.544 [2024-11-18 12:06:11.297687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.544 qpair failed and we were unable to recover it.
00:37:45.544 [2024-11-18 12:06:11.297842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.544 [2024-11-18 12:06:11.297894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.544 qpair failed and we were unable to recover it.
00:37:45.544 [2024-11-18 12:06:11.298042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.544 [2024-11-18 12:06:11.298097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.544 qpair failed and we were unable to recover it.
00:37:45.544 [2024-11-18 12:06:11.298241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.544 [2024-11-18 12:06:11.298276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.544 qpair failed and we were unable to recover it.
00:37:45.544 [2024-11-18 12:06:11.298436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.544 [2024-11-18 12:06:11.298470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.544 qpair failed and we were unable to recover it.
00:37:45.544 [2024-11-18 12:06:11.298652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.544 [2024-11-18 12:06:11.298717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.544 qpair failed and we were unable to recover it.
00:37:45.544 [2024-11-18 12:06:11.298875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.544 [2024-11-18 12:06:11.298916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.544 qpair failed and we were unable to recover it.
00:37:45.544 [2024-11-18 12:06:11.299054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.544 [2024-11-18 12:06:11.299089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.544 qpair failed and we were unable to recover it.
00:37:45.544 [2024-11-18 12:06:11.299260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.544 [2024-11-18 12:06:11.299294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.544 qpair failed and we were unable to recover it. 00:37:45.544 [2024-11-18 12:06:11.299437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.544 [2024-11-18 12:06:11.299471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.544 qpair failed and we were unable to recover it. 00:37:45.544 [2024-11-18 12:06:11.299604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.544 [2024-11-18 12:06:11.299653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.544 qpair failed and we were unable to recover it. 00:37:45.544 [2024-11-18 12:06:11.299818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.544 [2024-11-18 12:06:11.299873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.544 qpair failed and we were unable to recover it. 00:37:45.544 [2024-11-18 12:06:11.300065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.544 [2024-11-18 12:06:11.300117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.544 qpair failed and we were unable to recover it. 
00:37:45.544 [2024-11-18 12:06:11.300229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.544 [2024-11-18 12:06:11.300263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.544 qpair failed and we were unable to recover it. 00:37:45.544 [2024-11-18 12:06:11.300376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.544 [2024-11-18 12:06:11.300410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.544 qpair failed and we were unable to recover it. 00:37:45.544 [2024-11-18 12:06:11.300546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.544 [2024-11-18 12:06:11.300585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.544 qpair failed and we were unable to recover it. 00:37:45.544 [2024-11-18 12:06:11.300776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.544 [2024-11-18 12:06:11.300829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.544 qpair failed and we were unable to recover it. 00:37:45.544 [2024-11-18 12:06:11.301014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.544 [2024-11-18 12:06:11.301054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.544 qpair failed and we were unable to recover it. 
00:37:45.544 [2024-11-18 12:06:11.301210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.544 [2024-11-18 12:06:11.301249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.544 qpair failed and we were unable to recover it. 00:37:45.544 [2024-11-18 12:06:11.301431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.544 [2024-11-18 12:06:11.301466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.544 qpair failed and we were unable to recover it. 00:37:45.544 [2024-11-18 12:06:11.301598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.545 [2024-11-18 12:06:11.301646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.545 qpair failed and we were unable to recover it. 00:37:45.545 [2024-11-18 12:06:11.301800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.545 [2024-11-18 12:06:11.301845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.545 qpair failed and we were unable to recover it. 00:37:45.545 [2024-11-18 12:06:11.301975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.545 [2024-11-18 12:06:11.302014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.545 qpair failed and we were unable to recover it. 
00:37:45.545 [2024-11-18 12:06:11.302190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.545 [2024-11-18 12:06:11.302228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.545 qpair failed and we were unable to recover it. 00:37:45.545 [2024-11-18 12:06:11.302388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.545 [2024-11-18 12:06:11.302422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.545 qpair failed and we were unable to recover it. 00:37:45.545 [2024-11-18 12:06:11.302526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.545 [2024-11-18 12:06:11.302566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.545 qpair failed and we were unable to recover it. 00:37:45.545 [2024-11-18 12:06:11.302704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.545 [2024-11-18 12:06:11.302739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.545 qpair failed and we were unable to recover it. 00:37:45.545 [2024-11-18 12:06:11.302903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.545 [2024-11-18 12:06:11.302941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.545 qpair failed and we were unable to recover it. 
00:37:45.545 [2024-11-18 12:06:11.303139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.545 [2024-11-18 12:06:11.303177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.545 qpair failed and we were unable to recover it. 00:37:45.545 [2024-11-18 12:06:11.303318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.545 [2024-11-18 12:06:11.303356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.545 qpair failed and we were unable to recover it. 00:37:45.545 [2024-11-18 12:06:11.303532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.545 [2024-11-18 12:06:11.303567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.545 qpair failed and we were unable to recover it. 00:37:45.545 [2024-11-18 12:06:11.303730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.545 [2024-11-18 12:06:11.303763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.545 qpair failed and we were unable to recover it. 00:37:45.545 [2024-11-18 12:06:11.303867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.545 [2024-11-18 12:06:11.303920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.545 qpair failed and we were unable to recover it. 
00:37:45.545 [2024-11-18 12:06:11.304097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.545 [2024-11-18 12:06:11.304135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.545 qpair failed and we were unable to recover it. 00:37:45.545 [2024-11-18 12:06:11.304273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.545 [2024-11-18 12:06:11.304310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.545 qpair failed and we were unable to recover it. 00:37:45.545 [2024-11-18 12:06:11.304507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.545 [2024-11-18 12:06:11.304572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.545 qpair failed and we were unable to recover it. 00:37:45.545 [2024-11-18 12:06:11.304682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.545 [2024-11-18 12:06:11.304718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.545 qpair failed and we were unable to recover it. 00:37:45.545 [2024-11-18 12:06:11.304872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.545 [2024-11-18 12:06:11.304911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.545 qpair failed and we were unable to recover it. 
00:37:45.545 [2024-11-18 12:06:11.305051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.545 [2024-11-18 12:06:11.305088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.545 qpair failed and we were unable to recover it. 00:37:45.545 [2024-11-18 12:06:11.305274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.545 [2024-11-18 12:06:11.305312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.545 qpair failed and we were unable to recover it. 00:37:45.545 [2024-11-18 12:06:11.305476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.545 [2024-11-18 12:06:11.305533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.545 qpair failed and we were unable to recover it. 00:37:45.545 [2024-11-18 12:06:11.305702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.545 [2024-11-18 12:06:11.305738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.545 qpair failed and we were unable to recover it. 00:37:45.545 [2024-11-18 12:06:11.305896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.545 [2024-11-18 12:06:11.305935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.545 qpair failed and we were unable to recover it. 
00:37:45.545 [2024-11-18 12:06:11.306056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.545 [2024-11-18 12:06:11.306093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.545 qpair failed and we were unable to recover it. 00:37:45.545 [2024-11-18 12:06:11.306229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.545 [2024-11-18 12:06:11.306282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.545 qpair failed and we were unable to recover it. 00:37:45.545 [2024-11-18 12:06:11.306427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.545 [2024-11-18 12:06:11.306465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.545 qpair failed and we were unable to recover it. 00:37:45.545 [2024-11-18 12:06:11.306624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.545 [2024-11-18 12:06:11.306672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.545 qpair failed and we were unable to recover it. 00:37:45.545 [2024-11-18 12:06:11.306842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.545 [2024-11-18 12:06:11.306882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.545 qpair failed and we were unable to recover it. 
00:37:45.545 [2024-11-18 12:06:11.307034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.545 [2024-11-18 12:06:11.307072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.545 qpair failed and we were unable to recover it. 00:37:45.545 [2024-11-18 12:06:11.307217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.545 [2024-11-18 12:06:11.307255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.545 qpair failed and we were unable to recover it. 00:37:45.545 [2024-11-18 12:06:11.307455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.545 [2024-11-18 12:06:11.307514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.545 qpair failed and we were unable to recover it. 00:37:45.545 [2024-11-18 12:06:11.307646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.545 [2024-11-18 12:06:11.307683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.545 qpair failed and we were unable to recover it. 00:37:45.545 [2024-11-18 12:06:11.307789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.545 [2024-11-18 12:06:11.307843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.545 qpair failed and we were unable to recover it. 
00:37:45.545 [2024-11-18 12:06:11.307998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.545 [2024-11-18 12:06:11.308037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.545 qpair failed and we were unable to recover it. 00:37:45.545 [2024-11-18 12:06:11.308190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.545 [2024-11-18 12:06:11.308228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.545 qpair failed and we were unable to recover it. 00:37:45.545 [2024-11-18 12:06:11.308416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.545 [2024-11-18 12:06:11.308454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.545 qpair failed and we were unable to recover it. 00:37:45.545 [2024-11-18 12:06:11.308626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.545 [2024-11-18 12:06:11.308675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.545 qpair failed and we were unable to recover it. 00:37:45.545 [2024-11-18 12:06:11.308861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.545 [2024-11-18 12:06:11.308902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.545 qpair failed and we were unable to recover it. 
00:37:45.546 [2024-11-18 12:06:11.309072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.546 [2024-11-18 12:06:11.309125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.546 qpair failed and we were unable to recover it. 00:37:45.546 [2024-11-18 12:06:11.309301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.546 [2024-11-18 12:06:11.309340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.546 qpair failed and we were unable to recover it. 00:37:45.546 [2024-11-18 12:06:11.309523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.546 [2024-11-18 12:06:11.309575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.546 qpair failed and we were unable to recover it. 00:37:45.546 [2024-11-18 12:06:11.309707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.546 [2024-11-18 12:06:11.309741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.546 qpair failed and we were unable to recover it. 00:37:45.546 [2024-11-18 12:06:11.309920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.546 [2024-11-18 12:06:11.309958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.546 qpair failed and we were unable to recover it. 
00:37:45.546 [2024-11-18 12:06:11.310084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.546 [2024-11-18 12:06:11.310122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.546 qpair failed and we were unable to recover it. 00:37:45.546 [2024-11-18 12:06:11.310356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.546 [2024-11-18 12:06:11.310405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.546 qpair failed and we were unable to recover it. 00:37:45.546 [2024-11-18 12:06:11.310573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.546 [2024-11-18 12:06:11.310615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.546 qpair failed and we were unable to recover it. 00:37:45.546 [2024-11-18 12:06:11.310729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.546 [2024-11-18 12:06:11.310782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.546 qpair failed and we were unable to recover it. 00:37:45.546 [2024-11-18 12:06:11.310976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.546 [2024-11-18 12:06:11.311013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.546 qpair failed and we were unable to recover it. 
00:37:45.546 [2024-11-18 12:06:11.311185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.546 [2024-11-18 12:06:11.311223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.546 qpair failed and we were unable to recover it. 00:37:45.546 [2024-11-18 12:06:11.311368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.546 [2024-11-18 12:06:11.311405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.546 qpair failed and we were unable to recover it. 00:37:45.546 [2024-11-18 12:06:11.311571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.546 [2024-11-18 12:06:11.311606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.546 qpair failed and we were unable to recover it. 00:37:45.546 [2024-11-18 12:06:11.311764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.546 [2024-11-18 12:06:11.311821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.546 qpair failed and we were unable to recover it. 00:37:45.546 [2024-11-18 12:06:11.311982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.546 [2024-11-18 12:06:11.312037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.546 qpair failed and we were unable to recover it. 
00:37:45.546 [2024-11-18 12:06:11.312193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.546 [2024-11-18 12:06:11.312245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.546 qpair failed and we were unable to recover it. 00:37:45.546 [2024-11-18 12:06:11.312385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.546 [2024-11-18 12:06:11.312419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.546 qpair failed and we were unable to recover it. 00:37:45.546 [2024-11-18 12:06:11.312605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.546 [2024-11-18 12:06:11.312663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.546 qpair failed and we were unable to recover it. 00:37:45.546 [2024-11-18 12:06:11.312820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.546 [2024-11-18 12:06:11.312875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.546 qpair failed and we were unable to recover it. 00:37:45.546 [2024-11-18 12:06:11.313012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.546 [2024-11-18 12:06:11.313048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.546 qpair failed and we were unable to recover it. 
00:37:45.546 [2024-11-18 12:06:11.313171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.546 [2024-11-18 12:06:11.313206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.546 qpair failed and we were unable to recover it. 00:37:45.546 [2024-11-18 12:06:11.313389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.546 [2024-11-18 12:06:11.313437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.546 qpair failed and we were unable to recover it. 00:37:45.546 [2024-11-18 12:06:11.313566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.546 [2024-11-18 12:06:11.313601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.546 qpair failed and we were unable to recover it. 00:37:45.546 [2024-11-18 12:06:11.313757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.546 [2024-11-18 12:06:11.313810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.546 qpair failed and we were unable to recover it. 00:37:45.546 [2024-11-18 12:06:11.313955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.546 [2024-11-18 12:06:11.314007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.546 qpair failed and we were unable to recover it. 
00:37:45.546 [2024-11-18 12:06:11.314189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.546 [2024-11-18 12:06:11.314241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.546 qpair failed and we were unable to recover it. 00:37:45.546 [2024-11-18 12:06:11.314379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.546 [2024-11-18 12:06:11.314414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.546 qpair failed and we were unable to recover it. 00:37:45.546 [2024-11-18 12:06:11.314581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.546 [2024-11-18 12:06:11.314616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.546 qpair failed and we were unable to recover it. 00:37:45.546 [2024-11-18 12:06:11.314769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.546 [2024-11-18 12:06:11.314817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.546 qpair failed and we were unable to recover it. 00:37:45.546 [2024-11-18 12:06:11.314994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.546 [2024-11-18 12:06:11.315030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.546 qpair failed and we were unable to recover it. 
00:37:45.549 [2024-11-18 12:06:11.336184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.549 [2024-11-18 12:06:11.336236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.549 qpair failed and we were unable to recover it. 00:37:45.549 [2024-11-18 12:06:11.336380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.549 [2024-11-18 12:06:11.336416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.549 qpair failed and we were unable to recover it. 00:37:45.549 [2024-11-18 12:06:11.336599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.549 [2024-11-18 12:06:11.336653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.549 qpair failed and we were unable to recover it. 00:37:45.549 [2024-11-18 12:06:11.336787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.549 [2024-11-18 12:06:11.336828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.549 qpair failed and we were unable to recover it. 00:37:45.549 [2024-11-18 12:06:11.336975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.549 [2024-11-18 12:06:11.337013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.549 qpair failed and we were unable to recover it. 
00:37:45.549 [2024-11-18 12:06:11.337185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.549 [2024-11-18 12:06:11.337236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.549 qpair failed and we were unable to recover it. 00:37:45.549 [2024-11-18 12:06:11.337377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.549 [2024-11-18 12:06:11.337411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.549 qpair failed and we were unable to recover it. 00:37:45.549 [2024-11-18 12:06:11.337541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.549 [2024-11-18 12:06:11.337590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.549 qpair failed and we were unable to recover it. 00:37:45.549 [2024-11-18 12:06:11.337778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.549 [2024-11-18 12:06:11.337818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.549 qpair failed and we were unable to recover it. 00:37:45.549 [2024-11-18 12:06:11.337939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.549 [2024-11-18 12:06:11.337978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.549 qpair failed and we were unable to recover it. 
00:37:45.549 [2024-11-18 12:06:11.338154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.549 [2024-11-18 12:06:11.338191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.549 qpair failed and we were unable to recover it. 00:37:45.549 [2024-11-18 12:06:11.338353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.549 [2024-11-18 12:06:11.338389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.549 qpair failed and we were unable to recover it. 00:37:45.549 [2024-11-18 12:06:11.338541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.549 [2024-11-18 12:06:11.338598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.549 qpair failed and we were unable to recover it. 00:37:45.549 [2024-11-18 12:06:11.338757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.550 [2024-11-18 12:06:11.338797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.550 qpair failed and we were unable to recover it. 00:37:45.550 [2024-11-18 12:06:11.338944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.550 [2024-11-18 12:06:11.338987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.550 qpair failed and we were unable to recover it. 
00:37:45.550 [2024-11-18 12:06:11.339136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.550 [2024-11-18 12:06:11.339174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.550 qpair failed and we were unable to recover it. 00:37:45.550 [2024-11-18 12:06:11.339321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.550 [2024-11-18 12:06:11.339354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.550 qpair failed and we were unable to recover it. 00:37:45.550 [2024-11-18 12:06:11.339517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.550 [2024-11-18 12:06:11.339565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.550 qpair failed and we were unable to recover it. 00:37:45.550 [2024-11-18 12:06:11.339702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.550 [2024-11-18 12:06:11.339741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.550 qpair failed and we were unable to recover it. 00:37:45.550 [2024-11-18 12:06:11.339913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.550 [2024-11-18 12:06:11.339951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.550 qpair failed and we were unable to recover it. 
00:37:45.550 [2024-11-18 12:06:11.340095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.550 [2024-11-18 12:06:11.340133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.550 qpair failed and we were unable to recover it. 00:37:45.550 [2024-11-18 12:06:11.340312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.550 [2024-11-18 12:06:11.340350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.550 qpair failed and we were unable to recover it. 00:37:45.550 [2024-11-18 12:06:11.340540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.550 [2024-11-18 12:06:11.340577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.550 qpair failed and we were unable to recover it. 00:37:45.550 [2024-11-18 12:06:11.340708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.550 [2024-11-18 12:06:11.340747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.550 qpair failed and we were unable to recover it. 00:37:45.550 [2024-11-18 12:06:11.340875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.550 [2024-11-18 12:06:11.340912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.550 qpair failed and we were unable to recover it. 
00:37:45.550 [2024-11-18 12:06:11.341053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.550 [2024-11-18 12:06:11.341090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.550 qpair failed and we were unable to recover it. 00:37:45.550 [2024-11-18 12:06:11.341205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.550 [2024-11-18 12:06:11.341243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.550 qpair failed and we were unable to recover it. 00:37:45.550 [2024-11-18 12:06:11.341404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.550 [2024-11-18 12:06:11.341439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.550 qpair failed and we were unable to recover it. 00:37:45.550 [2024-11-18 12:06:11.341576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.550 [2024-11-18 12:06:11.341624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.550 qpair failed and we were unable to recover it. 00:37:45.550 [2024-11-18 12:06:11.341737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.550 [2024-11-18 12:06:11.341774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.550 qpair failed and we were unable to recover it. 
00:37:45.550 [2024-11-18 12:06:11.341933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.550 [2024-11-18 12:06:11.341987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.550 qpair failed and we were unable to recover it. 00:37:45.550 [2024-11-18 12:06:11.342174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.550 [2024-11-18 12:06:11.342225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.550 qpair failed and we were unable to recover it. 00:37:45.550 [2024-11-18 12:06:11.342355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.550 [2024-11-18 12:06:11.342390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.550 qpair failed and we were unable to recover it. 00:37:45.550 [2024-11-18 12:06:11.342549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.550 [2024-11-18 12:06:11.342606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.550 qpair failed and we were unable to recover it. 00:37:45.550 [2024-11-18 12:06:11.342759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.550 [2024-11-18 12:06:11.342798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.550 qpair failed and we were unable to recover it. 
00:37:45.550 [2024-11-18 12:06:11.342944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.550 [2024-11-18 12:06:11.342986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.550 qpair failed and we were unable to recover it. 00:37:45.550 [2024-11-18 12:06:11.343144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.550 [2024-11-18 12:06:11.343182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.550 qpair failed and we were unable to recover it. 00:37:45.550 [2024-11-18 12:06:11.343323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.550 [2024-11-18 12:06:11.343363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.550 qpair failed and we were unable to recover it. 00:37:45.550 [2024-11-18 12:06:11.343520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.550 [2024-11-18 12:06:11.343555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.550 qpair failed and we were unable to recover it. 00:37:45.550 [2024-11-18 12:06:11.343662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.550 [2024-11-18 12:06:11.343713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.550 qpair failed and we were unable to recover it. 
00:37:45.550 [2024-11-18 12:06:11.343866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.550 [2024-11-18 12:06:11.343903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.550 qpair failed and we were unable to recover it. 00:37:45.550 [2024-11-18 12:06:11.344054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.550 [2024-11-18 12:06:11.344091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.550 qpair failed and we were unable to recover it. 00:37:45.550 [2024-11-18 12:06:11.344253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.550 [2024-11-18 12:06:11.344290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.550 qpair failed and we were unable to recover it. 00:37:45.550 [2024-11-18 12:06:11.344443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.550 [2024-11-18 12:06:11.344483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.550 qpair failed and we were unable to recover it. 00:37:45.550 [2024-11-18 12:06:11.344679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.550 [2024-11-18 12:06:11.344715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.550 qpair failed and we were unable to recover it. 
00:37:45.550 [2024-11-18 12:06:11.344898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.550 [2024-11-18 12:06:11.344953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.550 qpair failed and we were unable to recover it. 00:37:45.550 [2024-11-18 12:06:11.345102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.550 [2024-11-18 12:06:11.345155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.550 qpair failed and we were unable to recover it. 00:37:45.550 [2024-11-18 12:06:11.345284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.550 [2024-11-18 12:06:11.345319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.550 qpair failed and we were unable to recover it. 00:37:45.550 [2024-11-18 12:06:11.345462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.550 [2024-11-18 12:06:11.345503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.550 qpair failed and we were unable to recover it. 00:37:45.550 [2024-11-18 12:06:11.345687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.550 [2024-11-18 12:06:11.345727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.550 qpair failed and we were unable to recover it. 
00:37:45.550 [2024-11-18 12:06:11.345850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.550 [2024-11-18 12:06:11.345888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.550 qpair failed and we were unable to recover it. 00:37:45.551 [2024-11-18 12:06:11.346031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.551 [2024-11-18 12:06:11.346068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.551 qpair failed and we were unable to recover it. 00:37:45.551 [2024-11-18 12:06:11.346235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.551 [2024-11-18 12:06:11.346273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.551 qpair failed and we were unable to recover it. 00:37:45.551 [2024-11-18 12:06:11.346444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.551 [2024-11-18 12:06:11.346482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.551 qpair failed and we were unable to recover it. 00:37:45.551 [2024-11-18 12:06:11.346627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.551 [2024-11-18 12:06:11.346668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.551 qpair failed and we were unable to recover it. 
00:37:45.551 [2024-11-18 12:06:11.346827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.551 [2024-11-18 12:06:11.346886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.551 qpair failed and we were unable to recover it. 00:37:45.551 [2024-11-18 12:06:11.347043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.551 [2024-11-18 12:06:11.347094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.551 qpair failed and we were unable to recover it. 00:37:45.551 [2024-11-18 12:06:11.347221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.551 [2024-11-18 12:06:11.347275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.551 qpair failed and we were unable to recover it. 00:37:45.551 [2024-11-18 12:06:11.347403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.551 [2024-11-18 12:06:11.347438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.551 qpair failed and we were unable to recover it. 00:37:45.551 [2024-11-18 12:06:11.347613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.551 [2024-11-18 12:06:11.347667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.551 qpair failed and we were unable to recover it. 
00:37:45.551 [2024-11-18 12:06:11.347859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.551 [2024-11-18 12:06:11.347900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.551 qpair failed and we were unable to recover it. 00:37:45.551 [2024-11-18 12:06:11.348041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.551 [2024-11-18 12:06:11.348079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.551 qpair failed and we were unable to recover it. 00:37:45.551 [2024-11-18 12:06:11.348231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.551 [2024-11-18 12:06:11.348308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.551 qpair failed and we were unable to recover it. 00:37:45.551 [2024-11-18 12:06:11.348526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.551 [2024-11-18 12:06:11.348578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.551 qpair failed and we were unable to recover it. 00:37:45.551 [2024-11-18 12:06:11.348735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.551 [2024-11-18 12:06:11.348774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.551 qpair failed and we were unable to recover it. 
00:37:45.551 [2024-11-18 12:06:11.348991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.551 [2024-11-18 12:06:11.349028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.551 qpair failed and we were unable to recover it. 00:37:45.551 [2024-11-18 12:06:11.349177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.551 [2024-11-18 12:06:11.349214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.551 qpair failed and we were unable to recover it. 00:37:45.551 [2024-11-18 12:06:11.349322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.551 [2024-11-18 12:06:11.349360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.551 qpair failed and we were unable to recover it. 00:37:45.551 [2024-11-18 12:06:11.349536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.551 [2024-11-18 12:06:11.349571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.551 qpair failed and we were unable to recover it. 00:37:45.551 [2024-11-18 12:06:11.349702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.551 [2024-11-18 12:06:11.349754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.551 qpair failed and we were unable to recover it. 
00:37:45.551 [2024-11-18 12:06:11.349910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.551 [2024-11-18 12:06:11.349962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.551 qpair failed and we were unable to recover it. 00:37:45.551 [2024-11-18 12:06:11.350147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.551 [2024-11-18 12:06:11.350198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.551 qpair failed and we were unable to recover it. 00:37:45.551 [2024-11-18 12:06:11.350308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.551 [2024-11-18 12:06:11.350342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.551 qpair failed and we were unable to recover it. 00:37:45.551 [2024-11-18 12:06:11.350459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.551 [2024-11-18 12:06:11.350514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.551 qpair failed and we were unable to recover it. 00:37:45.551 [2024-11-18 12:06:11.350655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.551 [2024-11-18 12:06:11.350691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.551 qpair failed and we were unable to recover it. 
00:37:45.551 [2024-11-18 12:06:11.350864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.551 [2024-11-18 12:06:11.350915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.551 qpair failed and we were unable to recover it.
00:37:45.551 [... the same three-record failure (posix.c:1054:posix_sock_create: connect() failed, errno = 111 / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error / qpair failed and we were unable to recover it.) repeats continuously through [2024-11-18 12:06:11.372944], cycling over tqpair=0x6150001f2f00, 0x6150001ffe80, 0x615000210000, and 0x61500021ff00, always with addr=10.0.0.2, port=4420 ...]
00:37:45.837 [2024-11-18 12:06:11.373087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.837 [2024-11-18 12:06:11.373151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.837 qpair failed and we were unable to recover it. 00:37:45.837 [2024-11-18 12:06:11.373351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.837 [2024-11-18 12:06:11.373406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.837 qpair failed and we were unable to recover it. 00:37:45.837 [2024-11-18 12:06:11.373589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.837 [2024-11-18 12:06:11.373638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.837 qpair failed and we were unable to recover it. 00:37:45.837 [2024-11-18 12:06:11.373771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.837 [2024-11-18 12:06:11.373816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.837 qpair failed and we were unable to recover it. 00:37:45.837 [2024-11-18 12:06:11.374024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.837 [2024-11-18 12:06:11.374058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.837 qpair failed and we were unable to recover it. 
00:37:45.837 [2024-11-18 12:06:11.374324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.837 [2024-11-18 12:06:11.374384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.837 qpair failed and we were unable to recover it. 00:37:45.837 [2024-11-18 12:06:11.374562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.837 [2024-11-18 12:06:11.374597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.837 qpair failed and we were unable to recover it. 00:37:45.837 [2024-11-18 12:06:11.374714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.837 [2024-11-18 12:06:11.374749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.837 qpair failed and we were unable to recover it. 00:37:45.837 [2024-11-18 12:06:11.374899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.837 [2024-11-18 12:06:11.374937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.837 qpair failed and we were unable to recover it. 00:37:45.837 [2024-11-18 12:06:11.375067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.837 [2024-11-18 12:06:11.375162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.837 qpair failed and we were unable to recover it. 
00:37:45.837 [2024-11-18 12:06:11.375299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.837 [2024-11-18 12:06:11.375349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.837 qpair failed and we were unable to recover it. 00:37:45.837 [2024-11-18 12:06:11.375556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.837 [2024-11-18 12:06:11.375604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.837 qpair failed and we were unable to recover it. 00:37:45.837 [2024-11-18 12:06:11.375751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.837 [2024-11-18 12:06:11.375787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.837 qpair failed and we were unable to recover it. 00:37:45.837 [2024-11-18 12:06:11.375919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.837 [2024-11-18 12:06:11.375958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.837 qpair failed and we were unable to recover it. 00:37:45.837 [2024-11-18 12:06:11.376082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.837 [2024-11-18 12:06:11.376120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.837 qpair failed and we were unable to recover it. 
00:37:45.837 [2024-11-18 12:06:11.376237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.837 [2024-11-18 12:06:11.376274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.837 qpair failed and we were unable to recover it. 00:37:45.837 [2024-11-18 12:06:11.376455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.837 [2024-11-18 12:06:11.376500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.837 qpair failed and we were unable to recover it. 00:37:45.837 [2024-11-18 12:06:11.376618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.837 [2024-11-18 12:06:11.376653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.837 qpair failed and we were unable to recover it. 00:37:45.837 [2024-11-18 12:06:11.376797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.837 [2024-11-18 12:06:11.376865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.837 qpair failed and we were unable to recover it. 00:37:45.837 [2024-11-18 12:06:11.377064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.837 [2024-11-18 12:06:11.377118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.837 qpair failed and we were unable to recover it. 
00:37:45.837 [2024-11-18 12:06:11.377273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.837 [2024-11-18 12:06:11.377326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.837 qpair failed and we were unable to recover it. 00:37:45.837 [2024-11-18 12:06:11.377423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.837 [2024-11-18 12:06:11.377458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.837 qpair failed and we were unable to recover it. 00:37:45.837 [2024-11-18 12:06:11.377660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.837 [2024-11-18 12:06:11.377714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.838 qpair failed and we were unable to recover it. 00:37:45.838 [2024-11-18 12:06:11.377872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.838 [2024-11-18 12:06:11.377911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.838 qpair failed and we were unable to recover it. 00:37:45.838 [2024-11-18 12:06:11.378117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.838 [2024-11-18 12:06:11.378183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.838 qpair failed and we were unable to recover it. 
00:37:45.838 [2024-11-18 12:06:11.378331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.838 [2024-11-18 12:06:11.378369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.838 qpair failed and we were unable to recover it. 00:37:45.838 [2024-11-18 12:06:11.378506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.838 [2024-11-18 12:06:11.378541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.838 qpair failed and we were unable to recover it. 00:37:45.838 [2024-11-18 12:06:11.378649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.838 [2024-11-18 12:06:11.378703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.838 qpair failed and we were unable to recover it. 00:37:45.838 [2024-11-18 12:06:11.378842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.838 [2024-11-18 12:06:11.378880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.838 qpair failed and we were unable to recover it. 00:37:45.838 [2024-11-18 12:06:11.378997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.838 [2024-11-18 12:06:11.379035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.838 qpair failed and we were unable to recover it. 
00:37:45.838 [2024-11-18 12:06:11.379205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.838 [2024-11-18 12:06:11.379242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.838 qpair failed and we were unable to recover it. 00:37:45.838 [2024-11-18 12:06:11.379402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.838 [2024-11-18 12:06:11.379451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.838 qpair failed and we were unable to recover it. 00:37:45.838 [2024-11-18 12:06:11.379614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.838 [2024-11-18 12:06:11.379651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.838 qpair failed and we were unable to recover it. 00:37:45.838 [2024-11-18 12:06:11.379764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.838 [2024-11-18 12:06:11.379801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.838 qpair failed and we were unable to recover it. 00:37:45.838 [2024-11-18 12:06:11.380000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.838 [2024-11-18 12:06:11.380066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.838 qpair failed and we were unable to recover it. 
00:37:45.838 [2024-11-18 12:06:11.380312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.838 [2024-11-18 12:06:11.380370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.838 qpair failed and we were unable to recover it. 00:37:45.838 [2024-11-18 12:06:11.380518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.838 [2024-11-18 12:06:11.380555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.838 qpair failed and we were unable to recover it. 00:37:45.838 [2024-11-18 12:06:11.380714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.838 [2024-11-18 12:06:11.380753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.838 qpair failed and we were unable to recover it. 00:37:45.838 [2024-11-18 12:06:11.380910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.838 [2024-11-18 12:06:11.380948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.838 qpair failed and we were unable to recover it. 00:37:45.838 [2024-11-18 12:06:11.381090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.838 [2024-11-18 12:06:11.381135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.838 qpair failed and we were unable to recover it. 
00:37:45.838 [2024-11-18 12:06:11.381288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.838 [2024-11-18 12:06:11.381325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.838 qpair failed and we were unable to recover it. 00:37:45.838 [2024-11-18 12:06:11.381467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.838 [2024-11-18 12:06:11.381512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.838 qpair failed and we were unable to recover it. 00:37:45.838 [2024-11-18 12:06:11.381673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.838 [2024-11-18 12:06:11.381709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.838 qpair failed and we were unable to recover it. 00:37:45.838 [2024-11-18 12:06:11.381838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.838 [2024-11-18 12:06:11.381891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.838 qpair failed and we were unable to recover it. 00:37:45.838 [2024-11-18 12:06:11.381994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.838 [2024-11-18 12:06:11.382028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.838 qpair failed and we were unable to recover it. 
00:37:45.838 [2024-11-18 12:06:11.382155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.838 [2024-11-18 12:06:11.382207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.838 qpair failed and we were unable to recover it. 00:37:45.838 [2024-11-18 12:06:11.382315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.838 [2024-11-18 12:06:11.382350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.838 qpair failed and we were unable to recover it. 00:37:45.838 [2024-11-18 12:06:11.382486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.838 [2024-11-18 12:06:11.382527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.838 qpair failed and we were unable to recover it. 00:37:45.838 [2024-11-18 12:06:11.382692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.838 [2024-11-18 12:06:11.382733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.838 qpair failed and we were unable to recover it. 00:37:45.838 [2024-11-18 12:06:11.382841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.838 [2024-11-18 12:06:11.382875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.838 qpair failed and we were unable to recover it. 
00:37:45.838 [2024-11-18 12:06:11.383044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.838 [2024-11-18 12:06:11.383078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.838 qpair failed and we were unable to recover it. 00:37:45.838 [2024-11-18 12:06:11.383224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.838 [2024-11-18 12:06:11.383260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.838 qpair failed and we were unable to recover it. 00:37:45.838 [2024-11-18 12:06:11.383373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.838 [2024-11-18 12:06:11.383407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.838 qpair failed and we were unable to recover it. 00:37:45.838 [2024-11-18 12:06:11.383556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.838 [2024-11-18 12:06:11.383595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.838 qpair failed and we were unable to recover it. 00:37:45.838 [2024-11-18 12:06:11.383748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.838 [2024-11-18 12:06:11.383785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.838 qpair failed and we were unable to recover it. 
00:37:45.838 [2024-11-18 12:06:11.383927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.838 [2024-11-18 12:06:11.383965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.838 qpair failed and we were unable to recover it. 00:37:45.838 [2024-11-18 12:06:11.384143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.838 [2024-11-18 12:06:11.384181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.838 qpair failed and we were unable to recover it. 00:37:45.838 [2024-11-18 12:06:11.384339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.838 [2024-11-18 12:06:11.384376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.838 qpair failed and we were unable to recover it. 00:37:45.838 [2024-11-18 12:06:11.384526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.838 [2024-11-18 12:06:11.384595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.838 qpair failed and we were unable to recover it. 00:37:45.838 [2024-11-18 12:06:11.384738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.838 [2024-11-18 12:06:11.384807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.838 qpair failed and we were unable to recover it. 
00:37:45.839 [2024-11-18 12:06:11.385058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.839 [2024-11-18 12:06:11.385119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.839 qpair failed and we were unable to recover it. 00:37:45.839 [2024-11-18 12:06:11.385328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.839 [2024-11-18 12:06:11.385402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.839 qpair failed and we were unable to recover it. 00:37:45.839 [2024-11-18 12:06:11.385531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.839 [2024-11-18 12:06:11.385586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.839 qpair failed and we were unable to recover it. 00:37:45.839 [2024-11-18 12:06:11.385748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.839 [2024-11-18 12:06:11.385783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.839 qpair failed and we were unable to recover it. 00:37:45.839 [2024-11-18 12:06:11.385936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.839 [2024-11-18 12:06:11.385995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.839 qpair failed and we were unable to recover it. 
00:37:45.839 [2024-11-18 12:06:11.386114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.839 [2024-11-18 12:06:11.386152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.839 qpair failed and we were unable to recover it. 00:37:45.839 [2024-11-18 12:06:11.386328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.839 [2024-11-18 12:06:11.386413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.839 qpair failed and we were unable to recover it. 00:37:45.839 [2024-11-18 12:06:11.386582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.839 [2024-11-18 12:06:11.386616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.839 qpair failed and we were unable to recover it. 00:37:45.839 [2024-11-18 12:06:11.386789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.839 [2024-11-18 12:06:11.386844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.839 qpair failed and we were unable to recover it. 00:37:45.839 [2024-11-18 12:06:11.387114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.839 [2024-11-18 12:06:11.387173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.839 qpair failed and we were unable to recover it. 
00:37:45.839 [2024-11-18 12:06:11.387353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.839 [2024-11-18 12:06:11.387418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.839 qpair failed and we were unable to recover it. 00:37:45.839 [2024-11-18 12:06:11.387584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.839 [2024-11-18 12:06:11.387619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.839 qpair failed and we were unable to recover it. 00:37:45.839 [2024-11-18 12:06:11.387800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.839 [2024-11-18 12:06:11.387853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.839 qpair failed and we were unable to recover it. 00:37:45.839 [2024-11-18 12:06:11.388012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.839 [2024-11-18 12:06:11.388064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.839 qpair failed and we were unable to recover it. 00:37:45.839 [2024-11-18 12:06:11.388179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.839 [2024-11-18 12:06:11.388214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.839 qpair failed and we were unable to recover it. 
00:37:45.839 [2024-11-18 12:06:11.388378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.839 [2024-11-18 12:06:11.388412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.839 qpair failed and we were unable to recover it. 00:37:45.839 [2024-11-18 12:06:11.388544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.839 [2024-11-18 12:06:11.388593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.839 qpair failed and we were unable to recover it. 00:37:45.839 [2024-11-18 12:06:11.388703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.839 [2024-11-18 12:06:11.388739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.839 qpair failed and we were unable to recover it. 00:37:45.839 [2024-11-18 12:06:11.388881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.839 [2024-11-18 12:06:11.388916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.839 qpair failed and we were unable to recover it. 00:37:45.839 [2024-11-18 12:06:11.389135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.839 [2024-11-18 12:06:11.389199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.839 qpair failed and we were unable to recover it. 
00:37:45.839 [2024-11-18 12:06:11.389351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.839 [2024-11-18 12:06:11.389389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.839 qpair failed and we were unable to recover it. 00:37:45.839 [2024-11-18 12:06:11.389509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.839 [2024-11-18 12:06:11.389561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.839 qpair failed and we were unable to recover it. 00:37:45.839 [2024-11-18 12:06:11.389709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.839 [2024-11-18 12:06:11.389747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.839 qpair failed and we were unable to recover it. 00:37:45.839 [2024-11-18 12:06:11.389894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.839 [2024-11-18 12:06:11.389933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.839 qpair failed and we were unable to recover it. 00:37:45.839 [2024-11-18 12:06:11.390054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.839 [2024-11-18 12:06:11.390093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.839 qpair failed and we were unable to recover it. 
00:37:45.839 [2024-11-18 12:06:11.390245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.839 [2024-11-18 12:06:11.390283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.839 qpair failed and we were unable to recover it.
00:37:45.839 [2024-11-18 12:06:11.390406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.839 [2024-11-18 12:06:11.390444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.839 qpair failed and we were unable to recover it.
00:37:45.839 [2024-11-18 12:06:11.390629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.839 [2024-11-18 12:06:11.390678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.839 qpair failed and we were unable to recover it.
00:37:45.839 [2024-11-18 12:06:11.390811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.839 [2024-11-18 12:06:11.390864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.839 qpair failed and we were unable to recover it.
00:37:45.839 [2024-11-18 12:06:11.391049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.839 [2024-11-18 12:06:11.391102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.839 qpair failed and we were unable to recover it.
00:37:45.839 [2024-11-18 12:06:11.391232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.839 [2024-11-18 12:06:11.391285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.839 qpair failed and we were unable to recover it.
00:37:45.839 [2024-11-18 12:06:11.391405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.839 [2024-11-18 12:06:11.391442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.839 qpair failed and we were unable to recover it.
00:37:45.839 [2024-11-18 12:06:11.391615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.839 [2024-11-18 12:06:11.391664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.839 qpair failed and we were unable to recover it.
00:37:45.839 [2024-11-18 12:06:11.391783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.839 [2024-11-18 12:06:11.391838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.839 qpair failed and we were unable to recover it.
00:37:45.839 [2024-11-18 12:06:11.391966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.839 [2024-11-18 12:06:11.391999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.839 qpair failed and we were unable to recover it.
00:37:45.839 [2024-11-18 12:06:11.392176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.839 [2024-11-18 12:06:11.392210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.839 qpair failed and we were unable to recover it.
00:37:45.839 [2024-11-18 12:06:11.392336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.839 [2024-11-18 12:06:11.392371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.839 qpair failed and we were unable to recover it.
00:37:45.839 [2024-11-18 12:06:11.392510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.839 [2024-11-18 12:06:11.392544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.840 qpair failed and we were unable to recover it.
00:37:45.840 [2024-11-18 12:06:11.392704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.840 [2024-11-18 12:06:11.392767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.840 qpair failed and we were unable to recover it.
00:37:45.840 [2024-11-18 12:06:11.392974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.840 [2024-11-18 12:06:11.393040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.840 qpair failed and we were unable to recover it.
00:37:45.840 [2024-11-18 12:06:11.393292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.840 [2024-11-18 12:06:11.393360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.840 qpair failed and we were unable to recover it.
00:37:45.840 [2024-11-18 12:06:11.393497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.840 [2024-11-18 12:06:11.393543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.840 qpair failed and we were unable to recover it.
00:37:45.840 [2024-11-18 12:06:11.393661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.840 [2024-11-18 12:06:11.393696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.840 qpair failed and we were unable to recover it.
00:37:45.840 [2024-11-18 12:06:11.393853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.840 [2024-11-18 12:06:11.393891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.840 qpair failed and we were unable to recover it.
00:37:45.840 [2024-11-18 12:06:11.394058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.840 [2024-11-18 12:06:11.394125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.840 qpair failed and we were unable to recover it.
00:37:45.840 [2024-11-18 12:06:11.394245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.840 [2024-11-18 12:06:11.394283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.840 qpair failed and we were unable to recover it.
00:37:45.840 [2024-11-18 12:06:11.394420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.840 [2024-11-18 12:06:11.394454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.840 qpair failed and we were unable to recover it.
00:37:45.840 [2024-11-18 12:06:11.394625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.840 [2024-11-18 12:06:11.394659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.840 qpair failed and we were unable to recover it.
00:37:45.840 [2024-11-18 12:06:11.394785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.840 [2024-11-18 12:06:11.394823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.840 qpair failed and we were unable to recover it.
00:37:45.840 [2024-11-18 12:06:11.394949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.840 [2024-11-18 12:06:11.395000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.840 qpair failed and we were unable to recover it.
00:37:45.840 [2024-11-18 12:06:11.395159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.840 [2024-11-18 12:06:11.395212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.840 qpair failed and we were unable to recover it.
00:37:45.840 [2024-11-18 12:06:11.395334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.840 [2024-11-18 12:06:11.395373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.840 qpair failed and we were unable to recover it.
00:37:45.840 [2024-11-18 12:06:11.395556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.840 [2024-11-18 12:06:11.395590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.840 qpair failed and we were unable to recover it.
00:37:45.840 [2024-11-18 12:06:11.395733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.840 [2024-11-18 12:06:11.395787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.840 qpair failed and we were unable to recover it.
00:37:45.840 [2024-11-18 12:06:11.395938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.840 [2024-11-18 12:06:11.395989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.840 qpair failed and we were unable to recover it.
00:37:45.840 [2024-11-18 12:06:11.396175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.840 [2024-11-18 12:06:11.396213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.840 qpair failed and we were unable to recover it.
00:37:45.840 [2024-11-18 12:06:11.396381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.840 [2024-11-18 12:06:11.396415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.840 qpair failed and we were unable to recover it.
00:37:45.840 [2024-11-18 12:06:11.396583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.840 [2024-11-18 12:06:11.396633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.840 qpair failed and we were unable to recover it.
00:37:45.840 [2024-11-18 12:06:11.396828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.840 [2024-11-18 12:06:11.396877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.840 qpair failed and we were unable to recover it.
00:37:45.840 [2024-11-18 12:06:11.397001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.840 [2024-11-18 12:06:11.397040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.840 qpair failed and we were unable to recover it.
00:37:45.840 [2024-11-18 12:06:11.397193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.840 [2024-11-18 12:06:11.397232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.840 qpair failed and we were unable to recover it.
00:37:45.840 [2024-11-18 12:06:11.397399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.840 [2024-11-18 12:06:11.397433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.840 qpair failed and we were unable to recover it.
00:37:45.840 [2024-11-18 12:06:11.397651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.840 [2024-11-18 12:06:11.397685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.840 qpair failed and we were unable to recover it.
00:37:45.840 [2024-11-18 12:06:11.397847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.840 [2024-11-18 12:06:11.397898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.840 qpair failed and we were unable to recover it.
00:37:45.840 [2024-11-18 12:06:11.398102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.840 [2024-11-18 12:06:11.398161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.840 qpair failed and we were unable to recover it.
00:37:45.840 [2024-11-18 12:06:11.398291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.840 [2024-11-18 12:06:11.398342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.840 qpair failed and we were unable to recover it.
00:37:45.840 [2024-11-18 12:06:11.398488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.840 [2024-11-18 12:06:11.398551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.840 qpair failed and we were unable to recover it.
00:37:45.840 [2024-11-18 12:06:11.398685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.840 [2024-11-18 12:06:11.398718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.840 qpair failed and we were unable to recover it.
00:37:45.840 [2024-11-18 12:06:11.398851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.840 [2024-11-18 12:06:11.398885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.840 qpair failed and we were unable to recover it.
00:37:45.840 [2024-11-18 12:06:11.399031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.841 [2024-11-18 12:06:11.399069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.841 qpair failed and we were unable to recover it.
00:37:45.841 [2024-11-18 12:06:11.399210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.841 [2024-11-18 12:06:11.399248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.841 qpair failed and we were unable to recover it.
00:37:45.841 [2024-11-18 12:06:11.399388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.841 [2024-11-18 12:06:11.399441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.841 qpair failed and we were unable to recover it.
00:37:45.841 [2024-11-18 12:06:11.399582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.841 [2024-11-18 12:06:11.399617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.841 qpair failed and we were unable to recover it.
00:37:45.841 [2024-11-18 12:06:11.399765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.841 [2024-11-18 12:06:11.399800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.841 qpair failed and we were unable to recover it.
00:37:45.841 [2024-11-18 12:06:11.399986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.841 [2024-11-18 12:06:11.400023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.841 qpair failed and we were unable to recover it.
00:37:45.841 [2024-11-18 12:06:11.400164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.841 [2024-11-18 12:06:11.400202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.841 qpair failed and we were unable to recover it.
00:37:45.841 [2024-11-18 12:06:11.400330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.841 [2024-11-18 12:06:11.400383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.841 qpair failed and we were unable to recover it.
00:37:45.841 [2024-11-18 12:06:11.400567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.841 [2024-11-18 12:06:11.400616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.841 qpair failed and we were unable to recover it.
00:37:45.841 [2024-11-18 12:06:11.400802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.841 [2024-11-18 12:06:11.400850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.841 qpair failed and we were unable to recover it.
00:37:45.841 [2024-11-18 12:06:11.401015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.841 [2024-11-18 12:06:11.401054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.841 qpair failed and we were unable to recover it.
00:37:45.841 [2024-11-18 12:06:11.401231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.841 [2024-11-18 12:06:11.401268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.841 qpair failed and we were unable to recover it.
00:37:45.841 [2024-11-18 12:06:11.401406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.841 [2024-11-18 12:06:11.401440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.841 qpair failed and we were unable to recover it.
00:37:45.841 [2024-11-18 12:06:11.401550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.841 [2024-11-18 12:06:11.401585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.841 qpair failed and we were unable to recover it.
00:37:45.841 [2024-11-18 12:06:11.401720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.841 [2024-11-18 12:06:11.401754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.841 qpair failed and we were unable to recover it.
00:37:45.841 [2024-11-18 12:06:11.401973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.841 [2024-11-18 12:06:11.402015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.841 qpair failed and we were unable to recover it.
00:37:45.841 [2024-11-18 12:06:11.402208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.841 [2024-11-18 12:06:11.402265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.841 qpair failed and we were unable to recover it.
00:37:45.841 [2024-11-18 12:06:11.402447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.841 [2024-11-18 12:06:11.402507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.841 qpair failed and we were unable to recover it.
00:37:45.841 [2024-11-18 12:06:11.402709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.841 [2024-11-18 12:06:11.402759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.841 qpair failed and we were unable to recover it.
00:37:45.841 [2024-11-18 12:06:11.403019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.841 [2024-11-18 12:06:11.403092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.841 qpair failed and we were unable to recover it.
00:37:45.841 [2024-11-18 12:06:11.403239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.841 [2024-11-18 12:06:11.403296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.841 qpair failed and we were unable to recover it.
00:37:45.841 [2024-11-18 12:06:11.403454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.841 [2024-11-18 12:06:11.403507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.841 qpair failed and we were unable to recover it.
00:37:45.841 [2024-11-18 12:06:11.403644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.841 [2024-11-18 12:06:11.403679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.841 qpair failed and we were unable to recover it.
00:37:45.841 [2024-11-18 12:06:11.403848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.841 [2024-11-18 12:06:11.403920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.841 qpair failed and we were unable to recover it.
00:37:45.841 [2024-11-18 12:06:11.404104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.841 [2024-11-18 12:06:11.404167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.841 qpair failed and we were unable to recover it.
00:37:45.841 [2024-11-18 12:06:11.404368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.841 [2024-11-18 12:06:11.404406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.841 qpair failed and we were unable to recover it.
00:37:45.841 [2024-11-18 12:06:11.404575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.841 [2024-11-18 12:06:11.404624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.841 qpair failed and we were unable to recover it.
00:37:45.841 [2024-11-18 12:06:11.404814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.841 [2024-11-18 12:06:11.404862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.841 qpair failed and we were unable to recover it.
00:37:45.841 [2024-11-18 12:06:11.405042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.841 [2024-11-18 12:06:11.405121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.841 qpair failed and we were unable to recover it.
00:37:45.841 [2024-11-18 12:06:11.405289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.841 [2024-11-18 12:06:11.405343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.841 qpair failed and we were unable to recover it.
00:37:45.841 [2024-11-18 12:06:11.405457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.841 [2024-11-18 12:06:11.405498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.841 qpair failed and we were unable to recover it.
00:37:45.841 [2024-11-18 12:06:11.405642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.841 [2024-11-18 12:06:11.405677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.841 qpair failed and we were unable to recover it.
00:37:45.841 [2024-11-18 12:06:11.405798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.841 [2024-11-18 12:06:11.405856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.841 qpair failed and we were unable to recover it.
00:37:45.841 [2024-11-18 12:06:11.406007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.841 [2024-11-18 12:06:11.406059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.841 qpair failed and we were unable to recover it.
00:37:45.841 [2024-11-18 12:06:11.406169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.841 [2024-11-18 12:06:11.406203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.841 qpair failed and we were unable to recover it.
00:37:45.841 [2024-11-18 12:06:11.406346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.841 [2024-11-18 12:06:11.406381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.841 qpair failed and we were unable to recover it.
00:37:45.841 [2024-11-18 12:06:11.406519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.841 [2024-11-18 12:06:11.406554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.841 qpair failed and we were unable to recover it.
00:37:45.841 [2024-11-18 12:06:11.406669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.841 [2024-11-18 12:06:11.406703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.841 qpair failed and we were unable to recover it.
00:37:45.842 [2024-11-18 12:06:11.406860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.842 [2024-11-18 12:06:11.406899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.842 qpair failed and we were unable to recover it.
00:37:45.842 [2024-11-18 12:06:11.407149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.842 [2024-11-18 12:06:11.407202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.842 qpair failed and we were unable to recover it.
00:37:45.842 [2024-11-18 12:06:11.407325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.842 [2024-11-18 12:06:11.407365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.842 qpair failed and we were unable to recover it.
00:37:45.842 [2024-11-18 12:06:11.407517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.842 [2024-11-18 12:06:11.407552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.842 qpair failed and we were unable to recover it.
00:37:45.842 [2024-11-18 12:06:11.407694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.842 [2024-11-18 12:06:11.407731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.842 qpair failed and we were unable to recover it.
00:37:45.842 [2024-11-18 12:06:11.407907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.842 [2024-11-18 12:06:11.407945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.842 qpair failed and we were unable to recover it.
00:37:45.842 [2024-11-18 12:06:11.408152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.842 [2024-11-18 12:06:11.408190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.842 qpair failed and we were unable to recover it.
00:37:45.842 [2024-11-18 12:06:11.408339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.842 [2024-11-18 12:06:11.408391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.842 qpair failed and we were unable to recover it.
00:37:45.842 [2024-11-18 12:06:11.408543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.842 [2024-11-18 12:06:11.408578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.842 qpair failed and we were unable to recover it.
00:37:45.842 [2024-11-18 12:06:11.408687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.842 [2024-11-18 12:06:11.408722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.842 qpair failed and we were unable to recover it.
00:37:45.842 [2024-11-18 12:06:11.408857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.842 [2024-11-18 12:06:11.408892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.842 qpair failed and we were unable to recover it.
00:37:45.842 [2024-11-18 12:06:11.409069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.842 [2024-11-18 12:06:11.409106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.842 qpair failed and we were unable to recover it.
00:37:45.842 [2024-11-18 12:06:11.409304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.842 [2024-11-18 12:06:11.409342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.842 qpair failed and we were unable to recover it.
00:37:45.842 [2024-11-18 12:06:11.409479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.842 [2024-11-18 12:06:11.409524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.842 qpair failed and we were unable to recover it. 00:37:45.842 [2024-11-18 12:06:11.409695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.842 [2024-11-18 12:06:11.409744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.842 qpair failed and we were unable to recover it. 00:37:45.842 [2024-11-18 12:06:11.409957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.842 [2024-11-18 12:06:11.410023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.842 qpair failed and we were unable to recover it. 00:37:45.842 [2024-11-18 12:06:11.410277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.842 [2024-11-18 12:06:11.410334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.842 qpair failed and we were unable to recover it. 00:37:45.842 [2024-11-18 12:06:11.410500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.842 [2024-11-18 12:06:11.410541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.842 qpair failed and we were unable to recover it. 
00:37:45.842 [2024-11-18 12:06:11.410683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.842 [2024-11-18 12:06:11.410719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.842 qpair failed and we were unable to recover it. 00:37:45.842 [2024-11-18 12:06:11.410852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.842 [2024-11-18 12:06:11.410906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.842 qpair failed and we were unable to recover it. 00:37:45.842 [2024-11-18 12:06:11.411061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.842 [2024-11-18 12:06:11.411099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.842 qpair failed and we were unable to recover it. 00:37:45.842 [2024-11-18 12:06:11.411221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.842 [2024-11-18 12:06:11.411261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.842 qpair failed and we were unable to recover it. 00:37:45.842 [2024-11-18 12:06:11.411417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.842 [2024-11-18 12:06:11.411456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.842 qpair failed and we were unable to recover it. 
00:37:45.842 [2024-11-18 12:06:11.411666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.842 [2024-11-18 12:06:11.411714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.842 qpair failed and we were unable to recover it. 00:37:45.842 [2024-11-18 12:06:11.411863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.842 [2024-11-18 12:06:11.411901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.842 qpair failed and we were unable to recover it. 00:37:45.842 [2024-11-18 12:06:11.412038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.842 [2024-11-18 12:06:11.412110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.842 qpair failed and we were unable to recover it. 00:37:45.842 [2024-11-18 12:06:11.412301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.842 [2024-11-18 12:06:11.412339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.842 qpair failed and we were unable to recover it. 00:37:45.842 [2024-11-18 12:06:11.412509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.842 [2024-11-18 12:06:11.412544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.842 qpair failed and we were unable to recover it. 
00:37:45.842 [2024-11-18 12:06:11.412667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.842 [2024-11-18 12:06:11.412703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.842 qpair failed and we were unable to recover it. 00:37:45.842 [2024-11-18 12:06:11.412846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.842 [2024-11-18 12:06:11.412881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.843 qpair failed and we were unable to recover it. 00:37:45.843 [2024-11-18 12:06:11.413141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.843 [2024-11-18 12:06:11.413196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.843 qpair failed and we were unable to recover it. 00:37:45.843 [2024-11-18 12:06:11.413344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.843 [2024-11-18 12:06:11.413382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.843 qpair failed and we were unable to recover it. 00:37:45.843 [2024-11-18 12:06:11.413516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.843 [2024-11-18 12:06:11.413570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.843 qpair failed and we were unable to recover it. 
00:37:45.843 [2024-11-18 12:06:11.413748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.843 [2024-11-18 12:06:11.413796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.843 qpair failed and we were unable to recover it. 00:37:45.843 [2024-11-18 12:06:11.413925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.843 [2024-11-18 12:06:11.413981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.843 qpair failed and we were unable to recover it. 00:37:45.843 [2024-11-18 12:06:11.414162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.843 [2024-11-18 12:06:11.414215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.843 qpair failed and we were unable to recover it. 00:37:45.843 [2024-11-18 12:06:11.414343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.843 [2024-11-18 12:06:11.414384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.843 qpair failed and we were unable to recover it. 00:37:45.843 [2024-11-18 12:06:11.414571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.843 [2024-11-18 12:06:11.414621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.843 qpair failed and we were unable to recover it. 
00:37:45.843 [2024-11-18 12:06:11.414798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.843 [2024-11-18 12:06:11.414840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.843 qpair failed and we were unable to recover it. 00:37:45.843 [2024-11-18 12:06:11.415079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.843 [2024-11-18 12:06:11.415119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.843 qpair failed and we were unable to recover it. 00:37:45.843 [2024-11-18 12:06:11.415285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.843 [2024-11-18 12:06:11.415343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.843 qpair failed and we were unable to recover it. 00:37:45.843 [2024-11-18 12:06:11.415503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.843 [2024-11-18 12:06:11.415540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.843 qpair failed and we were unable to recover it. 00:37:45.843 [2024-11-18 12:06:11.415657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.843 [2024-11-18 12:06:11.415695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.843 qpair failed and we were unable to recover it. 
00:37:45.843 [2024-11-18 12:06:11.415835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.843 [2024-11-18 12:06:11.415870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.843 qpair failed and we were unable to recover it. 00:37:45.843 [2024-11-18 12:06:11.416042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.843 [2024-11-18 12:06:11.416077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.843 qpair failed and we were unable to recover it. 00:37:45.843 [2024-11-18 12:06:11.416271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.843 [2024-11-18 12:06:11.416309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.843 qpair failed and we were unable to recover it. 00:37:45.843 [2024-11-18 12:06:11.416459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.843 [2024-11-18 12:06:11.416513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.843 qpair failed and we were unable to recover it. 00:37:45.843 [2024-11-18 12:06:11.416665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.843 [2024-11-18 12:06:11.416714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.843 qpair failed and we were unable to recover it. 
00:37:45.843 [2024-11-18 12:06:11.416880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.843 [2024-11-18 12:06:11.416935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.843 qpair failed and we were unable to recover it. 00:37:45.843 [2024-11-18 12:06:11.417154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.843 [2024-11-18 12:06:11.417207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.843 qpair failed and we were unable to recover it. 00:37:45.843 [2024-11-18 12:06:11.417337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.843 [2024-11-18 12:06:11.417372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.843 qpair failed and we were unable to recover it. 00:37:45.843 [2024-11-18 12:06:11.417476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.843 [2024-11-18 12:06:11.417518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.843 qpair failed and we were unable to recover it. 00:37:45.843 [2024-11-18 12:06:11.417643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.843 [2024-11-18 12:06:11.417681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.843 qpair failed and we were unable to recover it. 
00:37:45.843 [2024-11-18 12:06:11.417857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.843 [2024-11-18 12:06:11.417911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.843 qpair failed and we were unable to recover it. 00:37:45.843 [2024-11-18 12:06:11.418039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.843 [2024-11-18 12:06:11.418078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.843 qpair failed and we were unable to recover it. 00:37:45.843 [2024-11-18 12:06:11.418248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.843 [2024-11-18 12:06:11.418296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.843 qpair failed and we were unable to recover it. 00:37:45.843 [2024-11-18 12:06:11.418440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.843 [2024-11-18 12:06:11.418476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.843 qpair failed and we were unable to recover it. 00:37:45.843 [2024-11-18 12:06:11.418618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.843 [2024-11-18 12:06:11.418659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.843 qpair failed and we were unable to recover it. 
00:37:45.843 [2024-11-18 12:06:11.418798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.843 [2024-11-18 12:06:11.418833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.843 qpair failed and we were unable to recover it. 00:37:45.843 [2024-11-18 12:06:11.418971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.843 [2024-11-18 12:06:11.419006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.843 qpair failed and we were unable to recover it. 00:37:45.843 [2024-11-18 12:06:11.419141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.843 [2024-11-18 12:06:11.419195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.843 qpair failed and we were unable to recover it. 00:37:45.843 [2024-11-18 12:06:11.419311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.843 [2024-11-18 12:06:11.419348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.843 qpair failed and we were unable to recover it. 00:37:45.843 [2024-11-18 12:06:11.419508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.843 [2024-11-18 12:06:11.419560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.843 qpair failed and we were unable to recover it. 
00:37:45.843 [2024-11-18 12:06:11.419705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.843 [2024-11-18 12:06:11.419758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.843 qpair failed and we were unable to recover it. 00:37:45.843 [2024-11-18 12:06:11.419885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.843 [2024-11-18 12:06:11.419924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.843 qpair failed and we were unable to recover it. 00:37:45.843 [2024-11-18 12:06:11.420078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.844 [2024-11-18 12:06:11.420117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.844 qpair failed and we were unable to recover it. 00:37:45.844 [2024-11-18 12:06:11.420262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.844 [2024-11-18 12:06:11.420299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.844 qpair failed and we were unable to recover it. 00:37:45.844 [2024-11-18 12:06:11.420504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.844 [2024-11-18 12:06:11.420546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.844 qpair failed and we were unable to recover it. 
00:37:45.844 [2024-11-18 12:06:11.420681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.844 [2024-11-18 12:06:11.420717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.844 qpair failed and we were unable to recover it. 00:37:45.844 [2024-11-18 12:06:11.420875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.844 [2024-11-18 12:06:11.420928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.844 qpair failed and we were unable to recover it. 00:37:45.844 [2024-11-18 12:06:11.421063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.844 [2024-11-18 12:06:11.421116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.844 qpair failed and we were unable to recover it. 00:37:45.844 [2024-11-18 12:06:11.421377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.844 [2024-11-18 12:06:11.421424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.844 qpair failed and we were unable to recover it. 00:37:45.844 [2024-11-18 12:06:11.421573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.844 [2024-11-18 12:06:11.421629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.844 qpair failed and we were unable to recover it. 
00:37:45.844 [2024-11-18 12:06:11.421779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.844 [2024-11-18 12:06:11.421818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.844 qpair failed and we were unable to recover it. 00:37:45.844 [2024-11-18 12:06:11.422008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.844 [2024-11-18 12:06:11.422066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.844 qpair failed and we were unable to recover it. 00:37:45.844 [2024-11-18 12:06:11.422289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.844 [2024-11-18 12:06:11.422346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.844 qpair failed and we were unable to recover it. 00:37:45.844 [2024-11-18 12:06:11.422501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.844 [2024-11-18 12:06:11.422556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.844 qpair failed and we were unable to recover it. 00:37:45.844 [2024-11-18 12:06:11.422677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.844 [2024-11-18 12:06:11.422717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.844 qpair failed and we were unable to recover it. 
00:37:45.844 [2024-11-18 12:06:11.422909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.844 [2024-11-18 12:06:11.422963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.844 qpair failed and we were unable to recover it. 00:37:45.844 [2024-11-18 12:06:11.423087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.844 [2024-11-18 12:06:11.423127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.844 qpair failed and we were unable to recover it. 00:37:45.844 [2024-11-18 12:06:11.423365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.844 [2024-11-18 12:06:11.423404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.844 qpair failed and we were unable to recover it. 00:37:45.844 [2024-11-18 12:06:11.423533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.844 [2024-11-18 12:06:11.423568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.844 qpair failed and we were unable to recover it. 00:37:45.844 [2024-11-18 12:06:11.423674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.844 [2024-11-18 12:06:11.423709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.844 qpair failed and we were unable to recover it. 
00:37:45.844 [2024-11-18 12:06:11.423849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.844 [2024-11-18 12:06:11.423883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.844 qpair failed and we were unable to recover it. 00:37:45.844 [2024-11-18 12:06:11.424073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.844 [2024-11-18 12:06:11.424111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.844 qpair failed and we were unable to recover it. 00:37:45.844 [2024-11-18 12:06:11.424353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.844 [2024-11-18 12:06:11.424407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.844 qpair failed and we were unable to recover it. 00:37:45.844 [2024-11-18 12:06:11.424529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.844 [2024-11-18 12:06:11.424584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.844 qpair failed and we were unable to recover it. 00:37:45.844 [2024-11-18 12:06:11.424750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.844 [2024-11-18 12:06:11.424786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.844 qpair failed and we were unable to recover it. 
00:37:45.844 [2024-11-18 12:06:11.425035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.844 [2024-11-18 12:06:11.425094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.844 qpair failed and we were unable to recover it. 00:37:45.844 [2024-11-18 12:06:11.425269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.844 [2024-11-18 12:06:11.425333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.844 qpair failed and we were unable to recover it. 00:37:45.844 [2024-11-18 12:06:11.425447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.844 [2024-11-18 12:06:11.425482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.844 qpair failed and we were unable to recover it. 00:37:45.844 [2024-11-18 12:06:11.425657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.844 [2024-11-18 12:06:11.425691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.844 qpair failed and we were unable to recover it. 00:37:45.844 [2024-11-18 12:06:11.425848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.844 [2024-11-18 12:06:11.425886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.844 qpair failed and we were unable to recover it. 
00:37:45.844 [2024-11-18 12:06:11.426038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.844 [2024-11-18 12:06:11.426076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.845 qpair failed and we were unable to recover it. 00:37:45.845 [2024-11-18 12:06:11.426278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.845 [2024-11-18 12:06:11.426315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.845 qpair failed and we were unable to recover it. 00:37:45.845 [2024-11-18 12:06:11.426473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.845 [2024-11-18 12:06:11.426558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.845 qpair failed and we were unable to recover it. 00:37:45.845 [2024-11-18 12:06:11.426704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.845 [2024-11-18 12:06:11.426740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.845 qpair failed and we were unable to recover it. 00:37:45.845 [2024-11-18 12:06:11.426959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.845 [2024-11-18 12:06:11.427019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.845 qpair failed and we were unable to recover it. 
00:37:45.845 [2024-11-18 12:06:11.427197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.845 [2024-11-18 12:06:11.427295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.845 qpair failed and we were unable to recover it. 00:37:45.845 [2024-11-18 12:06:11.427432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.845 [2024-11-18 12:06:11.427486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.845 qpair failed and we were unable to recover it. 00:37:45.845 [2024-11-18 12:06:11.427672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.845 [2024-11-18 12:06:11.427713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.845 qpair failed and we were unable to recover it. 00:37:45.845 [2024-11-18 12:06:11.427833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.845 [2024-11-18 12:06:11.427874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.845 qpair failed and we were unable to recover it. 00:37:45.845 [2024-11-18 12:06:11.428005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.845 [2024-11-18 12:06:11.428044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.845 qpair failed and we were unable to recover it. 
00:37:45.845 [2024-11-18 12:06:11.428267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.845 [2024-11-18 12:06:11.428333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.845 qpair failed and we were unable to recover it. 00:37:45.845 [2024-11-18 12:06:11.428475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.845 [2024-11-18 12:06:11.428520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.845 qpair failed and we were unable to recover it. 00:37:45.845 [2024-11-18 12:06:11.428670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.845 [2024-11-18 12:06:11.428710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.845 qpair failed and we were unable to recover it. 00:37:45.845 [2024-11-18 12:06:11.428884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.845 [2024-11-18 12:06:11.428922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.845 qpair failed and we were unable to recover it. 00:37:45.845 [2024-11-18 12:06:11.429150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.845 [2024-11-18 12:06:11.429210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.845 qpair failed and we were unable to recover it. 
00:37:45.845 [2024-11-18 12:06:11.429351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.845 [2024-11-18 12:06:11.429386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.845 qpair failed and we were unable to recover it. 00:37:45.845 [2024-11-18 12:06:11.429503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.845 [2024-11-18 12:06:11.429558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.845 qpair failed and we were unable to recover it. 00:37:45.845 [2024-11-18 12:06:11.429697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.845 [2024-11-18 12:06:11.429732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.845 qpair failed and we were unable to recover it. 00:37:45.845 [2024-11-18 12:06:11.429931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.845 [2024-11-18 12:06:11.429970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.845 qpair failed and we were unable to recover it. 00:37:45.845 [2024-11-18 12:06:11.430199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.845 [2024-11-18 12:06:11.430261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.845 qpair failed and we were unable to recover it. 
00:37:45.845 [2024-11-18 12:06:11.430419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.845 [2024-11-18 12:06:11.430457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.845 qpair failed and we were unable to recover it. 00:37:45.845 [2024-11-18 12:06:11.430651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.845 [2024-11-18 12:06:11.430690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.845 qpair failed and we were unable to recover it. 00:37:45.845 [2024-11-18 12:06:11.430877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.845 [2024-11-18 12:06:11.430916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.845 qpair failed and we were unable to recover it. 00:37:45.845 [2024-11-18 12:06:11.431059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.845 [2024-11-18 12:06:11.431098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.845 qpair failed and we were unable to recover it. 00:37:45.845 [2024-11-18 12:06:11.431221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.845 [2024-11-18 12:06:11.431259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.845 qpair failed and we were unable to recover it. 
00:37:45.845 [2024-11-18 12:06:11.431446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.845 [2024-11-18 12:06:11.431482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.845 qpair failed and we were unable to recover it. 00:37:45.845 [2024-11-18 12:06:11.431625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.845 [2024-11-18 12:06:11.431674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.845 qpair failed and we were unable to recover it. 00:37:45.845 [2024-11-18 12:06:11.431832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.845 [2024-11-18 12:06:11.431872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.845 qpair failed and we were unable to recover it. 00:37:45.845 [2024-11-18 12:06:11.432046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.845 [2024-11-18 12:06:11.432085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.845 qpair failed and we were unable to recover it. 00:37:45.845 [2024-11-18 12:06:11.432259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.845 [2024-11-18 12:06:11.432297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.845 qpair failed and we were unable to recover it. 
00:37:45.845 [2024-11-18 12:06:11.432441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.845 [2024-11-18 12:06:11.432503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.845 qpair failed and we were unable to recover it. 00:37:45.845 [2024-11-18 12:06:11.432639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.846 [2024-11-18 12:06:11.432676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.846 qpair failed and we were unable to recover it. 00:37:45.846 [2024-11-18 12:06:11.432805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.846 [2024-11-18 12:06:11.432843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.846 qpair failed and we were unable to recover it. 00:37:45.846 [2024-11-18 12:06:11.433107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.846 [2024-11-18 12:06:11.433165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.846 qpair failed and we were unable to recover it. 00:37:45.846 [2024-11-18 12:06:11.433312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.846 [2024-11-18 12:06:11.433363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.846 qpair failed and we were unable to recover it. 
00:37:45.846 [2024-11-18 12:06:11.433483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.846 [2024-11-18 12:06:11.433525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.846 qpair failed and we were unable to recover it. 00:37:45.846 [2024-11-18 12:06:11.433660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.846 [2024-11-18 12:06:11.433695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.846 qpair failed and we were unable to recover it. 00:37:45.846 [2024-11-18 12:06:11.433832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.846 [2024-11-18 12:06:11.433867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.846 qpair failed and we were unable to recover it. 00:37:45.846 [2024-11-18 12:06:11.434040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.846 [2024-11-18 12:06:11.434109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.846 qpair failed and we were unable to recover it. 00:37:45.846 [2024-11-18 12:06:11.434257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.846 [2024-11-18 12:06:11.434312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.846 qpair failed and we were unable to recover it. 
00:37:45.846 [2024-11-18 12:06:11.434427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.846 [2024-11-18 12:06:11.434461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.846 qpair failed and we were unable to recover it. 00:37:45.846 [2024-11-18 12:06:11.434630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.846 [2024-11-18 12:06:11.434665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.846 qpair failed and we were unable to recover it. 00:37:45.846 [2024-11-18 12:06:11.434846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.846 [2024-11-18 12:06:11.434881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.846 qpair failed and we were unable to recover it. 00:37:45.846 [2024-11-18 12:06:11.434996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.846 [2024-11-18 12:06:11.435063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.846 qpair failed and we were unable to recover it. 00:37:45.846 [2024-11-18 12:06:11.435195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.846 [2024-11-18 12:06:11.435235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.846 qpair failed and we were unable to recover it. 
00:37:45.846 [2024-11-18 12:06:11.435379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.846 [2024-11-18 12:06:11.435413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.846 qpair failed and we were unable to recover it. 00:37:45.846 [2024-11-18 12:06:11.435554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.846 [2024-11-18 12:06:11.435589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.846 qpair failed and we were unable to recover it. 00:37:45.846 [2024-11-18 12:06:11.435748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.846 [2024-11-18 12:06:11.435786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.846 qpair failed and we were unable to recover it. 00:37:45.846 [2024-11-18 12:06:11.435950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.846 [2024-11-18 12:06:11.435988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.846 qpair failed and we were unable to recover it. 00:37:45.846 [2024-11-18 12:06:11.436138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.846 [2024-11-18 12:06:11.436176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.846 qpair failed and we were unable to recover it. 
00:37:45.846 [2024-11-18 12:06:11.436358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.846 [2024-11-18 12:06:11.436397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.846 qpair failed and we were unable to recover it. 00:37:45.846 [2024-11-18 12:06:11.436561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.846 [2024-11-18 12:06:11.436596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.846 qpair failed and we were unable to recover it. 00:37:45.846 [2024-11-18 12:06:11.436700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.846 [2024-11-18 12:06:11.436734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.846 qpair failed and we were unable to recover it. 00:37:45.846 [2024-11-18 12:06:11.436955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.846 [2024-11-18 12:06:11.437018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.846 qpair failed and we were unable to recover it. 00:37:45.846 [2024-11-18 12:06:11.437200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.846 [2024-11-18 12:06:11.437238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.846 qpair failed and we were unable to recover it. 
00:37:45.846 [2024-11-18 12:06:11.437363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.846 [2024-11-18 12:06:11.437403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.846 qpair failed and we were unable to recover it. 00:37:45.846 [2024-11-18 12:06:11.437601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.846 [2024-11-18 12:06:11.437639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.846 qpair failed and we were unable to recover it. 00:37:45.846 [2024-11-18 12:06:11.437770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.846 [2024-11-18 12:06:11.437828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.846 qpair failed and we were unable to recover it. 00:37:45.846 [2024-11-18 12:06:11.438020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.846 [2024-11-18 12:06:11.438072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.846 qpair failed and we were unable to recover it. 00:37:45.846 [2024-11-18 12:06:11.438178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.846 [2024-11-18 12:06:11.438213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.846 qpair failed and we were unable to recover it. 
00:37:45.846 [2024-11-18 12:06:11.438318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.846 [2024-11-18 12:06:11.438352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.846 qpair failed and we were unable to recover it. 00:37:45.846 [2024-11-18 12:06:11.438464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.847 [2024-11-18 12:06:11.438510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.847 qpair failed and we were unable to recover it. 00:37:45.847 [2024-11-18 12:06:11.438643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.847 [2024-11-18 12:06:11.438678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.847 qpair failed and we were unable to recover it. 00:37:45.847 [2024-11-18 12:06:11.438790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.847 [2024-11-18 12:06:11.438824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.847 qpair failed and we were unable to recover it. 00:37:45.847 [2024-11-18 12:06:11.438962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.847 [2024-11-18 12:06:11.438995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.847 qpair failed and we were unable to recover it. 
00:37:45.847 [2024-11-18 12:06:11.439129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.847 [2024-11-18 12:06:11.439165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.847 qpair failed and we were unable to recover it. 00:37:45.847 [2024-11-18 12:06:11.439299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.847 [2024-11-18 12:06:11.439333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.847 qpair failed and we were unable to recover it. 00:37:45.847 [2024-11-18 12:06:11.439441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.847 [2024-11-18 12:06:11.439475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.847 qpair failed and we were unable to recover it. 00:37:45.847 [2024-11-18 12:06:11.439624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.847 [2024-11-18 12:06:11.439661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.847 qpair failed and we were unable to recover it. 00:37:45.847 [2024-11-18 12:06:11.439803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.847 [2024-11-18 12:06:11.439840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.847 qpair failed and we were unable to recover it. 
00:37:45.847 [2024-11-18 12:06:11.439987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.847 [2024-11-18 12:06:11.440022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.847 qpair failed and we were unable to recover it. 00:37:45.847 [2024-11-18 12:06:11.440142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.847 [2024-11-18 12:06:11.440182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.847 qpair failed and we were unable to recover it. 00:37:45.847 [2024-11-18 12:06:11.440341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.847 [2024-11-18 12:06:11.440390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.847 qpair failed and we were unable to recover it. 00:37:45.847 [2024-11-18 12:06:11.440513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.847 [2024-11-18 12:06:11.440550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.847 qpair failed and we were unable to recover it. 00:37:45.847 [2024-11-18 12:06:11.440663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.847 [2024-11-18 12:06:11.440699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.847 qpair failed and we were unable to recover it. 
00:37:45.847 [2024-11-18 12:06:11.440854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.847 [2024-11-18 12:06:11.440893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.847 qpair failed and we were unable to recover it. 00:37:45.847 [2024-11-18 12:06:11.441042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.847 [2024-11-18 12:06:11.441080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.847 qpair failed and we were unable to recover it. 00:37:45.847 [2024-11-18 12:06:11.441201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.847 [2024-11-18 12:06:11.441252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.847 qpair failed and we were unable to recover it. 00:37:45.847 [2024-11-18 12:06:11.441442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.847 [2024-11-18 12:06:11.441478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.847 qpair failed and we were unable to recover it. 00:37:45.847 [2024-11-18 12:06:11.441608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.847 [2024-11-18 12:06:11.441655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.847 qpair failed and we were unable to recover it. 
00:37:45.847 [2024-11-18 12:06:11.441843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.847 [2024-11-18 12:06:11.441910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.847 qpair failed and we were unable to recover it. 00:37:45.847 [2024-11-18 12:06:11.442119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.847 [2024-11-18 12:06:11.442159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.847 qpair failed and we were unable to recover it. 00:37:45.847 [2024-11-18 12:06:11.442306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.847 [2024-11-18 12:06:11.442344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.847 qpair failed and we were unable to recover it. 00:37:45.847 [2024-11-18 12:06:11.442500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.847 [2024-11-18 12:06:11.442553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.847 qpair failed and we were unable to recover it. 00:37:45.847 [2024-11-18 12:06:11.442682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.847 [2024-11-18 12:06:11.442720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.847 qpair failed and we were unable to recover it. 
00:37:45.847 [2024-11-18 12:06:11.442859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.847 [2024-11-18 12:06:11.442899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.847 qpair failed and we were unable to recover it. 00:37:45.847 [2024-11-18 12:06:11.443072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.847 [2024-11-18 12:06:11.443110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.847 qpair failed and we were unable to recover it. 00:37:45.847 [2024-11-18 12:06:11.443377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.847 [2024-11-18 12:06:11.443436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.847 qpair failed and we were unable to recover it. 00:37:45.847 [2024-11-18 12:06:11.443609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.847 [2024-11-18 12:06:11.443645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.847 qpair failed and we were unable to recover it. 00:37:45.847 [2024-11-18 12:06:11.443825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.847 [2024-11-18 12:06:11.443893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.847 qpair failed and we were unable to recover it. 
00:37:45.847 [2024-11-18 12:06:11.444027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.847 [2024-11-18 12:06:11.444082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.847 qpair failed and we were unable to recover it. 00:37:45.847 [2024-11-18 12:06:11.444283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.847 [2024-11-18 12:06:11.444346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.847 qpair failed and we were unable to recover it. 00:37:45.847 [2024-11-18 12:06:11.444470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.847 [2024-11-18 12:06:11.444519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.847 qpair failed and we were unable to recover it. 00:37:45.847 [2024-11-18 12:06:11.444674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.848 [2024-11-18 12:06:11.444709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.848 qpair failed and we were unable to recover it. 00:37:45.848 [2024-11-18 12:06:11.444858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.848 [2024-11-18 12:06:11.444925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.848 qpair failed and we were unable to recover it. 
00:37:45.848 [2024-11-18 12:06:11.445193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.848 [2024-11-18 12:06:11.445253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.848 qpair failed and we were unable to recover it. 00:37:45.848 [2024-11-18 12:06:11.445438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.848 [2024-11-18 12:06:11.445472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.848 qpair failed and we were unable to recover it. 00:37:45.848 [2024-11-18 12:06:11.445597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.848 [2024-11-18 12:06:11.445632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.848 qpair failed and we were unable to recover it. 00:37:45.848 [2024-11-18 12:06:11.445763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.848 [2024-11-18 12:06:11.445801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.848 qpair failed and we were unable to recover it. 00:37:45.848 [2024-11-18 12:06:11.445982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.848 [2024-11-18 12:06:11.446019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.848 qpair failed and we were unable to recover it. 
00:37:45.848 [2024-11-18 12:06:11.446162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.848 [2024-11-18 12:06:11.446232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.848 qpair failed and we were unable to recover it.
00:37:45.848 [2024-11-18 12:06:11.446387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.848 [2024-11-18 12:06:11.446425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.848 qpair failed and we were unable to recover it.
00:37:45.848 [2024-11-18 12:06:11.446617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.848 [2024-11-18 12:06:11.446653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.848 qpair failed and we were unable to recover it.
00:37:45.848 [2024-11-18 12:06:11.446762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.848 [2024-11-18 12:06:11.446813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.848 qpair failed and we were unable to recover it.
00:37:45.848 [2024-11-18 12:06:11.446966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.848 [2024-11-18 12:06:11.447005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.848 qpair failed and we were unable to recover it.
00:37:45.848 [2024-11-18 12:06:11.447216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.848 [2024-11-18 12:06:11.447286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.848 qpair failed and we were unable to recover it.
00:37:45.848 [2024-11-18 12:06:11.447428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.848 [2024-11-18 12:06:11.447479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.848 qpair failed and we were unable to recover it.
00:37:45.848 [2024-11-18 12:06:11.447650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.848 [2024-11-18 12:06:11.447684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.848 qpair failed and we were unable to recover it.
00:37:45.848 [2024-11-18 12:06:11.447887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.848 [2024-11-18 12:06:11.447925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.848 qpair failed and we were unable to recover it.
00:37:45.848 [2024-11-18 12:06:11.448090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.848 [2024-11-18 12:06:11.448159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.848 qpair failed and we were unable to recover it.
00:37:45.848 [2024-11-18 12:06:11.448299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.848 [2024-11-18 12:06:11.448337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.848 qpair failed and we were unable to recover it.
00:37:45.848 [2024-11-18 12:06:11.448518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.848 [2024-11-18 12:06:11.448571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.848 qpair failed and we were unable to recover it.
00:37:45.848 [2024-11-18 12:06:11.448713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.848 [2024-11-18 12:06:11.448749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.848 qpair failed and we were unable to recover it.
00:37:45.848 [2024-11-18 12:06:11.448906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.848 [2024-11-18 12:06:11.448944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.848 qpair failed and we were unable to recover it.
00:37:45.848 [2024-11-18 12:06:11.449152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.848 [2024-11-18 12:06:11.449221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.848 qpair failed and we were unable to recover it.
00:37:45.848 [2024-11-18 12:06:11.449360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.848 [2024-11-18 12:06:11.449394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.848 qpair failed and we were unable to recover it.
00:37:45.848 [2024-11-18 12:06:11.449532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.848 [2024-11-18 12:06:11.449567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.848 qpair failed and we were unable to recover it.
00:37:45.848 [2024-11-18 12:06:11.449710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.848 [2024-11-18 12:06:11.449744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.848 qpair failed and we were unable to recover it.
00:37:45.848 [2024-11-18 12:06:11.449872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.848 [2024-11-18 12:06:11.449910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.848 qpair failed and we were unable to recover it.
00:37:45.848 [2024-11-18 12:06:11.450030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.848 [2024-11-18 12:06:11.450068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.848 qpair failed and we were unable to recover it.
00:37:45.848 [2024-11-18 12:06:11.450242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.848 [2024-11-18 12:06:11.450279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.848 qpair failed and we were unable to recover it.
00:37:45.848 [2024-11-18 12:06:11.450434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.848 [2024-11-18 12:06:11.450468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.848 qpair failed and we were unable to recover it.
00:37:45.848 [2024-11-18 12:06:11.450639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.848 [2024-11-18 12:06:11.450673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.848 qpair failed and we were unable to recover it.
00:37:45.848 [2024-11-18 12:06:11.450809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.848 [2024-11-18 12:06:11.450862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.848 qpair failed and we were unable to recover it.
00:37:45.848 [2024-11-18 12:06:11.451078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.848 [2024-11-18 12:06:11.451118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.848 qpair failed and we were unable to recover it.
00:37:45.848 [2024-11-18 12:06:11.451251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.848 [2024-11-18 12:06:11.451290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.848 qpair failed and we were unable to recover it.
00:37:45.848 [2024-11-18 12:06:11.451407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.849 [2024-11-18 12:06:11.451446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.849 qpair failed and we were unable to recover it.
00:37:45.849 [2024-11-18 12:06:11.451591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.849 [2024-11-18 12:06:11.451626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.849 qpair failed and we were unable to recover it.
00:37:45.849 [2024-11-18 12:06:11.451760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.849 [2024-11-18 12:06:11.451794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.849 qpair failed and we were unable to recover it.
00:37:45.849 [2024-11-18 12:06:11.451951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.849 [2024-11-18 12:06:11.451989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.849 qpair failed and we were unable to recover it.
00:37:45.849 [2024-11-18 12:06:11.452121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.849 [2024-11-18 12:06:11.452172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.849 qpair failed and we were unable to recover it.
00:37:45.849 [2024-11-18 12:06:11.452316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.849 [2024-11-18 12:06:11.452353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.849 qpair failed and we were unable to recover it.
00:37:45.849 [2024-11-18 12:06:11.452500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.849 [2024-11-18 12:06:11.452568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.849 qpair failed and we were unable to recover it.
00:37:45.849 [2024-11-18 12:06:11.452689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.849 [2024-11-18 12:06:11.452737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.849 qpair failed and we were unable to recover it.
00:37:45.849 [2024-11-18 12:06:11.452882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.849 [2024-11-18 12:06:11.452936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.849 qpair failed and we were unable to recover it.
00:37:45.849 [2024-11-18 12:06:11.453112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.849 [2024-11-18 12:06:11.453151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.849 qpair failed and we were unable to recover it.
00:37:45.849 [2024-11-18 12:06:11.453277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.849 [2024-11-18 12:06:11.453315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.849 qpair failed and we were unable to recover it.
00:37:45.849 [2024-11-18 12:06:11.453469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.849 [2024-11-18 12:06:11.453511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.849 qpair failed and we were unable to recover it.
00:37:45.849 [2024-11-18 12:06:11.453649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.849 [2024-11-18 12:06:11.453683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.849 qpair failed and we were unable to recover it.
00:37:45.849 [2024-11-18 12:06:11.453817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.849 [2024-11-18 12:06:11.453854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.849 qpair failed and we were unable to recover it.
00:37:45.849 [2024-11-18 12:06:11.453997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.849 [2024-11-18 12:06:11.454034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.849 qpair failed and we were unable to recover it.
00:37:45.849 [2024-11-18 12:06:11.454184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.849 [2024-11-18 12:06:11.454222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.849 qpair failed and we were unable to recover it.
00:37:45.849 [2024-11-18 12:06:11.454371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.849 [2024-11-18 12:06:11.454409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.849 qpair failed and we were unable to recover it.
00:37:45.849 [2024-11-18 12:06:11.454533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.849 [2024-11-18 12:06:11.454567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.849 qpair failed and we were unable to recover it.
00:37:45.849 [2024-11-18 12:06:11.454713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.849 [2024-11-18 12:06:11.454752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.849 qpair failed and we were unable to recover it.
00:37:45.849 [2024-11-18 12:06:11.454903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.849 [2024-11-18 12:06:11.454958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.849 qpair failed and we were unable to recover it.
00:37:45.849 [2024-11-18 12:06:11.455104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.849 [2024-11-18 12:06:11.455142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.849 qpair failed and we were unable to recover it.
00:37:45.849 [2024-11-18 12:06:11.455293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.849 [2024-11-18 12:06:11.455332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.849 qpair failed and we were unable to recover it.
00:37:45.849 [2024-11-18 12:06:11.455481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.849 [2024-11-18 12:06:11.455528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.849 qpair failed and we were unable to recover it.
00:37:45.849 [2024-11-18 12:06:11.455700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.849 [2024-11-18 12:06:11.455749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.849 qpair failed and we were unable to recover it.
00:37:45.849 [2024-11-18 12:06:11.455911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.849 [2024-11-18 12:06:11.455949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.849 qpair failed and we were unable to recover it.
00:37:45.849 [2024-11-18 12:06:11.456115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.849 [2024-11-18 12:06:11.456158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.849 qpair failed and we were unable to recover it.
00:37:45.849 [2024-11-18 12:06:11.456296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.849 [2024-11-18 12:06:11.456347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.849 qpair failed and we were unable to recover it.
00:37:45.849 [2024-11-18 12:06:11.456484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.849 [2024-11-18 12:06:11.456528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.849 qpair failed and we were unable to recover it.
00:37:45.849 [2024-11-18 12:06:11.456671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.849 [2024-11-18 12:06:11.456705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.849 qpair failed and we were unable to recover it.
00:37:45.849 [2024-11-18 12:06:11.456834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.849 [2024-11-18 12:06:11.456873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.849 qpair failed and we were unable to recover it.
00:37:45.849 [2024-11-18 12:06:11.457061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.849 [2024-11-18 12:06:11.457161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.849 qpair failed and we were unable to recover it.
00:37:45.849 [2024-11-18 12:06:11.457307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.849 [2024-11-18 12:06:11.457346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.849 qpair failed and we were unable to recover it.
00:37:45.849 [2024-11-18 12:06:11.457514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.849 [2024-11-18 12:06:11.457548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.849 qpair failed and we were unable to recover it.
00:37:45.849 [2024-11-18 12:06:11.457693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.849 [2024-11-18 12:06:11.457728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.849 qpair failed and we were unable to recover it.
00:37:45.849 [2024-11-18 12:06:11.457875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.849 [2024-11-18 12:06:11.457913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.849 qpair failed and we were unable to recover it.
00:37:45.849 [2024-11-18 12:06:11.458116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.849 [2024-11-18 12:06:11.458173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.849 qpair failed and we were unable to recover it.
00:37:45.849 [2024-11-18 12:06:11.458360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.849 [2024-11-18 12:06:11.458425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.850 qpair failed and we were unable to recover it.
00:37:45.850 [2024-11-18 12:06:11.458571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.850 [2024-11-18 12:06:11.458619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.850 qpair failed and we were unable to recover it.
00:37:45.850 [2024-11-18 12:06:11.458746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.850 [2024-11-18 12:06:11.458815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.850 qpair failed and we were unable to recover it.
00:37:45.850 [2024-11-18 12:06:11.459063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.850 [2024-11-18 12:06:11.459102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.850 qpair failed and we were unable to recover it.
00:37:45.850 [2024-11-18 12:06:11.459229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.850 [2024-11-18 12:06:11.459267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.850 qpair failed and we were unable to recover it.
00:37:45.850 [2024-11-18 12:06:11.459390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.850 [2024-11-18 12:06:11.459441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.850 qpair failed and we were unable to recover it.
00:37:45.850 [2024-11-18 12:06:11.459579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.850 [2024-11-18 12:06:11.459613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.850 qpair failed and we were unable to recover it.
00:37:45.850 [2024-11-18 12:06:11.459747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.850 [2024-11-18 12:06:11.459802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.850 qpair failed and we were unable to recover it.
00:37:45.850 [2024-11-18 12:06:11.459983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.850 [2024-11-18 12:06:11.460022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.850 qpair failed and we were unable to recover it.
00:37:45.850 [2024-11-18 12:06:11.460146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.850 [2024-11-18 12:06:11.460183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.850 qpair failed and we were unable to recover it.
00:37:45.850 [2024-11-18 12:06:11.460319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.850 [2024-11-18 12:06:11.460358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.850 qpair failed and we were unable to recover it.
00:37:45.850 [2024-11-18 12:06:11.460500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.850 [2024-11-18 12:06:11.460537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.850 qpair failed and we were unable to recover it.
00:37:45.850 [2024-11-18 12:06:11.460679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.850 [2024-11-18 12:06:11.460714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.850 qpair failed and we were unable to recover it.
00:37:45.850 [2024-11-18 12:06:11.460903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.850 [2024-11-18 12:06:11.460955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.850 qpair failed and we were unable to recover it.
00:37:45.850 [2024-11-18 12:06:11.461165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.850 [2024-11-18 12:06:11.461223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.850 qpair failed and we were unable to recover it.
00:37:45.850 [2024-11-18 12:06:11.461361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.850 [2024-11-18 12:06:11.461395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.850 qpair failed and we were unable to recover it.
00:37:45.850 [2024-11-18 12:06:11.461603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.850 [2024-11-18 12:06:11.461639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.850 qpair failed and we were unable to recover it.
00:37:45.850 [2024-11-18 12:06:11.461780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.850 [2024-11-18 12:06:11.461833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.850 qpair failed and we were unable to recover it.
00:37:45.850 [2024-11-18 12:06:11.462073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.850 [2024-11-18 12:06:11.462126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.850 qpair failed and we were unable to recover it.
00:37:45.850 [2024-11-18 12:06:11.462372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.850 [2024-11-18 12:06:11.462409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.850 qpair failed and we were unable to recover it.
00:37:45.850 [2024-11-18 12:06:11.462514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.850 [2024-11-18 12:06:11.462549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.850 qpair failed and we were unable to recover it.
00:37:45.850 [2024-11-18 12:06:11.462681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.850 [2024-11-18 12:06:11.462715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.850 qpair failed and we were unable to recover it.
00:37:45.850 [2024-11-18 12:06:11.462929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.850 [2024-11-18 12:06:11.462997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.850 qpair failed and we were unable to recover it.
00:37:45.850 [2024-11-18 12:06:11.463159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.850 [2024-11-18 12:06:11.463216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.850 qpair failed and we were unable to recover it.
00:37:45.850 [2024-11-18 12:06:11.463351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.850 [2024-11-18 12:06:11.463390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.850 qpair failed and we were unable to recover it.
00:37:45.850 [2024-11-18 12:06:11.463520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.850 [2024-11-18 12:06:11.463557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.850 qpair failed and we were unable to recover it.
00:37:45.850 [2024-11-18 12:06:11.463731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.850 [2024-11-18 12:06:11.463765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.850 qpair failed and we were unable to recover it.
00:37:45.850 [2024-11-18 12:06:11.463931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.850 [2024-11-18 12:06:11.463984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.850 qpair failed and we were unable to recover it.
00:37:45.850 [2024-11-18 12:06:11.464136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.850 [2024-11-18 12:06:11.464206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.850 qpair failed and we were unable to recover it.
00:37:45.850 [2024-11-18 12:06:11.464353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.850 [2024-11-18 12:06:11.464396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.850 qpair failed and we were unable to recover it.
00:37:45.850 [2024-11-18 12:06:11.464536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.850 [2024-11-18 12:06:11.464572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.850 qpair failed and we were unable to recover it.
00:37:45.850 [2024-11-18 12:06:11.464733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.850 [2024-11-18 12:06:11.464787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.850 qpair failed and we were unable to recover it.
00:37:45.850 [2024-11-18 12:06:11.464941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.850 [2024-11-18 12:06:11.464976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.850 qpair failed and we were unable to recover it.
00:37:45.851 [2024-11-18 12:06:11.465103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.851 [2024-11-18 12:06:11.465148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.851 qpair failed and we were unable to recover it.
00:37:45.851 [2024-11-18 12:06:11.465253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.851 [2024-11-18 12:06:11.465288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.851 qpair failed and we were unable to recover it.
00:37:45.851 [2024-11-18 12:06:11.465427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.851 [2024-11-18 12:06:11.465461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.851 qpair failed and we were unable to recover it.
00:37:45.851 [2024-11-18 12:06:11.465607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.851 [2024-11-18 12:06:11.465642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.851 qpair failed and we were unable to recover it.
00:37:45.851 [2024-11-18 12:06:11.465754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.851 [2024-11-18 12:06:11.465791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.851 qpair failed and we were unable to recover it.
00:37:45.851 [2024-11-18 12:06:11.465933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.851 [2024-11-18 12:06:11.465968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.851 qpair failed and we were unable to recover it.
00:37:45.851 [2024-11-18 12:06:11.466156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.851 [2024-11-18 12:06:11.466213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.851 qpair failed and we were unable to recover it.
00:37:45.851 [2024-11-18 12:06:11.466332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.851 [2024-11-18 12:06:11.466370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.851 qpair failed and we were unable to recover it.
00:37:45.851 [2024-11-18 12:06:11.466519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.851 [2024-11-18 12:06:11.466586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.851 qpair failed and we were unable to recover it.
00:37:45.851 [2024-11-18 12:06:11.466725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.851 [2024-11-18 12:06:11.466778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.851 qpair failed and we were unable to recover it.
00:37:45.851 [2024-11-18 12:06:11.466893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.851 [2024-11-18 12:06:11.466927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.851 qpair failed and we were unable to recover it.
00:37:45.851 [2024-11-18 12:06:11.467084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.851 [2024-11-18 12:06:11.467136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.851 qpair failed and we were unable to recover it.
00:37:45.851 [2024-11-18 12:06:11.467272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.851 [2024-11-18 12:06:11.467307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.851 qpair failed and we were unable to recover it.
00:37:45.851 [2024-11-18 12:06:11.467422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.851 [2024-11-18 12:06:11.467471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.851 qpair failed and we were unable to recover it.
00:37:45.851 [2024-11-18 12:06:11.467677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.851 [2024-11-18 12:06:11.467716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.851 qpair failed and we were unable to recover it.
00:37:45.851 [2024-11-18 12:06:11.467892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.851 [2024-11-18 12:06:11.467930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.851 qpair failed and we were unable to recover it.
00:37:45.851 [2024-11-18 12:06:11.468077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.851 [2024-11-18 12:06:11.468114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.851 qpair failed and we were unable to recover it.
00:37:45.851 [2024-11-18 12:06:11.468261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.851 [2024-11-18 12:06:11.468300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.851 qpair failed and we were unable to recover it.
00:37:45.851 [2024-11-18 12:06:11.468480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.851 [2024-11-18 12:06:11.468527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.851 qpair failed and we were unable to recover it.
00:37:45.851 [2024-11-18 12:06:11.468656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.851 [2024-11-18 12:06:11.468711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.851 qpair failed and we were unable to recover it.
00:37:45.851 [2024-11-18 12:06:11.468847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.851 [2024-11-18 12:06:11.468881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.851 qpair failed and we were unable to recover it. 00:37:45.851 [2024-11-18 12:06:11.469014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.851 [2024-11-18 12:06:11.469049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.851 qpair failed and we were unable to recover it. 00:37:45.851 [2024-11-18 12:06:11.469153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.851 [2024-11-18 12:06:11.469187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.851 qpair failed and we were unable to recover it. 00:37:45.851 [2024-11-18 12:06:11.469305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.851 [2024-11-18 12:06:11.469340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.851 qpair failed and we were unable to recover it. 00:37:45.851 [2024-11-18 12:06:11.469481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.851 [2024-11-18 12:06:11.469534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.851 qpair failed and we were unable to recover it. 
00:37:45.851 [2024-11-18 12:06:11.469755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.851 [2024-11-18 12:06:11.469789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.851 qpair failed and we were unable to recover it. 00:37:45.851 [2024-11-18 12:06:11.469932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.851 [2024-11-18 12:06:11.469966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.851 qpair failed and we were unable to recover it. 00:37:45.851 [2024-11-18 12:06:11.470131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.851 [2024-11-18 12:06:11.470169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.851 qpair failed and we were unable to recover it. 00:37:45.851 [2024-11-18 12:06:11.470318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.851 [2024-11-18 12:06:11.470356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.851 qpair failed and we were unable to recover it. 00:37:45.851 [2024-11-18 12:06:11.470505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.851 [2024-11-18 12:06:11.470548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.851 qpair failed and we were unable to recover it. 
00:37:45.851 [2024-11-18 12:06:11.470796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.851 [2024-11-18 12:06:11.470835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.851 qpair failed and we were unable to recover it. 00:37:45.851 [2024-11-18 12:06:11.470978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.851 [2024-11-18 12:06:11.471015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.851 qpair failed and we were unable to recover it. 00:37:45.851 [2024-11-18 12:06:11.471185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.851 [2024-11-18 12:06:11.471244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.851 qpair failed and we were unable to recover it. 00:37:45.851 [2024-11-18 12:06:11.471378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.851 [2024-11-18 12:06:11.471415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.851 qpair failed and we were unable to recover it. 00:37:45.851 [2024-11-18 12:06:11.471569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.851 [2024-11-18 12:06:11.471604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.851 qpair failed and we were unable to recover it. 
00:37:45.851 [2024-11-18 12:06:11.471749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.851 [2024-11-18 12:06:11.471802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.851 qpair failed and we were unable to recover it. 00:37:45.851 [2024-11-18 12:06:11.472065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.851 [2024-11-18 12:06:11.472134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.851 qpair failed and we were unable to recover it. 00:37:45.851 [2024-11-18 12:06:11.472255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.852 [2024-11-18 12:06:11.472307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.852 qpair failed and we were unable to recover it. 00:37:45.852 [2024-11-18 12:06:11.472460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.852 [2024-11-18 12:06:11.472513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.852 qpair failed and we were unable to recover it. 00:37:45.852 [2024-11-18 12:06:11.472640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.852 [2024-11-18 12:06:11.472674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.852 qpair failed and we were unable to recover it. 
00:37:45.852 [2024-11-18 12:06:11.472829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.852 [2024-11-18 12:06:11.472894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.852 qpair failed and we were unable to recover it. 00:37:45.852 [2024-11-18 12:06:11.473126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.852 [2024-11-18 12:06:11.473166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.852 qpair failed and we were unable to recover it. 00:37:45.852 [2024-11-18 12:06:11.473400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.852 [2024-11-18 12:06:11.473480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.852 qpair failed and we were unable to recover it. 00:37:45.852 [2024-11-18 12:06:11.473735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.852 [2024-11-18 12:06:11.473788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.852 qpair failed and we were unable to recover it. 00:37:45.852 [2024-11-18 12:06:11.473926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.852 [2024-11-18 12:06:11.473977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.852 qpair failed and we were unable to recover it. 
00:37:45.852 [2024-11-18 12:06:11.474117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.852 [2024-11-18 12:06:11.474155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.852 qpair failed and we were unable to recover it. 00:37:45.852 [2024-11-18 12:06:11.474277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.852 [2024-11-18 12:06:11.474314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.852 qpair failed and we were unable to recover it. 00:37:45.852 [2024-11-18 12:06:11.474497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.852 [2024-11-18 12:06:11.474546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.852 qpair failed and we were unable to recover it. 00:37:45.852 [2024-11-18 12:06:11.474677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.852 [2024-11-18 12:06:11.474725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.852 qpair failed and we were unable to recover it. 00:37:45.852 [2024-11-18 12:06:11.474882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.852 [2024-11-18 12:06:11.474921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.852 qpair failed and we were unable to recover it. 
00:37:45.852 [2024-11-18 12:06:11.475066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.852 [2024-11-18 12:06:11.475100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.852 qpair failed and we were unable to recover it. 00:37:45.852 [2024-11-18 12:06:11.475293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.852 [2024-11-18 12:06:11.475380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.852 qpair failed and we were unable to recover it. 00:37:45.852 [2024-11-18 12:06:11.475540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.852 [2024-11-18 12:06:11.475575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.852 qpair failed and we were unable to recover it. 00:37:45.852 [2024-11-18 12:06:11.475721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.852 [2024-11-18 12:06:11.475756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.852 qpair failed and we were unable to recover it. 00:37:45.852 [2024-11-18 12:06:11.475906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.852 [2024-11-18 12:06:11.475945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.852 qpair failed and we were unable to recover it. 
00:37:45.852 [2024-11-18 12:06:11.476092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.852 [2024-11-18 12:06:11.476129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.852 qpair failed and we were unable to recover it. 00:37:45.852 [2024-11-18 12:06:11.476246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.852 [2024-11-18 12:06:11.476284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.852 qpair failed and we were unable to recover it. 00:37:45.852 [2024-11-18 12:06:11.476474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.852 [2024-11-18 12:06:11.476521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.852 qpair failed and we were unable to recover it. 00:37:45.852 [2024-11-18 12:06:11.476669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.852 [2024-11-18 12:06:11.476717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.852 qpair failed and we were unable to recover it. 00:37:45.852 [2024-11-18 12:06:11.476879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.852 [2024-11-18 12:06:11.476939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.852 qpair failed and we were unable to recover it. 
00:37:45.852 [2024-11-18 12:06:11.477176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.852 [2024-11-18 12:06:11.477241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.852 qpair failed and we were unable to recover it. 00:37:45.852 [2024-11-18 12:06:11.477448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.852 [2024-11-18 12:06:11.477486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.852 qpair failed and we were unable to recover it. 00:37:45.852 [2024-11-18 12:06:11.477655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.852 [2024-11-18 12:06:11.477702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.852 qpair failed and we were unable to recover it. 00:37:45.852 [2024-11-18 12:06:11.477891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.852 [2024-11-18 12:06:11.477927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.852 qpair failed and we were unable to recover it. 00:37:45.852 [2024-11-18 12:06:11.478083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.852 [2024-11-18 12:06:11.478121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.852 qpair failed and we were unable to recover it. 
00:37:45.852 [2024-11-18 12:06:11.478271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.852 [2024-11-18 12:06:11.478310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.852 qpair failed and we were unable to recover it. 00:37:45.852 [2024-11-18 12:06:11.478538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.852 [2024-11-18 12:06:11.478589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.852 qpair failed and we were unable to recover it. 00:37:45.852 [2024-11-18 12:06:11.478754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.852 [2024-11-18 12:06:11.478788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.852 qpair failed and we were unable to recover it. 00:37:45.852 [2024-11-18 12:06:11.478909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.852 [2024-11-18 12:06:11.478947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.852 qpair failed and we were unable to recover it. 00:37:45.852 [2024-11-18 12:06:11.479095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.852 [2024-11-18 12:06:11.479134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.852 qpair failed and we were unable to recover it. 
00:37:45.852 [2024-11-18 12:06:11.479314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.852 [2024-11-18 12:06:11.479352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.852 qpair failed and we were unable to recover it. 00:37:45.852 [2024-11-18 12:06:11.479521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.852 [2024-11-18 12:06:11.479575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.852 qpair failed and we were unable to recover it. 00:37:45.852 [2024-11-18 12:06:11.479685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.852 [2024-11-18 12:06:11.479719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.852 qpair failed and we were unable to recover it. 00:37:45.852 [2024-11-18 12:06:11.479901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.852 [2024-11-18 12:06:11.479935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.852 qpair failed and we were unable to recover it. 00:37:45.852 [2024-11-18 12:06:11.480086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.852 [2024-11-18 12:06:11.480120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.853 qpair failed and we were unable to recover it. 
00:37:45.853 [2024-11-18 12:06:11.480360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.853 [2024-11-18 12:06:11.480398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.853 qpair failed and we were unable to recover it. 00:37:45.853 [2024-11-18 12:06:11.480560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.853 [2024-11-18 12:06:11.480599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.853 qpair failed and we were unable to recover it. 00:37:45.853 [2024-11-18 12:06:11.480718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.853 [2024-11-18 12:06:11.480751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.853 qpair failed and we were unable to recover it. 00:37:45.853 [2024-11-18 12:06:11.480857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.853 [2024-11-18 12:06:11.480890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.853 qpair failed and we were unable to recover it. 00:37:45.853 [2024-11-18 12:06:11.481072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.853 [2024-11-18 12:06:11.481109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.853 qpair failed and we were unable to recover it. 
00:37:45.853 [2024-11-18 12:06:11.481249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.853 [2024-11-18 12:06:11.481286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.853 qpair failed and we were unable to recover it. 00:37:45.853 [2024-11-18 12:06:11.481462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.853 [2024-11-18 12:06:11.481542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.853 qpair failed and we were unable to recover it. 00:37:45.853 [2024-11-18 12:06:11.481665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.853 [2024-11-18 12:06:11.481714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.853 qpair failed and we were unable to recover it. 00:37:45.853 [2024-11-18 12:06:11.481852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.853 [2024-11-18 12:06:11.481892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.853 qpair failed and we were unable to recover it. 00:37:45.853 [2024-11-18 12:06:11.482152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.853 [2024-11-18 12:06:11.482213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.853 qpair failed and we were unable to recover it. 
00:37:45.853 [2024-11-18 12:06:11.482358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.853 [2024-11-18 12:06:11.482396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.853 qpair failed and we were unable to recover it. 00:37:45.853 [2024-11-18 12:06:11.482531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.853 [2024-11-18 12:06:11.482565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.853 qpair failed and we were unable to recover it. 00:37:45.853 [2024-11-18 12:06:11.482728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.853 [2024-11-18 12:06:11.482761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.853 qpair failed and we were unable to recover it. 00:37:45.853 [2024-11-18 12:06:11.482921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.853 [2024-11-18 12:06:11.482959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.853 qpair failed and we were unable to recover it. 00:37:45.853 [2024-11-18 12:06:11.483164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.853 [2024-11-18 12:06:11.483202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.853 qpair failed and we were unable to recover it. 
00:37:45.853 [2024-11-18 12:06:11.483374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.853 [2024-11-18 12:06:11.483427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.853 qpair failed and we were unable to recover it. 00:37:45.853 [2024-11-18 12:06:11.483582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.853 [2024-11-18 12:06:11.483630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.853 qpair failed and we were unable to recover it. 00:37:45.853 [2024-11-18 12:06:11.483791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.853 [2024-11-18 12:06:11.483839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.853 qpair failed and we were unable to recover it. 00:37:45.853 [2024-11-18 12:06:11.484058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.853 [2024-11-18 12:06:11.484132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.853 qpair failed and we were unable to recover it. 00:37:45.853 [2024-11-18 12:06:11.484311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.853 [2024-11-18 12:06:11.484349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.853 qpair failed and we were unable to recover it. 
00:37:45.853 [2024-11-18 12:06:11.484488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.853 [2024-11-18 12:06:11.484559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.853 qpair failed and we were unable to recover it. 00:37:45.853 [2024-11-18 12:06:11.484671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.853 [2024-11-18 12:06:11.484705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.853 qpair failed and we were unable to recover it. 00:37:45.853 [2024-11-18 12:06:11.484812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.853 [2024-11-18 12:06:11.484847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.853 qpair failed and we were unable to recover it. 00:37:45.853 [2024-11-18 12:06:11.485027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.853 [2024-11-18 12:06:11.485066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.853 qpair failed and we were unable to recover it. 00:37:45.853 [2024-11-18 12:06:11.485237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.853 [2024-11-18 12:06:11.485275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.853 qpair failed and we were unable to recover it. 
00:37:45.853 [2024-11-18 12:06:11.485451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.853 [2024-11-18 12:06:11.485498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.853 qpair failed and we were unable to recover it. 00:37:45.853 [2024-11-18 12:06:11.485644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.853 [2024-11-18 12:06:11.485693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.853 qpair failed and we were unable to recover it. 00:37:45.853 [2024-11-18 12:06:11.485884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.853 [2024-11-18 12:06:11.485933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.853 qpair failed and we were unable to recover it. 00:37:45.853 [2024-11-18 12:06:11.486083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.853 [2024-11-18 12:06:11.486141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.853 qpair failed and we were unable to recover it. 00:37:45.853 [2024-11-18 12:06:11.486310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.853 [2024-11-18 12:06:11.486364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.853 qpair failed and we were unable to recover it. 
00:37:45.853 [2024-11-18 12:06:11.486529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.853 [2024-11-18 12:06:11.486564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.853 qpair failed and we were unable to recover it. 00:37:45.853 [2024-11-18 12:06:11.486705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.853 [2024-11-18 12:06:11.486759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.853 qpair failed and we were unable to recover it. 00:37:45.853 [2024-11-18 12:06:11.486984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.853 [2024-11-18 12:06:11.487042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.853 qpair failed and we were unable to recover it. 00:37:45.853 [2024-11-18 12:06:11.487167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.853 [2024-11-18 12:06:11.487205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.853 qpair failed and we were unable to recover it. 00:37:45.853 [2024-11-18 12:06:11.487361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.853 [2024-11-18 12:06:11.487394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.853 qpair failed and we were unable to recover it. 
00:37:45.853 [2024-11-18 12:06:11.487520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.853 [2024-11-18 12:06:11.487554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.853 qpair failed and we were unable to recover it. 00:37:45.853 [2024-11-18 12:06:11.487663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.853 [2024-11-18 12:06:11.487697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.853 qpair failed and we were unable to recover it. 00:37:45.853 [2024-11-18 12:06:11.487885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.854 [2024-11-18 12:06:11.487922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.854 qpair failed and we were unable to recover it. 00:37:45.854 [2024-11-18 12:06:11.488035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.854 [2024-11-18 12:06:11.488073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.854 qpair failed and we were unable to recover it. 00:37:45.854 [2024-11-18 12:06:11.488188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.854 [2024-11-18 12:06:11.488226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.854 qpair failed and we were unable to recover it. 
00:37:45.854 [2024-11-18 12:06:11.488358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.854 [2024-11-18 12:06:11.488394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.854 qpair failed and we were unable to recover it. 00:37:45.854 [2024-11-18 12:06:11.488548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.854 [2024-11-18 12:06:11.488602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.854 qpair failed and we were unable to recover it. 00:37:45.854 [2024-11-18 12:06:11.488780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.854 [2024-11-18 12:06:11.488834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.854 qpair failed and we were unable to recover it. 00:37:45.854 [2024-11-18 12:06:11.489058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.854 [2024-11-18 12:06:11.489114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.854 qpair failed and we were unable to recover it. 00:37:45.854 [2024-11-18 12:06:11.489312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.854 [2024-11-18 12:06:11.489375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.854 qpair failed and we were unable to recover it. 
00:37:45.854 [2024-11-18 12:06:11.489525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.854 [2024-11-18 12:06:11.489575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.854 qpair failed and we were unable to recover it. 00:37:45.854 [2024-11-18 12:06:11.489685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.854 [2024-11-18 12:06:11.489737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.854 qpair failed and we were unable to recover it. 00:37:45.854 [2024-11-18 12:06:11.489962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.854 [2024-11-18 12:06:11.490019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.854 qpair failed and we were unable to recover it. 00:37:45.854 [2024-11-18 12:06:11.490224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.854 [2024-11-18 12:06:11.490290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.854 qpair failed and we were unable to recover it. 00:37:45.854 [2024-11-18 12:06:11.490433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.854 [2024-11-18 12:06:11.490470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.854 qpair failed and we were unable to recover it. 
00:37:45.854 [2024-11-18 12:06:11.490623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.854 [2024-11-18 12:06:11.490659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.854 qpair failed and we were unable to recover it. 00:37:45.854 [2024-11-18 12:06:11.490788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.854 [2024-11-18 12:06:11.490840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.854 qpair failed and we were unable to recover it. 00:37:45.854 [2024-11-18 12:06:11.490996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.854 [2024-11-18 12:06:11.491048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.854 qpair failed and we were unable to recover it. 00:37:45.854 [2024-11-18 12:06:11.491202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.854 [2024-11-18 12:06:11.491256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.854 qpair failed and we were unable to recover it. 00:37:45.854 [2024-11-18 12:06:11.491407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.854 [2024-11-18 12:06:11.491454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.854 qpair failed and we were unable to recover it. 
00:37:45.854 [2024-11-18 12:06:11.491642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.854 [2024-11-18 12:06:11.491690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.854 qpair failed and we were unable to recover it. 00:37:45.854 [2024-11-18 12:06:11.491836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.854 [2024-11-18 12:06:11.491891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.854 qpair failed and we were unable to recover it. 00:37:45.854 [2024-11-18 12:06:11.492022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.854 [2024-11-18 12:06:11.492074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.854 qpair failed and we were unable to recover it. 00:37:45.854 [2024-11-18 12:06:11.492332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.854 [2024-11-18 12:06:11.492371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.854 qpair failed and we were unable to recover it. 00:37:45.854 [2024-11-18 12:06:11.492591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.854 [2024-11-18 12:06:11.492626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.854 qpair failed and we were unable to recover it. 
00:37:45.854 [2024-11-18 12:06:11.492758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.854 [2024-11-18 12:06:11.492792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.854 qpair failed and we were unable to recover it. 00:37:45.854 [2024-11-18 12:06:11.492891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.854 [2024-11-18 12:06:11.492924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.854 qpair failed and we were unable to recover it. 00:37:45.854 [2024-11-18 12:06:11.493094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.854 [2024-11-18 12:06:11.493132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.854 qpair failed and we were unable to recover it. 00:37:45.854 [2024-11-18 12:06:11.493257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.854 [2024-11-18 12:06:11.493308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.854 qpair failed and we were unable to recover it. 00:37:45.854 [2024-11-18 12:06:11.493474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.854 [2024-11-18 12:06:11.493553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.854 qpair failed and we were unable to recover it. 
00:37:45.854 [2024-11-18 12:06:11.493698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.854 [2024-11-18 12:06:11.493735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.854 qpair failed and we were unable to recover it. 00:37:45.854 [2024-11-18 12:06:11.493962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.854 [2024-11-18 12:06:11.493999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.854 qpair failed and we were unable to recover it. 00:37:45.854 [2024-11-18 12:06:11.494175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.854 [2024-11-18 12:06:11.494239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.854 qpair failed and we were unable to recover it. 00:37:45.854 [2024-11-18 12:06:11.494409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.854 [2024-11-18 12:06:11.494443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.854 qpair failed and we were unable to recover it. 00:37:45.854 [2024-11-18 12:06:11.494607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.854 [2024-11-18 12:06:11.494643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.854 qpair failed and we were unable to recover it. 
00:37:45.854 [2024-11-18 12:06:11.494822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.854 [2024-11-18 12:06:11.494859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.854 qpair failed and we were unable to recover it. 00:37:45.854 [2024-11-18 12:06:11.495001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.854 [2024-11-18 12:06:11.495038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.854 qpair failed and we were unable to recover it. 00:37:45.854 [2024-11-18 12:06:11.495217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.854 [2024-11-18 12:06:11.495255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.854 qpair failed and we were unable to recover it. 00:37:45.854 [2024-11-18 12:06:11.495371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.854 [2024-11-18 12:06:11.495408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.854 qpair failed and we were unable to recover it. 00:37:45.854 [2024-11-18 12:06:11.495572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.854 [2024-11-18 12:06:11.495606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.854 qpair failed and we were unable to recover it. 
00:37:45.855 [2024-11-18 12:06:11.495716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.855 [2024-11-18 12:06:11.495750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.855 qpair failed and we were unable to recover it. 00:37:45.855 [2024-11-18 12:06:11.495861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.855 [2024-11-18 12:06:11.495894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.855 qpair failed and we were unable to recover it. 00:37:45.855 [2024-11-18 12:06:11.496048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.855 [2024-11-18 12:06:11.496086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.855 qpair failed and we were unable to recover it. 00:37:45.855 [2024-11-18 12:06:11.496233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.855 [2024-11-18 12:06:11.496270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.855 qpair failed and we were unable to recover it. 00:37:45.855 [2024-11-18 12:06:11.496430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.855 [2024-11-18 12:06:11.496464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.855 qpair failed and we were unable to recover it. 
00:37:45.855 [2024-11-18 12:06:11.496588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.855 [2024-11-18 12:06:11.496622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.855 qpair failed and we were unable to recover it. 00:37:45.855 [2024-11-18 12:06:11.496755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.855 [2024-11-18 12:06:11.496794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.855 qpair failed and we were unable to recover it. 00:37:45.855 [2024-11-18 12:06:11.496943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.855 [2024-11-18 12:06:11.496981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.855 qpair failed and we were unable to recover it. 00:37:45.855 [2024-11-18 12:06:11.497126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.855 [2024-11-18 12:06:11.497163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.855 qpair failed and we were unable to recover it. 00:37:45.855 [2024-11-18 12:06:11.497394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.855 [2024-11-18 12:06:11.497431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.855 qpair failed and we were unable to recover it. 
00:37:45.855 [2024-11-18 12:06:11.497571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.855 [2024-11-18 12:06:11.497605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.855 qpair failed and we were unable to recover it. 00:37:45.855 [2024-11-18 12:06:11.497741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.855 [2024-11-18 12:06:11.497792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.855 qpair failed and we were unable to recover it. 00:37:45.855 [2024-11-18 12:06:11.497915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.855 [2024-11-18 12:06:11.497949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.855 qpair failed and we were unable to recover it. 00:37:45.855 [2024-11-18 12:06:11.498052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.855 [2024-11-18 12:06:11.498103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.855 qpair failed and we were unable to recover it. 00:37:45.855 [2024-11-18 12:06:11.498252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.855 [2024-11-18 12:06:11.498289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.855 qpair failed and we were unable to recover it. 
00:37:45.855 [2024-11-18 12:06:11.498417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.855 [2024-11-18 12:06:11.498451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.855 qpair failed and we were unable to recover it. 00:37:45.855 [2024-11-18 12:06:11.498612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.855 [2024-11-18 12:06:11.498660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.855 qpair failed and we were unable to recover it. 00:37:45.855 [2024-11-18 12:06:11.498828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.855 [2024-11-18 12:06:11.498896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.855 qpair failed and we were unable to recover it. 00:37:45.855 [2024-11-18 12:06:11.499079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.855 [2024-11-18 12:06:11.499119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.855 qpair failed and we were unable to recover it. 00:37:45.855 [2024-11-18 12:06:11.499268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.855 [2024-11-18 12:06:11.499309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.855 qpair failed and we were unable to recover it. 
00:37:45.855 [2024-11-18 12:06:11.499476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.855 [2024-11-18 12:06:11.499538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.855 qpair failed and we were unable to recover it. 00:37:45.855 [2024-11-18 12:06:11.499670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.855 [2024-11-18 12:06:11.499718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.855 qpair failed and we were unable to recover it. 00:37:45.855 [2024-11-18 12:06:11.499883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.855 [2024-11-18 12:06:11.499922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.855 qpair failed and we were unable to recover it. 00:37:45.855 [2024-11-18 12:06:11.500065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.855 [2024-11-18 12:06:11.500102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.855 qpair failed and we were unable to recover it. 00:37:45.855 [2024-11-18 12:06:11.500250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.855 [2024-11-18 12:06:11.500287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.855 qpair failed and we were unable to recover it. 
00:37:45.855 [2024-11-18 12:06:11.500472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.855 [2024-11-18 12:06:11.500521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.855 qpair failed and we were unable to recover it. 00:37:45.855 [2024-11-18 12:06:11.500628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.855 [2024-11-18 12:06:11.500662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.855 qpair failed and we were unable to recover it. 00:37:45.855 [2024-11-18 12:06:11.500792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.855 [2024-11-18 12:06:11.500825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.855 qpair failed and we were unable to recover it. 00:37:45.855 [2024-11-18 12:06:11.501044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.855 [2024-11-18 12:06:11.501082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.855 qpair failed and we were unable to recover it. 00:37:45.855 [2024-11-18 12:06:11.501259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.855 [2024-11-18 12:06:11.501297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.855 qpair failed and we were unable to recover it. 
00:37:45.855 [2024-11-18 12:06:11.501512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.855 [2024-11-18 12:06:11.501549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.855 qpair failed and we were unable to recover it. 00:37:45.855 [2024-11-18 12:06:11.501674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.855 [2024-11-18 12:06:11.501714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.855 qpair failed and we were unable to recover it. 00:37:45.855 [2024-11-18 12:06:11.501891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.855 [2024-11-18 12:06:11.501944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.855 qpair failed and we were unable to recover it. 00:37:45.856 [2024-11-18 12:06:11.502234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.856 [2024-11-18 12:06:11.502315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.856 qpair failed and we were unable to recover it. 00:37:45.856 [2024-11-18 12:06:11.502454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.856 [2024-11-18 12:06:11.502504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.856 qpair failed and we were unable to recover it. 
00:37:45.856 [2024-11-18 12:06:11.502672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.856 [2024-11-18 12:06:11.502707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.856 qpair failed and we were unable to recover it. 00:37:45.856 [2024-11-18 12:06:11.502816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.856 [2024-11-18 12:06:11.502850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.856 qpair failed and we were unable to recover it. 00:37:45.856 [2024-11-18 12:06:11.502981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.856 [2024-11-18 12:06:11.503016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.856 qpair failed and we were unable to recover it. 00:37:45.856 [2024-11-18 12:06:11.503179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.856 [2024-11-18 12:06:11.503237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.856 qpair failed and we were unable to recover it. 00:37:45.856 [2024-11-18 12:06:11.503359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.856 [2024-11-18 12:06:11.503397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.856 qpair failed and we were unable to recover it. 
00:37:45.856 [2024-11-18 12:06:11.503514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.856 [2024-11-18 12:06:11.503565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.856 qpair failed and we were unable to recover it. 00:37:45.856 [2024-11-18 12:06:11.503702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.856 [2024-11-18 12:06:11.503739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.856 qpair failed and we were unable to recover it. 00:37:45.856 [2024-11-18 12:06:11.503900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.856 [2024-11-18 12:06:11.503943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.856 qpair failed and we were unable to recover it. 00:37:45.856 [2024-11-18 12:06:11.504136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.856 [2024-11-18 12:06:11.504177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.856 qpair failed and we were unable to recover it. 00:37:45.856 [2024-11-18 12:06:11.504296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.856 [2024-11-18 12:06:11.504346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.856 qpair failed and we were unable to recover it. 
00:37:45.856 [2024-11-18 12:06:11.504475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.856 [2024-11-18 12:06:11.504517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.856 qpair failed and we were unable to recover it. 00:37:45.856 [2024-11-18 12:06:11.504662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.856 [2024-11-18 12:06:11.504701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.856 qpair failed and we were unable to recover it. 00:37:45.856 [2024-11-18 12:06:11.504857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.856 [2024-11-18 12:06:11.504897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.856 qpair failed and we were unable to recover it. 00:37:45.856 [2024-11-18 12:06:11.505069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.856 [2024-11-18 12:06:11.505106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.856 qpair failed and we were unable to recover it. 00:37:45.856 [2024-11-18 12:06:11.505225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.856 [2024-11-18 12:06:11.505262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.856 qpair failed and we were unable to recover it. 
00:37:45.856 [2024-11-18 12:06:11.505409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.856 [2024-11-18 12:06:11.505447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.856 qpair failed and we were unable to recover it. 00:37:45.856 [2024-11-18 12:06:11.505624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.856 [2024-11-18 12:06:11.505672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.856 qpair failed and we were unable to recover it. 00:37:45.856 [2024-11-18 12:06:11.505833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.856 [2024-11-18 12:06:11.505887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.856 qpair failed and we were unable to recover it. 00:37:45.856 [2024-11-18 12:06:11.506087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.856 [2024-11-18 12:06:11.506148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.856 qpair failed and we were unable to recover it. 00:37:45.856 [2024-11-18 12:06:11.506285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.856 [2024-11-18 12:06:11.506336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.856 qpair failed and we were unable to recover it. 
00:37:45.856 [2024-11-18 12:06:11.506486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.856 [2024-11-18 12:06:11.506546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.856 qpair failed and we were unable to recover it. 00:37:45.856 [2024-11-18 12:06:11.506705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.856 [2024-11-18 12:06:11.506738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.856 qpair failed and we were unable to recover it. 00:37:45.856 [2024-11-18 12:06:11.506971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.856 [2024-11-18 12:06:11.507034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.856 qpair failed and we were unable to recover it. 00:37:45.856 [2024-11-18 12:06:11.507229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.856 [2024-11-18 12:06:11.507288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.856 qpair failed and we were unable to recover it. 00:37:45.856 [2024-11-18 12:06:11.507466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.856 [2024-11-18 12:06:11.507510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.856 qpair failed and we were unable to recover it. 
00:37:45.856 [2024-11-18 12:06:11.507654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.856 [2024-11-18 12:06:11.507688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.856 qpair failed and we were unable to recover it. 00:37:45.856 [2024-11-18 12:06:11.507787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.856 [2024-11-18 12:06:11.507820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.856 qpair failed and we were unable to recover it. 00:37:45.856 [2024-11-18 12:06:11.507987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.856 [2024-11-18 12:06:11.508054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.856 qpair failed and we were unable to recover it. 00:37:45.856 [2024-11-18 12:06:11.508195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.856 [2024-11-18 12:06:11.508250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.856 qpair failed and we were unable to recover it. 00:37:45.856 [2024-11-18 12:06:11.508391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.856 [2024-11-18 12:06:11.508426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.856 qpair failed and we were unable to recover it. 
00:37:45.856 [2024-11-18 12:06:11.508607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.856 [2024-11-18 12:06:11.508661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.856 qpair failed and we were unable to recover it. 00:37:45.856 [2024-11-18 12:06:11.508770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.856 [2024-11-18 12:06:11.508804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.856 qpair failed and we were unable to recover it. 00:37:45.856 [2024-11-18 12:06:11.508985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.856 [2024-11-18 12:06:11.509057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.856 qpair failed and we were unable to recover it. 00:37:45.856 [2024-11-18 12:06:11.509160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.856 [2024-11-18 12:06:11.509195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.856 qpair failed and we were unable to recover it. 00:37:45.856 [2024-11-18 12:06:11.509345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.856 [2024-11-18 12:06:11.509393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.856 qpair failed and we were unable to recover it. 
00:37:45.856 [2024-11-18 12:06:11.509520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.857 [2024-11-18 12:06:11.509569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.857 qpair failed and we were unable to recover it. 00:37:45.857 [2024-11-18 12:06:11.509715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.857 [2024-11-18 12:06:11.509751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.857 qpair failed and we were unable to recover it. 00:37:45.857 [2024-11-18 12:06:11.509883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.857 [2024-11-18 12:06:11.509919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.857 qpair failed and we were unable to recover it. 00:37:45.857 [2024-11-18 12:06:11.510056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.857 [2024-11-18 12:06:11.510091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.857 qpair failed and we were unable to recover it. 00:37:45.857 [2024-11-18 12:06:11.510199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.857 [2024-11-18 12:06:11.510234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.857 qpair failed and we were unable to recover it. 
00:37:45.857 [2024-11-18 12:06:11.510350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.857 [2024-11-18 12:06:11.510386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.857 qpair failed and we were unable to recover it. 00:37:45.857 [2024-11-18 12:06:11.510509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.857 [2024-11-18 12:06:11.510578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.857 qpair failed and we were unable to recover it. 00:37:45.857 [2024-11-18 12:06:11.510716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.857 [2024-11-18 12:06:11.510769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.857 qpair failed and we were unable to recover it. 00:37:45.857 [2024-11-18 12:06:11.511031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.857 [2024-11-18 12:06:11.511090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.857 qpair failed and we were unable to recover it. 00:37:45.857 [2024-11-18 12:06:11.511322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.857 [2024-11-18 12:06:11.511361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.857 qpair failed and we were unable to recover it. 
00:37:45.857 [2024-11-18 12:06:11.511478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.857 [2024-11-18 12:06:11.511551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.857 qpair failed and we were unable to recover it. 00:37:45.857 [2024-11-18 12:06:11.511688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.857 [2024-11-18 12:06:11.511723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.857 qpair failed and we were unable to recover it. 00:37:45.857 [2024-11-18 12:06:11.512000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.857 [2024-11-18 12:06:11.512059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.857 qpair failed and we were unable to recover it. 00:37:45.857 [2024-11-18 12:06:11.512262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.857 [2024-11-18 12:06:11.512317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.857 qpair failed and we were unable to recover it. 00:37:45.857 [2024-11-18 12:06:11.512487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.857 [2024-11-18 12:06:11.512555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.857 qpair failed and we were unable to recover it. 
00:37:45.857 [2024-11-18 12:06:11.512678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.857 [2024-11-18 12:06:11.512713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.857 qpair failed and we were unable to recover it. 00:37:45.857 [2024-11-18 12:06:11.512917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.857 [2024-11-18 12:06:11.512994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.857 qpair failed and we were unable to recover it. 00:37:45.857 [2024-11-18 12:06:11.513253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.857 [2024-11-18 12:06:11.513313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.857 qpair failed and we were unable to recover it. 00:37:45.857 [2024-11-18 12:06:11.513462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.857 [2024-11-18 12:06:11.513508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.857 qpair failed and we were unable to recover it. 00:37:45.857 [2024-11-18 12:06:11.513658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.857 [2024-11-18 12:06:11.513693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.857 qpair failed and we were unable to recover it. 
00:37:45.857 [2024-11-18 12:06:11.513798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.857 [2024-11-18 12:06:11.513851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.857 qpair failed and we were unable to recover it. 00:37:45.857 [2024-11-18 12:06:11.514002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.857 [2024-11-18 12:06:11.514040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.857 qpair failed and we were unable to recover it. 00:37:45.857 [2024-11-18 12:06:11.514248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.857 [2024-11-18 12:06:11.514286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.857 qpair failed and we were unable to recover it. 00:37:45.857 [2024-11-18 12:06:11.514471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.857 [2024-11-18 12:06:11.514533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.857 qpair failed and we were unable to recover it. 00:37:45.857 [2024-11-18 12:06:11.514660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.857 [2024-11-18 12:06:11.514709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.857 qpair failed and we were unable to recover it. 
00:37:45.857 [2024-11-18 12:06:11.514920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.857 [2024-11-18 12:06:11.514979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.857 qpair failed and we were unable to recover it. 00:37:45.857 [2024-11-18 12:06:11.515129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.857 [2024-11-18 12:06:11.515168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.857 qpair failed and we were unable to recover it. 00:37:45.857 [2024-11-18 12:06:11.515358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.857 [2024-11-18 12:06:11.515398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.857 qpair failed and we were unable to recover it. 00:37:45.857 [2024-11-18 12:06:11.515581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.857 [2024-11-18 12:06:11.515630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.857 qpair failed and we were unable to recover it. 00:37:45.857 [2024-11-18 12:06:11.515789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.857 [2024-11-18 12:06:11.515842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.857 qpair failed and we were unable to recover it. 
00:37:45.857 [2024-11-18 12:06:11.515992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.857 [2024-11-18 12:06:11.516054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.857 qpair failed and we were unable to recover it. 00:37:45.857 [2024-11-18 12:06:11.516304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.857 [2024-11-18 12:06:11.516362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.857 qpair failed and we were unable to recover it. 00:37:45.857 [2024-11-18 12:06:11.516509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.857 [2024-11-18 12:06:11.516563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.857 qpair failed and we were unable to recover it. 00:37:45.857 [2024-11-18 12:06:11.516675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.857 [2024-11-18 12:06:11.516709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.857 qpair failed and we were unable to recover it. 00:37:45.857 [2024-11-18 12:06:11.516873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.857 [2024-11-18 12:06:11.516912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.857 qpair failed and we were unable to recover it. 
00:37:45.857 [2024-11-18 12:06:11.517060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.857 [2024-11-18 12:06:11.517099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.857 qpair failed and we were unable to recover it. 00:37:45.857 [2024-11-18 12:06:11.517269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.857 [2024-11-18 12:06:11.517332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.857 qpair failed and we were unable to recover it. 00:37:45.857 [2024-11-18 12:06:11.517511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.857 [2024-11-18 12:06:11.517573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.857 qpair failed and we were unable to recover it. 00:37:45.858 [2024-11-18 12:06:11.517759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.858 [2024-11-18 12:06:11.517808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.858 qpair failed and we were unable to recover it. 00:37:45.858 [2024-11-18 12:06:11.517973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.858 [2024-11-18 12:06:11.518029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.858 qpair failed and we were unable to recover it. 
00:37:45.858 [2024-11-18 12:06:11.518220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.858 [2024-11-18 12:06:11.518297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.858 qpair failed and we were unable to recover it. 00:37:45.858 [2024-11-18 12:06:11.518412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.858 [2024-11-18 12:06:11.518448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.858 qpair failed and we were unable to recover it. 00:37:45.858 [2024-11-18 12:06:11.518600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.858 [2024-11-18 12:06:11.518635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.858 qpair failed and we were unable to recover it. 00:37:45.858 [2024-11-18 12:06:11.518770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.858 [2024-11-18 12:06:11.518819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.858 qpair failed and we were unable to recover it. 00:37:45.858 [2024-11-18 12:06:11.518988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.858 [2024-11-18 12:06:11.519025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.858 qpair failed and we were unable to recover it. 
00:37:45.858 [2024-11-18 12:06:11.519162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.858 [2024-11-18 12:06:11.519196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.858 qpair failed and we were unable to recover it. 00:37:45.858 [2024-11-18 12:06:11.519340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.858 [2024-11-18 12:06:11.519375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.858 qpair failed and we were unable to recover it. 00:37:45.858 [2024-11-18 12:06:11.519481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.858 [2024-11-18 12:06:11.519532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.858 qpair failed and we were unable to recover it. 00:37:45.858 [2024-11-18 12:06:11.519646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.858 [2024-11-18 12:06:11.519679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.858 qpair failed and we were unable to recover it. 00:37:45.858 [2024-11-18 12:06:11.519837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.858 [2024-11-18 12:06:11.519873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.858 qpair failed and we were unable to recover it. 
00:37:45.858 [2024-11-18 12:06:11.520058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.858 [2024-11-18 12:06:11.520096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.858 qpair failed and we were unable to recover it. 00:37:45.858 [2024-11-18 12:06:11.520217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.858 [2024-11-18 12:06:11.520253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.858 qpair failed and we were unable to recover it. 00:37:45.858 [2024-11-18 12:06:11.520384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.858 [2024-11-18 12:06:11.520419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.858 qpair failed and we were unable to recover it. 00:37:45.858 [2024-11-18 12:06:11.520536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.858 [2024-11-18 12:06:11.520571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.858 qpair failed and we were unable to recover it. 00:37:45.858 [2024-11-18 12:06:11.520706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.858 [2024-11-18 12:06:11.520746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.858 qpair failed and we were unable to recover it. 
00:37:45.858 [2024-11-18 12:06:11.520966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.858 [2024-11-18 12:06:11.521003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.858 qpair failed and we were unable to recover it. 00:37:45.858 [2024-11-18 12:06:11.521231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.858 [2024-11-18 12:06:11.521295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.858 qpair failed and we were unable to recover it. 00:37:45.858 [2024-11-18 12:06:11.521448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.858 [2024-11-18 12:06:11.521485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.858 qpair failed and we were unable to recover it. 00:37:45.858 [2024-11-18 12:06:11.521629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.858 [2024-11-18 12:06:11.521662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.858 qpair failed and we were unable to recover it. 00:37:45.858 [2024-11-18 12:06:11.521841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.858 [2024-11-18 12:06:11.521878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.858 qpair failed and we were unable to recover it. 
00:37:45.858 [2024-11-18 12:06:11.522035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.858 [2024-11-18 12:06:11.522094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.858 qpair failed and we were unable to recover it. 00:37:45.858 [2024-11-18 12:06:11.522293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.858 [2024-11-18 12:06:11.522330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.858 qpair failed and we were unable to recover it. 00:37:45.858 [2024-11-18 12:06:11.522444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.858 [2024-11-18 12:06:11.522480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.858 qpair failed and we were unable to recover it. 00:37:45.858 [2024-11-18 12:06:11.522667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.858 [2024-11-18 12:06:11.522716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.858 qpair failed and we were unable to recover it. 00:37:45.858 [2024-11-18 12:06:11.522879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.858 [2024-11-18 12:06:11.522927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.858 qpair failed and we were unable to recover it. 
00:37:45.858 [2024-11-18 12:06:11.523087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.858 [2024-11-18 12:06:11.523168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.858 qpair failed and we were unable to recover it. 00:37:45.858 [2024-11-18 12:06:11.523298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.858 [2024-11-18 12:06:11.523337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.858 qpair failed and we were unable to recover it. 00:37:45.858 [2024-11-18 12:06:11.523516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.858 [2024-11-18 12:06:11.523551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.858 qpair failed and we were unable to recover it. 00:37:45.858 [2024-11-18 12:06:11.523658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.858 [2024-11-18 12:06:11.523692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.858 qpair failed and we were unable to recover it. 00:37:45.858 [2024-11-18 12:06:11.523830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.858 [2024-11-18 12:06:11.523864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.858 qpair failed and we were unable to recover it. 
00:37:45.858 [2024-11-18 12:06:11.524028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.858 [2024-11-18 12:06:11.524062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.858 qpair failed and we were unable to recover it. 00:37:45.858 [2024-11-18 12:06:11.524220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.858 [2024-11-18 12:06:11.524253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.858 qpair failed and we were unable to recover it. 00:37:45.858 [2024-11-18 12:06:11.524354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.858 [2024-11-18 12:06:11.524387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.858 qpair failed and we were unable to recover it. 00:37:45.858 [2024-11-18 12:06:11.524533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.858 [2024-11-18 12:06:11.524572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.858 qpair failed and we were unable to recover it. 00:37:45.858 [2024-11-18 12:06:11.524680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.858 [2024-11-18 12:06:11.524721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.858 qpair failed and we were unable to recover it. 
00:37:45.858 [2024-11-18 12:06:11.524881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-11-18 12:06:11.524939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.859 qpair failed and we were unable to recover it. 00:37:45.859 [2024-11-18 12:06:11.525202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-11-18 12:06:11.525275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.859 qpair failed and we were unable to recover it. 00:37:45.859 [2024-11-18 12:06:11.525428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-11-18 12:06:11.525466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.859 qpair failed and we were unable to recover it. 00:37:45.859 [2024-11-18 12:06:11.525624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-11-18 12:06:11.525658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.859 qpair failed and we were unable to recover it. 00:37:45.859 [2024-11-18 12:06:11.525781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-11-18 12:06:11.525817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.859 qpair failed and we were unable to recover it. 
00:37:45.859 [2024-11-18 12:06:11.525963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-11-18 12:06:11.526001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.859 qpair failed and we were unable to recover it. 00:37:45.859 [2024-11-18 12:06:11.526147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-11-18 12:06:11.526187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.859 qpair failed and we were unable to recover it. 00:37:45.859 [2024-11-18 12:06:11.526317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-11-18 12:06:11.526353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.859 qpair failed and we were unable to recover it. 00:37:45.859 [2024-11-18 12:06:11.526509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-11-18 12:06:11.526563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.859 qpair failed and we were unable to recover it. 00:37:45.859 [2024-11-18 12:06:11.526732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-11-18 12:06:11.526772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.859 qpair failed and we were unable to recover it. 
00:37:45.859 [2024-11-18 12:06:11.526902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-11-18 12:06:11.526959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.859 qpair failed and we were unable to recover it. 00:37:45.859 [2024-11-18 12:06:11.527125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-11-18 12:06:11.527181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.859 qpair failed and we were unable to recover it. 00:37:45.859 [2024-11-18 12:06:11.527324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-11-18 12:06:11.527377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.859 qpair failed and we were unable to recover it. 00:37:45.859 [2024-11-18 12:06:11.527568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-11-18 12:06:11.527618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.859 qpair failed and we were unable to recover it. 00:37:45.859 [2024-11-18 12:06:11.527772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-11-18 12:06:11.527825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.859 qpair failed and we were unable to recover it. 
00:37:45.859 [2024-11-18 12:06:11.527945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-11-18 12:06:11.527998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.859 qpair failed and we were unable to recover it. 00:37:45.859 [2024-11-18 12:06:11.528154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-11-18 12:06:11.528205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.859 qpair failed and we were unable to recover it. 00:37:45.859 [2024-11-18 12:06:11.528342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-11-18 12:06:11.528376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.859 qpair failed and we were unable to recover it. 00:37:45.859 [2024-11-18 12:06:11.528482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-11-18 12:06:11.528525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.859 qpair failed and we were unable to recover it. 00:37:45.859 [2024-11-18 12:06:11.528689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-11-18 12:06:11.528725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.859 qpair failed and we were unable to recover it. 
00:37:45.859 [2024-11-18 12:06:11.528874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-11-18 12:06:11.528908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.859 qpair failed and we were unable to recover it. 00:37:45.859 [2024-11-18 12:06:11.529066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-11-18 12:06:11.529103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.859 qpair failed and we were unable to recover it. 00:37:45.859 [2024-11-18 12:06:11.529288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-11-18 12:06:11.529324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.859 qpair failed and we were unable to recover it. 00:37:45.859 [2024-11-18 12:06:11.529474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-11-18 12:06:11.529546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.859 qpair failed and we were unable to recover it. 00:37:45.859 [2024-11-18 12:06:11.529709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-11-18 12:06:11.529757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.859 qpair failed and we were unable to recover it. 
00:37:45.859 [2024-11-18 12:06:11.529983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-11-18 12:06:11.530023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.859 qpair failed and we were unable to recover it. 00:37:45.859 [2024-11-18 12:06:11.530173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-11-18 12:06:11.530210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.859 qpair failed and we were unable to recover it. 00:37:45.859 [2024-11-18 12:06:11.530363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-11-18 12:06:11.530401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.859 qpair failed and we were unable to recover it. 00:37:45.859 [2024-11-18 12:06:11.530583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-11-18 12:06:11.530632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.859 qpair failed and we were unable to recover it. 00:37:45.859 [2024-11-18 12:06:11.530884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-11-18 12:06:11.530956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.859 qpair failed and we were unable to recover it. 
00:37:45.859 [2024-11-18 12:06:11.531191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-11-18 12:06:11.531251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.859 qpair failed and we were unable to recover it. 00:37:45.859 [2024-11-18 12:06:11.531381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-11-18 12:06:11.531419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.859 qpair failed and we were unable to recover it. 00:37:45.859 [2024-11-18 12:06:11.531605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-11-18 12:06:11.531641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.859 qpair failed and we were unable to recover it. 00:37:45.859 [2024-11-18 12:06:11.531814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-11-18 12:06:11.531848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.860 qpair failed and we were unable to recover it. 00:37:45.860 [2024-11-18 12:06:11.531948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-11-18 12:06:11.531982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.860 qpair failed and we were unable to recover it. 
00:37:45.860 [2024-11-18 12:06:11.532170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-11-18 12:06:11.532209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.860 qpair failed and we were unable to recover it. 00:37:45.860 [2024-11-18 12:06:11.532382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-11-18 12:06:11.532417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.860 qpair failed and we were unable to recover it. 00:37:45.860 [2024-11-18 12:06:11.532577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-11-18 12:06:11.532626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.860 qpair failed and we were unable to recover it. 00:37:45.860 [2024-11-18 12:06:11.532788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-11-18 12:06:11.532823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.860 qpair failed and we were unable to recover it. 00:37:45.860 [2024-11-18 12:06:11.533003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-11-18 12:06:11.533041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.860 qpair failed and we were unable to recover it. 
00:37:45.860 [2024-11-18 12:06:11.533185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-11-18 12:06:11.533223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.860 qpair failed and we were unable to recover it. 00:37:45.860 [2024-11-18 12:06:11.533392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-11-18 12:06:11.533428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.860 qpair failed and we were unable to recover it. 00:37:45.860 [2024-11-18 12:06:11.533616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-11-18 12:06:11.533664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.860 qpair failed and we were unable to recover it. 00:37:45.860 [2024-11-18 12:06:11.533831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-11-18 12:06:11.533870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.860 qpair failed and we were unable to recover it. 00:37:45.860 [2024-11-18 12:06:11.534020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-11-18 12:06:11.534054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.860 qpair failed and we were unable to recover it. 
00:37:45.860 [2024-11-18 12:06:11.534204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-11-18 12:06:11.534239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.860 qpair failed and we were unable to recover it. 00:37:45.860 [2024-11-18 12:06:11.534352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-11-18 12:06:11.534386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.860 qpair failed and we were unable to recover it. 00:37:45.860 [2024-11-18 12:06:11.534520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-11-18 12:06:11.534568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.860 qpair failed and we were unable to recover it. 00:37:45.860 [2024-11-18 12:06:11.534721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-11-18 12:06:11.534757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.860 qpair failed and we were unable to recover it. 00:37:45.860 [2024-11-18 12:06:11.534911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-11-18 12:06:11.534949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.860 qpair failed and we were unable to recover it. 
00:37:45.860 [2024-11-18 12:06:11.535092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-11-18 12:06:11.535129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.860 qpair failed and we were unable to recover it. 00:37:45.860 [2024-11-18 12:06:11.535309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-11-18 12:06:11.535347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.860 qpair failed and we were unable to recover it. 00:37:45.860 [2024-11-18 12:06:11.535462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-11-18 12:06:11.535517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.860 qpair failed and we were unable to recover it. 00:37:45.860 [2024-11-18 12:06:11.535687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-11-18 12:06:11.535723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.860 qpair failed and we were unable to recover it. 00:37:45.860 [2024-11-18 12:06:11.535863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-11-18 12:06:11.535897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.860 qpair failed and we were unable to recover it. 
00:37:45.860 [2024-11-18 12:06:11.536035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-11-18 12:06:11.536069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.860 qpair failed and we were unable to recover it. 00:37:45.860 [2024-11-18 12:06:11.536180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-11-18 12:06:11.536215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.860 qpair failed and we were unable to recover it. 00:37:45.860 [2024-11-18 12:06:11.536365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-11-18 12:06:11.536413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.860 qpair failed and we were unable to recover it. 00:37:45.860 [2024-11-18 12:06:11.536543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-11-18 12:06:11.536578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.860 qpair failed and we were unable to recover it. 00:37:45.860 [2024-11-18 12:06:11.536734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-11-18 12:06:11.536781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.860 qpair failed and we were unable to recover it. 
00:37:45.860 [2024-11-18 12:06:11.536950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-11-18 12:06:11.536989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.860 qpair failed and we were unable to recover it. 00:37:45.860 [2024-11-18 12:06:11.537187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-11-18 12:06:11.537226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.860 qpair failed and we were unable to recover it. 00:37:45.860 [2024-11-18 12:06:11.537368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-11-18 12:06:11.537403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.860 qpair failed and we were unable to recover it. 00:37:45.860 [2024-11-18 12:06:11.537555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-11-18 12:06:11.537590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.860 qpair failed and we were unable to recover it. 00:37:45.860 [2024-11-18 12:06:11.537753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-11-18 12:06:11.537788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.860 qpair failed and we were unable to recover it. 
00:37:45.860 [2024-11-18 12:06:11.537925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-11-18 12:06:11.537978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.860 qpair failed and we were unable to recover it. 00:37:45.860 [2024-11-18 12:06:11.538125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-11-18 12:06:11.538164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.860 qpair failed and we were unable to recover it. 00:37:45.860 [2024-11-18 12:06:11.538320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-11-18 12:06:11.538359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.860 qpair failed and we were unable to recover it. 00:37:45.860 [2024-11-18 12:06:11.538497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-11-18 12:06:11.538552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.860 qpair failed and we were unable to recover it. 00:37:45.860 [2024-11-18 12:06:11.538689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-11-18 12:06:11.538723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.860 qpair failed and we were unable to recover it. 
00:37:45.860 [2024-11-18 12:06:11.538929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-11-18 12:06:11.538995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.861 qpair failed and we were unable to recover it. 00:37:45.861 [2024-11-18 12:06:11.539170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.861 [2024-11-18 12:06:11.539224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.861 qpair failed and we were unable to recover it. 00:37:45.861 [2024-11-18 12:06:11.539365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.861 [2024-11-18 12:06:11.539412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.861 qpair failed and we were unable to recover it. 00:37:45.861 [2024-11-18 12:06:11.539555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.861 [2024-11-18 12:06:11.539591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.861 qpair failed and we were unable to recover it. 00:37:45.861 [2024-11-18 12:06:11.539750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.861 [2024-11-18 12:06:11.539804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.861 qpair failed and we were unable to recover it. 
00:37:45.861 [2024-11-18 12:06:11.539963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.861 [2024-11-18 12:06:11.540016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.861 qpair failed and we were unable to recover it. 00:37:45.861 [2024-11-18 12:06:11.540263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.861 [2024-11-18 12:06:11.540321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.861 qpair failed and we were unable to recover it. 00:37:45.861 [2024-11-18 12:06:11.540447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.861 [2024-11-18 12:06:11.540485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.861 qpair failed and we were unable to recover it. 00:37:45.861 [2024-11-18 12:06:11.540632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.861 [2024-11-18 12:06:11.540666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.861 qpair failed and we were unable to recover it. 00:37:45.861 [2024-11-18 12:06:11.540846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.861 [2024-11-18 12:06:11.540884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.861 qpair failed and we were unable to recover it. 
00:37:45.861 [2024-11-18 12:06:11.541047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.861 [2024-11-18 12:06:11.541112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.861 qpair failed and we were unable to recover it. 00:37:45.861 [2024-11-18 12:06:11.541293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.861 [2024-11-18 12:06:11.541331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.861 qpair failed and we were unable to recover it. 00:37:45.861 [2024-11-18 12:06:11.541455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.861 [2024-11-18 12:06:11.541496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.861 qpair failed and we were unable to recover it. 00:37:45.861 [2024-11-18 12:06:11.541632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.861 [2024-11-18 12:06:11.541666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.861 qpair failed and we were unable to recover it. 00:37:45.861 [2024-11-18 12:06:11.541823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.861 [2024-11-18 12:06:11.541891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.861 qpair failed and we were unable to recover it. 
00:37:45.861 [2024-11-18 12:06:11.542145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.861 [2024-11-18 12:06:11.542186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.861 qpair failed and we were unable to recover it.
00:37:45.861 [2024-11-18 12:06:11.542335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.861 [2024-11-18 12:06:11.542372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.861 qpair failed and we were unable to recover it.
00:37:45.861 [2024-11-18 12:06:11.542560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.861 [2024-11-18 12:06:11.542595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.861 qpair failed and we were unable to recover it.
00:37:45.861 [2024-11-18 12:06:11.542725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.861 [2024-11-18 12:06:11.542768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.861 qpair failed and we were unable to recover it.
00:37:45.861 [2024-11-18 12:06:11.542884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.861 [2024-11-18 12:06:11.542934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.861 qpair failed and we were unable to recover it.
00:37:45.861 [2024-11-18 12:06:11.543067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.861 [2024-11-18 12:06:11.543106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.861 qpair failed and we were unable to recover it.
00:37:45.861 [2024-11-18 12:06:11.543255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.861 [2024-11-18 12:06:11.543291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.861 qpair failed and we were unable to recover it.
00:37:45.861 [2024-11-18 12:06:11.543420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.861 [2024-11-18 12:06:11.543466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.861 qpair failed and we were unable to recover it.
00:37:45.861 [2024-11-18 12:06:11.543592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.861 [2024-11-18 12:06:11.543626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.861 qpair failed and we were unable to recover it.
00:37:45.861 [2024-11-18 12:06:11.543757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.861 [2024-11-18 12:06:11.543796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.861 qpair failed and we were unable to recover it.
00:37:45.861 [2024-11-18 12:06:11.543914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.861 [2024-11-18 12:06:11.543952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.861 qpair failed and we were unable to recover it.
00:37:45.861 [2024-11-18 12:06:11.544070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.861 [2024-11-18 12:06:11.544109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.861 qpair failed and we were unable to recover it.
00:37:45.861 [2024-11-18 12:06:11.544300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.861 [2024-11-18 12:06:11.544367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.861 qpair failed and we were unable to recover it.
00:37:45.861 [2024-11-18 12:06:11.544558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.861 [2024-11-18 12:06:11.544595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.861 qpair failed and we were unable to recover it.
00:37:45.861 [2024-11-18 12:06:11.544702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.861 [2024-11-18 12:06:11.544738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.861 qpair failed and we were unable to recover it.
00:37:45.861 [2024-11-18 12:06:11.544897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.861 [2024-11-18 12:06:11.544949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.861 qpair failed and we were unable to recover it.
00:37:45.861 [2024-11-18 12:06:11.545101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.861 [2024-11-18 12:06:11.545152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.861 qpair failed and we were unable to recover it.
00:37:45.861 [2024-11-18 12:06:11.545326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.861 [2024-11-18 12:06:11.545360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.861 qpair failed and we were unable to recover it.
00:37:45.861 [2024-11-18 12:06:11.545477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.861 [2024-11-18 12:06:11.545524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.861 qpair failed and we were unable to recover it.
00:37:45.861 [2024-11-18 12:06:11.545695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.861 [2024-11-18 12:06:11.545729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.861 qpair failed and we were unable to recover it.
00:37:45.861 [2024-11-18 12:06:11.545844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.861 [2024-11-18 12:06:11.545877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.861 qpair failed and we were unable to recover it.
00:37:45.861 [2024-11-18 12:06:11.546057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.861 [2024-11-18 12:06:11.546116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.861 qpair failed and we were unable to recover it.
00:37:45.861 [2024-11-18 12:06:11.546229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.861 [2024-11-18 12:06:11.546266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.861 qpair failed and we were unable to recover it.
00:37:45.861 [2024-11-18 12:06:11.546396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.862 [2024-11-18 12:06:11.546432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.862 qpair failed and we were unable to recover it.
00:37:45.862 [2024-11-18 12:06:11.546597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.862 [2024-11-18 12:06:11.546637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.862 qpair failed and we were unable to recover it.
00:37:45.862 [2024-11-18 12:06:11.546809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.862 [2024-11-18 12:06:11.546862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.862 qpair failed and we were unable to recover it.
00:37:45.862 [2024-11-18 12:06:11.547045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.862 [2024-11-18 12:06:11.547105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.862 qpair failed and we were unable to recover it.
00:37:45.862 [2024-11-18 12:06:11.547309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.862 [2024-11-18 12:06:11.547369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.862 qpair failed and we were unable to recover it.
00:37:45.862 [2024-11-18 12:06:11.547534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.862 [2024-11-18 12:06:11.547568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.862 qpair failed and we were unable to recover it.
00:37:45.862 [2024-11-18 12:06:11.547713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.862 [2024-11-18 12:06:11.547747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.862 qpair failed and we were unable to recover it.
00:37:45.862 [2024-11-18 12:06:11.547978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.862 [2024-11-18 12:06:11.548017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.862 qpair failed and we were unable to recover it.
00:37:45.862 [2024-11-18 12:06:11.548274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.862 [2024-11-18 12:06:11.548333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.862 qpair failed and we were unable to recover it.
00:37:45.862 [2024-11-18 12:06:11.548532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.862 [2024-11-18 12:06:11.548566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.862 qpair failed and we were unable to recover it.
00:37:45.862 [2024-11-18 12:06:11.548674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.862 [2024-11-18 12:06:11.548707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.862 qpair failed and we were unable to recover it.
00:37:45.862 [2024-11-18 12:06:11.548865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.862 [2024-11-18 12:06:11.548914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.862 qpair failed and we were unable to recover it.
00:37:45.862 [2024-11-18 12:06:11.549046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.862 [2024-11-18 12:06:11.549084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.862 qpair failed and we were unable to recover it.
00:37:45.862 [2024-11-18 12:06:11.549269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.862 [2024-11-18 12:06:11.549340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.862 qpair failed and we were unable to recover it.
00:37:45.862 [2024-11-18 12:06:11.549463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.862 [2024-11-18 12:06:11.549508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.862 qpair failed and we were unable to recover it.
00:37:45.862 [2024-11-18 12:06:11.549672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.862 [2024-11-18 12:06:11.549720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.862 qpair failed and we were unable to recover it.
00:37:45.862 [2024-11-18 12:06:11.549870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.862 [2024-11-18 12:06:11.549906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.862 qpair failed and we were unable to recover it.
00:37:45.862 [2024-11-18 12:06:11.550020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.862 [2024-11-18 12:06:11.550055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.862 qpair failed and we were unable to recover it.
00:37:45.862 [2024-11-18 12:06:11.550264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.862 [2024-11-18 12:06:11.550302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.862 qpair failed and we were unable to recover it.
00:37:45.862 [2024-11-18 12:06:11.550454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.862 [2024-11-18 12:06:11.550501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.862 qpair failed and we were unable to recover it.
00:37:45.862 [2024-11-18 12:06:11.550656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.862 [2024-11-18 12:06:11.550696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.862 qpair failed and we were unable to recover it.
00:37:45.862 [2024-11-18 12:06:11.550886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.862 [2024-11-18 12:06:11.550941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.862 qpair failed and we were unable to recover it.
00:37:45.862 [2024-11-18 12:06:11.551048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.862 [2024-11-18 12:06:11.551082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.862 qpair failed and we were unable to recover it.
00:37:45.862 [2024-11-18 12:06:11.551281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.862 [2024-11-18 12:06:11.551334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.862 qpair failed and we were unable to recover it.
00:37:45.862 [2024-11-18 12:06:11.551506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.862 [2024-11-18 12:06:11.551549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.862 qpair failed and we were unable to recover it.
00:37:45.862 [2024-11-18 12:06:11.551658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.862 [2024-11-18 12:06:11.551693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.862 qpair failed and we were unable to recover it.
00:37:45.862 [2024-11-18 12:06:11.551881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.862 [2024-11-18 12:06:11.551919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.862 qpair failed and we were unable to recover it.
00:37:45.862 [2024-11-18 12:06:11.552039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.862 [2024-11-18 12:06:11.552077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.862 qpair failed and we were unable to recover it.
00:37:45.862 [2024-11-18 12:06:11.552325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.862 [2024-11-18 12:06:11.552379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.862 qpair failed and we were unable to recover it.
00:37:45.862 [2024-11-18 12:06:11.552538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.862 [2024-11-18 12:06:11.552575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.862 qpair failed and we were unable to recover it.
00:37:45.862 [2024-11-18 12:06:11.552728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.862 [2024-11-18 12:06:11.552781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.862 qpair failed and we were unable to recover it.
00:37:45.862 [2024-11-18 12:06:11.552952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.862 [2024-11-18 12:06:11.552989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.862 qpair failed and we were unable to recover it.
00:37:45.862 [2024-11-18 12:06:11.553248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.862 [2024-11-18 12:06:11.553305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.862 qpair failed and we were unable to recover it.
00:37:45.862 [2024-11-18 12:06:11.553451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.862 [2024-11-18 12:06:11.553498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.862 qpair failed and we were unable to recover it.
00:37:45.862 [2024-11-18 12:06:11.553663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.862 [2024-11-18 12:06:11.553697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.862 qpair failed and we were unable to recover it.
00:37:45.862 [2024-11-18 12:06:11.553948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.862 [2024-11-18 12:06:11.553986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.862 qpair failed and we were unable to recover it.
00:37:45.862 [2024-11-18 12:06:11.554169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.862 [2024-11-18 12:06:11.554208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.862 qpair failed and we were unable to recover it.
00:37:45.862 [2024-11-18 12:06:11.554383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.862 [2024-11-18 12:06:11.554418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.862 qpair failed and we were unable to recover it.
00:37:45.863 [2024-11-18 12:06:11.554598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.863 [2024-11-18 12:06:11.554635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.863 qpair failed and we were unable to recover it.
00:37:45.863 [2024-11-18 12:06:11.554783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.863 [2024-11-18 12:06:11.554831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.863 qpair failed and we were unable to recover it.
00:37:45.863 [2024-11-18 12:06:11.555000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.863 [2024-11-18 12:06:11.555052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.863 qpair failed and we were unable to recover it.
00:37:45.863 [2024-11-18 12:06:11.555215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.863 [2024-11-18 12:06:11.555253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.863 qpair failed and we were unable to recover it.
00:37:45.863 [2024-11-18 12:06:11.555410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.863 [2024-11-18 12:06:11.555445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.863 qpair failed and we were unable to recover it.
00:37:45.863 [2024-11-18 12:06:11.555585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.863 [2024-11-18 12:06:11.555619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.863 qpair failed and we were unable to recover it.
00:37:45.863 [2024-11-18 12:06:11.555725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.863 [2024-11-18 12:06:11.555759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.863 qpair failed and we were unable to recover it.
00:37:45.863 [2024-11-18 12:06:11.555866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.863 [2024-11-18 12:06:11.555900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.863 qpair failed and we were unable to recover it.
00:37:45.863 [2024-11-18 12:06:11.556035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.863 [2024-11-18 12:06:11.556070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.863 qpair failed and we were unable to recover it.
00:37:45.863 [2024-11-18 12:06:11.556206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.863 [2024-11-18 12:06:11.556244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.863 qpair failed and we were unable to recover it.
00:37:45.863 [2024-11-18 12:06:11.556383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.863 [2024-11-18 12:06:11.556434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.863 qpair failed and we were unable to recover it.
00:37:45.863 [2024-11-18 12:06:11.556556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.863 [2024-11-18 12:06:11.556604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.863 qpair failed and we were unable to recover it.
00:37:45.863 [2024-11-18 12:06:11.556753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.863 [2024-11-18 12:06:11.556789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.863 qpair failed and we were unable to recover it.
00:37:45.863 [2024-11-18 12:06:11.556924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.863 [2024-11-18 12:06:11.556958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.863 qpair failed and we were unable to recover it.
00:37:45.863 [2024-11-18 12:06:11.557093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.863 [2024-11-18 12:06:11.557127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.863 qpair failed and we were unable to recover it.
00:37:45.863 [2024-11-18 12:06:11.557238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.863 [2024-11-18 12:06:11.557293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.863 qpair failed and we were unable to recover it.
00:37:45.863 [2024-11-18 12:06:11.557432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.863 [2024-11-18 12:06:11.557466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.863 qpair failed and we were unable to recover it.
00:37:45.863 [2024-11-18 12:06:11.557588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.863 [2024-11-18 12:06:11.557624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.863 qpair failed and we were unable to recover it.
00:37:45.863 [2024-11-18 12:06:11.557777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.863 [2024-11-18 12:06:11.557811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.863 qpair failed and we were unable to recover it.
00:37:45.863 [2024-11-18 12:06:11.557956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.863 [2024-11-18 12:06:11.558015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.863 qpair failed and we were unable to recover it.
00:37:45.863 [2024-11-18 12:06:11.558150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.863 [2024-11-18 12:06:11.558194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.863 qpair failed and we were unable to recover it.
00:37:45.863 [2024-11-18 12:06:11.558345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.863 [2024-11-18 12:06:11.558380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.863 qpair failed and we were unable to recover it.
00:37:45.863 [2024-11-18 12:06:11.558549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.863 [2024-11-18 12:06:11.558589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.863 qpair failed and we were unable to recover it.
00:37:45.863 [2024-11-18 12:06:11.558703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.863 [2024-11-18 12:06:11.558737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.863 qpair failed and we were unable to recover it.
00:37:45.863 [2024-11-18 12:06:11.558851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.863 [2024-11-18 12:06:11.558911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.863 qpair failed and we were unable to recover it.
00:37:45.863 [2024-11-18 12:06:11.559139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.863 [2024-11-18 12:06:11.559194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.863 qpair failed and we were unable to recover it.
00:37:45.863 [2024-11-18 12:06:11.559378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.863 [2024-11-18 12:06:11.559419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.863 qpair failed and we were unable to recover it.
00:37:45.863 [2024-11-18 12:06:11.559589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.863 [2024-11-18 12:06:11.559625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.863 qpair failed and we were unable to recover it.
00:37:45.863 [2024-11-18 12:06:11.559749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.863 [2024-11-18 12:06:11.559786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.863 qpair failed and we were unable to recover it.
00:37:45.863 [2024-11-18 12:06:11.559945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.863 [2024-11-18 12:06:11.560003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.863 qpair failed and we were unable to recover it.
00:37:45.863 [2024-11-18 12:06:11.560169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.863 [2024-11-18 12:06:11.560226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.863 qpair failed and we were unable to recover it.
00:37:45.863 [2024-11-18 12:06:11.560383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.863 [2024-11-18 12:06:11.560416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.863 qpair failed and we were unable to recover it.
00:37:45.863 [2024-11-18 12:06:11.560576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.863 [2024-11-18 12:06:11.560625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.863 qpair failed and we were unable to recover it.
00:37:45.863 [2024-11-18 12:06:11.560733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.863 [2024-11-18 12:06:11.560768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.863 qpair failed and we were unable to recover it.
00:37:45.863 [2024-11-18 12:06:11.560955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.863 [2024-11-18 12:06:11.560993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.863 qpair failed and we were unable to recover it.
00:37:45.863 [2024-11-18 12:06:11.561211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.863 [2024-11-18 12:06:11.561250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.863 qpair failed and we were unable to recover it.
00:37:45.863 [2024-11-18 12:06:11.561434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.863 [2024-11-18 12:06:11.561475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.863 qpair failed and we were unable to recover it.
00:37:45.863 [2024-11-18 12:06:11.561618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.864 [2024-11-18 12:06:11.561653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.864 qpair failed and we were unable to recover it.
00:37:45.864 [2024-11-18 12:06:11.561761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.864 [2024-11-18 12:06:11.561797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.864 qpair failed and we were unable to recover it.
00:37:45.864 [2024-11-18 12:06:11.561913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.864 [2024-11-18 12:06:11.561947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.864 qpair failed and we were unable to recover it.
00:37:45.864 [2024-11-18 12:06:11.562090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.864 [2024-11-18 12:06:11.562144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.864 qpair failed and we were unable to recover it.
00:37:45.864 [2024-11-18 12:06:11.562288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.864 [2024-11-18 12:06:11.562325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.864 qpair failed and we were unable to recover it.
00:37:45.864 [2024-11-18 12:06:11.562428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.864 [2024-11-18 12:06:11.562465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.864 qpair failed and we were unable to recover it.
00:37:45.864 [2024-11-18 12:06:11.562615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.864 [2024-11-18 12:06:11.562649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.864 qpair failed and we were unable to recover it.
00:37:45.864 [2024-11-18 12:06:11.562807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.864 [2024-11-18 12:06:11.562841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.864 qpair failed and we were unable to recover it.
00:37:45.864 [2024-11-18 12:06:11.562974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.864 [2024-11-18 12:06:11.563012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.864 qpair failed and we were unable to recover it.
00:37:45.864 [2024-11-18 12:06:11.563127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.864 [2024-11-18 12:06:11.563164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.864 qpair failed and we were unable to recover it.
00:37:45.864 [2024-11-18 12:06:11.563314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.864 [2024-11-18 12:06:11.563355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.864 qpair failed and we were unable to recover it.
00:37:45.864 [2024-11-18 12:06:11.563518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.864 [2024-11-18 12:06:11.563572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.864 qpair failed and we were unable to recover it.
00:37:45.864 [2024-11-18 12:06:11.563737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.864 [2024-11-18 12:06:11.563776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.864 qpair failed and we were unable to recover it.
00:37:45.864 [2024-11-18 12:06:11.563954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.864 [2024-11-18 12:06:11.563991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.864 qpair failed and we were unable to recover it.
00:37:45.864 [2024-11-18 12:06:11.564158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.864 [2024-11-18 12:06:11.564214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.864 qpair failed and we were unable to recover it.
00:37:45.864 [2024-11-18 12:06:11.564390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.864 [2024-11-18 12:06:11.564429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.864 qpair failed and we were unable to recover it.
00:37:45.864 [2024-11-18 12:06:11.564561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.864 [2024-11-18 12:06:11.564613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.864 qpair failed and we were unable to recover it.
00:37:45.864 [2024-11-18 12:06:11.564751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.864 [2024-11-18 12:06:11.564805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.864 qpair failed and we were unable to recover it. 00:37:45.864 [2024-11-18 12:06:11.565061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.864 [2024-11-18 12:06:11.565119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.864 qpair failed and we were unable to recover it. 00:37:45.864 [2024-11-18 12:06:11.565332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.864 [2024-11-18 12:06:11.565388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.864 qpair failed and we were unable to recover it. 00:37:45.864 [2024-11-18 12:06:11.565519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.864 [2024-11-18 12:06:11.565570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.864 qpair failed and we were unable to recover it. 00:37:45.864 [2024-11-18 12:06:11.565755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.864 [2024-11-18 12:06:11.565803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.864 qpair failed and we were unable to recover it. 
00:37:45.864 [2024-11-18 12:06:11.565924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.864 [2024-11-18 12:06:11.565960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.864 qpair failed and we were unable to recover it. 00:37:45.864 [2024-11-18 12:06:11.566156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.864 [2024-11-18 12:06:11.566215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.864 qpair failed and we were unable to recover it. 00:37:45.864 [2024-11-18 12:06:11.566347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.864 [2024-11-18 12:06:11.566382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.864 qpair failed and we were unable to recover it. 00:37:45.864 [2024-11-18 12:06:11.566513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.864 [2024-11-18 12:06:11.566568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.864 qpair failed and we were unable to recover it. 00:37:45.864 [2024-11-18 12:06:11.566687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.864 [2024-11-18 12:06:11.566724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.864 qpair failed and we were unable to recover it. 
00:37:45.864 [2024-11-18 12:06:11.566999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.864 [2024-11-18 12:06:11.567061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.864 qpair failed and we were unable to recover it. 00:37:45.864 [2024-11-18 12:06:11.567267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.864 [2024-11-18 12:06:11.567324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.864 qpair failed and we were unable to recover it. 00:37:45.864 [2024-11-18 12:06:11.567458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.864 [2024-11-18 12:06:11.567499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.864 qpair failed and we were unable to recover it. 00:37:45.864 [2024-11-18 12:06:11.567634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.864 [2024-11-18 12:06:11.567686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.864 qpair failed and we were unable to recover it. 00:37:45.864 [2024-11-18 12:06:11.567881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.864 [2024-11-18 12:06:11.567951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.864 qpair failed and we were unable to recover it. 
00:37:45.864 [2024-11-18 12:06:11.568088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.864 [2024-11-18 12:06:11.568128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.864 qpair failed and we were unable to recover it. 00:37:45.864 [2024-11-18 12:06:11.568326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.864 [2024-11-18 12:06:11.568381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.864 qpair failed and we were unable to recover it. 00:37:45.865 [2024-11-18 12:06:11.568527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.865 [2024-11-18 12:06:11.568580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.865 qpair failed and we were unable to recover it. 00:37:45.865 [2024-11-18 12:06:11.568694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.865 [2024-11-18 12:06:11.568728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.865 qpair failed and we were unable to recover it. 00:37:45.865 [2024-11-18 12:06:11.568885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.865 [2024-11-18 12:06:11.568922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.865 qpair failed and we were unable to recover it. 
00:37:45.865 [2024-11-18 12:06:11.569140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.865 [2024-11-18 12:06:11.569178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.865 qpair failed and we were unable to recover it. 00:37:45.865 [2024-11-18 12:06:11.569334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.865 [2024-11-18 12:06:11.569372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.865 qpair failed and we were unable to recover it. 00:37:45.865 [2024-11-18 12:06:11.569542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.865 [2024-11-18 12:06:11.569579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.865 qpair failed and we were unable to recover it. 00:37:45.865 [2024-11-18 12:06:11.569749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.865 [2024-11-18 12:06:11.569785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.865 qpair failed and we were unable to recover it. 00:37:45.865 [2024-11-18 12:06:11.569902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.865 [2024-11-18 12:06:11.569956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.865 qpair failed and we were unable to recover it. 
00:37:45.865 [2024-11-18 12:06:11.570162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.865 [2024-11-18 12:06:11.570220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.865 qpair failed and we were unable to recover it. 00:37:45.865 [2024-11-18 12:06:11.570375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.865 [2024-11-18 12:06:11.570413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.865 qpair failed and we were unable to recover it. 00:37:45.865 [2024-11-18 12:06:11.570565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.865 [2024-11-18 12:06:11.570601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.865 qpair failed and we were unable to recover it. 00:37:45.865 [2024-11-18 12:06:11.570731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.865 [2024-11-18 12:06:11.570767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.865 qpair failed and we were unable to recover it. 00:37:45.865 [2024-11-18 12:06:11.570987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.865 [2024-11-18 12:06:11.571025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.865 qpair failed and we were unable to recover it. 
00:37:45.865 [2024-11-18 12:06:11.571152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.865 [2024-11-18 12:06:11.571189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.865 qpair failed and we were unable to recover it. 00:37:45.865 [2024-11-18 12:06:11.571301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.865 [2024-11-18 12:06:11.571339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.865 qpair failed and we were unable to recover it. 00:37:45.865 [2024-11-18 12:06:11.571527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.865 [2024-11-18 12:06:11.571562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.865 qpair failed and we were unable to recover it. 00:37:45.865 [2024-11-18 12:06:11.571671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.865 [2024-11-18 12:06:11.571705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.865 qpair failed and we were unable to recover it. 00:37:45.865 [2024-11-18 12:06:11.571825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.865 [2024-11-18 12:06:11.571863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.865 qpair failed and we were unable to recover it. 
00:37:45.865 [2024-11-18 12:06:11.572064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.865 [2024-11-18 12:06:11.572102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.865 qpair failed and we were unable to recover it. 00:37:45.865 [2024-11-18 12:06:11.572217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.865 [2024-11-18 12:06:11.572254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.865 qpair failed and we were unable to recover it. 00:37:45.865 [2024-11-18 12:06:11.572425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.865 [2024-11-18 12:06:11.572463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.865 qpair failed and we were unable to recover it. 00:37:45.865 [2024-11-18 12:06:11.572619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.865 [2024-11-18 12:06:11.572654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.865 qpair failed and we were unable to recover it. 00:37:45.865 [2024-11-18 12:06:11.572779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.865 [2024-11-18 12:06:11.572832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.865 qpair failed and we were unable to recover it. 
00:37:45.865 [2024-11-18 12:06:11.572995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.865 [2024-11-18 12:06:11.573032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.865 qpair failed and we were unable to recover it. 00:37:45.865 [2024-11-18 12:06:11.573206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.865 [2024-11-18 12:06:11.573243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.865 qpair failed and we were unable to recover it. 00:37:45.865 [2024-11-18 12:06:11.573367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.865 [2024-11-18 12:06:11.573405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.865 qpair failed and we were unable to recover it. 00:37:45.865 [2024-11-18 12:06:11.573586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.865 [2024-11-18 12:06:11.573623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.865 qpair failed and we were unable to recover it. 00:37:45.865 [2024-11-18 12:06:11.573747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.865 [2024-11-18 12:06:11.573795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.865 qpair failed and we were unable to recover it. 
00:37:45.865 [2024-11-18 12:06:11.573941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.865 [2024-11-18 12:06:11.573994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.865 qpair failed and we were unable to recover it. 00:37:45.865 [2024-11-18 12:06:11.574147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.865 [2024-11-18 12:06:11.574201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.865 qpair failed and we were unable to recover it. 00:37:45.865 [2024-11-18 12:06:11.574360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.865 [2024-11-18 12:06:11.574395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.865 qpair failed and we were unable to recover it. 00:37:45.865 [2024-11-18 12:06:11.574544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.865 [2024-11-18 12:06:11.574609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.865 qpair failed and we were unable to recover it. 00:37:45.865 [2024-11-18 12:06:11.574726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.865 [2024-11-18 12:06:11.574762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.865 qpair failed and we were unable to recover it. 
00:37:45.865 [2024-11-18 12:06:11.574872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.865 [2024-11-18 12:06:11.574906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.865 qpair failed and we were unable to recover it. 00:37:45.865 [2024-11-18 12:06:11.575036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.865 [2024-11-18 12:06:11.575070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.865 qpair failed and we were unable to recover it. 00:37:45.865 [2024-11-18 12:06:11.575174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.865 [2024-11-18 12:06:11.575207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.865 qpair failed and we were unable to recover it. 00:37:45.865 [2024-11-18 12:06:11.575372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.865 [2024-11-18 12:06:11.575409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.865 qpair failed and we were unable to recover it. 00:37:45.865 [2024-11-18 12:06:11.575520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.865 [2024-11-18 12:06:11.575555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.866 qpair failed and we were unable to recover it. 
00:37:45.866 [2024-11-18 12:06:11.575690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.866 [2024-11-18 12:06:11.575725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.866 qpair failed and we were unable to recover it. 00:37:45.866 [2024-11-18 12:06:11.575899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.866 [2024-11-18 12:06:11.575935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.866 qpair failed and we were unable to recover it. 00:37:45.866 [2024-11-18 12:06:11.576065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.866 [2024-11-18 12:06:11.576120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.866 qpair failed and we were unable to recover it. 00:37:45.866 [2024-11-18 12:06:11.576251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.866 [2024-11-18 12:06:11.576285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.866 qpair failed and we were unable to recover it. 00:37:45.866 [2024-11-18 12:06:11.576445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.866 [2024-11-18 12:06:11.576479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.866 qpair failed and we were unable to recover it. 
00:37:45.866 [2024-11-18 12:06:11.576623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.866 [2024-11-18 12:06:11.576658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.866 qpair failed and we were unable to recover it. 00:37:45.866 [2024-11-18 12:06:11.576807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.866 [2024-11-18 12:06:11.576854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.866 qpair failed and we were unable to recover it. 00:37:45.866 [2024-11-18 12:06:11.576991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.866 [2024-11-18 12:06:11.577031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.866 qpair failed and we were unable to recover it. 00:37:45.866 [2024-11-18 12:06:11.577145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.866 [2024-11-18 12:06:11.577183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.866 qpair failed and we were unable to recover it. 00:37:45.866 [2024-11-18 12:06:11.577358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.866 [2024-11-18 12:06:11.577396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.866 qpair failed and we were unable to recover it. 
00:37:45.866 [2024-11-18 12:06:11.577557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.866 [2024-11-18 12:06:11.577592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.866 qpair failed and we were unable to recover it. 00:37:45.866 [2024-11-18 12:06:11.577759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.866 [2024-11-18 12:06:11.577799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.866 qpair failed and we were unable to recover it. 00:37:45.866 [2024-11-18 12:06:11.578059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.866 [2024-11-18 12:06:11.578118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.866 qpair failed and we were unable to recover it. 00:37:45.866 [2024-11-18 12:06:11.578245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.866 [2024-11-18 12:06:11.578278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.866 qpair failed and we were unable to recover it. 00:37:45.866 [2024-11-18 12:06:11.578425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.866 [2024-11-18 12:06:11.578459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.866 qpair failed and we were unable to recover it. 
00:37:45.866 [2024-11-18 12:06:11.578631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.866 [2024-11-18 12:06:11.578680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.866 qpair failed and we were unable to recover it. 00:37:45.866 [2024-11-18 12:06:11.578810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.866 [2024-11-18 12:06:11.578847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.866 qpair failed and we were unable to recover it. 00:37:45.866 [2024-11-18 12:06:11.579050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.866 [2024-11-18 12:06:11.579121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.866 qpair failed and we were unable to recover it. 00:37:45.866 [2024-11-18 12:06:11.579303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.866 [2024-11-18 12:06:11.579340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.866 qpair failed and we were unable to recover it. 00:37:45.866 [2024-11-18 12:06:11.579486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.866 [2024-11-18 12:06:11.579548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.866 qpair failed and we were unable to recover it. 
00:37:45.866 [2024-11-18 12:06:11.579664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.866 [2024-11-18 12:06:11.579698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.866 qpair failed and we were unable to recover it. 00:37:45.866 [2024-11-18 12:06:11.579945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.866 [2024-11-18 12:06:11.580001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.866 qpair failed and we were unable to recover it. 00:37:45.866 [2024-11-18 12:06:11.580214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.866 [2024-11-18 12:06:11.580272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.866 qpair failed and we were unable to recover it. 00:37:45.866 [2024-11-18 12:06:11.580418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.866 [2024-11-18 12:06:11.580452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.866 qpair failed and we were unable to recover it. 00:37:45.866 [2024-11-18 12:06:11.580563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.866 [2024-11-18 12:06:11.580598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.866 qpair failed and we were unable to recover it. 
00:37:45.866 [2024-11-18 12:06:11.580747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.866 [2024-11-18 12:06:11.580796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.866 qpair failed and we were unable to recover it. 00:37:45.866 [2024-11-18 12:06:11.581008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.866 [2024-11-18 12:06:11.581074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.866 qpair failed and we were unable to recover it. 00:37:45.866 [2024-11-18 12:06:11.581308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.866 [2024-11-18 12:06:11.581345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.866 qpair failed and we were unable to recover it. 00:37:45.866 [2024-11-18 12:06:11.581479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.866 [2024-11-18 12:06:11.581522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.866 qpair failed and we were unable to recover it. 00:37:45.866 [2024-11-18 12:06:11.581664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.866 [2024-11-18 12:06:11.581699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.866 qpair failed and we were unable to recover it. 
00:37:45.866 [2024-11-18 12:06:11.581873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.866 [2024-11-18 12:06:11.581925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.866 qpair failed and we were unable to recover it.
00:37:45.866 [2024-11-18 12:06:11.582192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.866 [2024-11-18 12:06:11.582250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.866 qpair failed and we were unable to recover it.
00:37:45.866 [2024-11-18 12:06:11.582413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.866 [2024-11-18 12:06:11.582464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.866 qpair failed and we were unable to recover it.
00:37:45.866 [2024-11-18 12:06:11.582610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.866 [2024-11-18 12:06:11.582651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.866 qpair failed and we were unable to recover it.
00:37:45.866 [2024-11-18 12:06:11.582837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.866 [2024-11-18 12:06:11.582890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.866 qpair failed and we were unable to recover it.
00:37:45.866 [2024-11-18 12:06:11.583043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.866 [2024-11-18 12:06:11.583095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.866 qpair failed and we were unable to recover it.
00:37:45.866 [2024-11-18 12:06:11.583251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.866 [2024-11-18 12:06:11.583303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.866 qpair failed and we were unable to recover it.
00:37:45.866 [2024-11-18 12:06:11.583437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.867 [2024-11-18 12:06:11.583504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.867 qpair failed and we were unable to recover it.
00:37:45.867 [2024-11-18 12:06:11.583713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.867 [2024-11-18 12:06:11.583766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.867 qpair failed and we were unable to recover it.
00:37:45.867 [2024-11-18 12:06:11.583933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.867 [2024-11-18 12:06:11.583992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.867 qpair failed and we were unable to recover it.
00:37:45.867 [2024-11-18 12:06:11.584182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.867 [2024-11-18 12:06:11.584222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.867 qpair failed and we were unable to recover it.
00:37:45.867 [2024-11-18 12:06:11.584401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.867 [2024-11-18 12:06:11.584439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.867 qpair failed and we were unable to recover it.
00:37:45.867 [2024-11-18 12:06:11.584602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.867 [2024-11-18 12:06:11.584641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.867 qpair failed and we were unable to recover it.
00:37:45.867 [2024-11-18 12:06:11.584850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.867 [2024-11-18 12:06:11.584908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.867 qpair failed and we were unable to recover it.
00:37:45.867 [2024-11-18 12:06:11.585123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.867 [2024-11-18 12:06:11.585161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.867 qpair failed and we were unable to recover it.
00:37:45.867 [2024-11-18 12:06:11.585285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.867 [2024-11-18 12:06:11.585319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.867 qpair failed and we were unable to recover it.
00:37:45.867 [2024-11-18 12:06:11.585462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.867 [2024-11-18 12:06:11.585505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.867 qpair failed and we were unable to recover it.
00:37:45.867 [2024-11-18 12:06:11.585669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.867 [2024-11-18 12:06:11.585703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.867 qpair failed and we were unable to recover it.
00:37:45.867 [2024-11-18 12:06:11.585852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.867 [2024-11-18 12:06:11.585906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.867 qpair failed and we were unable to recover it.
00:37:45.867 [2024-11-18 12:06:11.586186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.867 [2024-11-18 12:06:11.586246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.867 qpair failed and we were unable to recover it.
00:37:45.867 [2024-11-18 12:06:11.586368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.867 [2024-11-18 12:06:11.586402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.867 qpair failed and we were unable to recover it.
00:37:45.867 [2024-11-18 12:06:11.586537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.867 [2024-11-18 12:06:11.586572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.867 qpair failed and we were unable to recover it.
00:37:45.867 [2024-11-18 12:06:11.586703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.867 [2024-11-18 12:06:11.586755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.867 qpair failed and we were unable to recover it.
00:37:45.867 [2024-11-18 12:06:11.586907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.867 [2024-11-18 12:06:11.586960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.867 qpair failed and we were unable to recover it.
00:37:45.867 [2024-11-18 12:06:11.587148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.867 [2024-11-18 12:06:11.587200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.867 qpair failed and we were unable to recover it.
00:37:45.867 [2024-11-18 12:06:11.587336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.867 [2024-11-18 12:06:11.587369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.867 qpair failed and we were unable to recover it.
00:37:45.867 [2024-11-18 12:06:11.587506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.867 [2024-11-18 12:06:11.587547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.867 qpair failed and we were unable to recover it.
00:37:45.867 [2024-11-18 12:06:11.587696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.867 [2024-11-18 12:06:11.587731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.867 qpair failed and we were unable to recover it.
00:37:45.867 [2024-11-18 12:06:11.587867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.867 [2024-11-18 12:06:11.587900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.867 qpair failed and we were unable to recover it.
00:37:45.867 [2024-11-18 12:06:11.588050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.867 [2024-11-18 12:06:11.588098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.867 qpair failed and we were unable to recover it.
00:37:45.867 [2024-11-18 12:06:11.588246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.867 [2024-11-18 12:06:11.588282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.867 qpair failed and we were unable to recover it.
00:37:45.867 [2024-11-18 12:06:11.588405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.867 [2024-11-18 12:06:11.588453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.867 qpair failed and we were unable to recover it.
00:37:45.867 [2024-11-18 12:06:11.588592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.867 [2024-11-18 12:06:11.588628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.867 qpair failed and we were unable to recover it.
00:37:45.867 [2024-11-18 12:06:11.588781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.867 [2024-11-18 12:06:11.588834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.867 qpair failed and we were unable to recover it.
00:37:45.867 [2024-11-18 12:06:11.589022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.867 [2024-11-18 12:06:11.589086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.867 qpair failed and we were unable to recover it.
00:37:45.867 [2024-11-18 12:06:11.589247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.867 [2024-11-18 12:06:11.589281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.867 qpair failed and we were unable to recover it.
00:37:45.867 [2024-11-18 12:06:11.589389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.867 [2024-11-18 12:06:11.589423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.867 qpair failed and we were unable to recover it.
00:37:45.867 [2024-11-18 12:06:11.589600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.867 [2024-11-18 12:06:11.589640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.867 qpair failed and we were unable to recover it.
00:37:45.867 [2024-11-18 12:06:11.589848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.867 [2024-11-18 12:06:11.589902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.867 qpair failed and we were unable to recover it.
00:37:45.867 [2024-11-18 12:06:11.590087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.867 [2024-11-18 12:06:11.590164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.867 qpair failed and we were unable to recover it.
00:37:45.867 [2024-11-18 12:06:11.590389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.867 [2024-11-18 12:06:11.590425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.867 qpair failed and we were unable to recover it.
00:37:45.867 [2024-11-18 12:06:11.590555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.867 [2024-11-18 12:06:11.590590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.867 qpair failed and we were unable to recover it.
00:37:45.867 [2024-11-18 12:06:11.590730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.867 [2024-11-18 12:06:11.590763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.867 qpair failed and we were unable to recover it.
00:37:45.867 [2024-11-18 12:06:11.590950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.867 [2024-11-18 12:06:11.590994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.867 qpair failed and we were unable to recover it.
00:37:45.867 [2024-11-18 12:06:11.591172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.867 [2024-11-18 12:06:11.591210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.867 qpair failed and we were unable to recover it.
00:37:45.868 [2024-11-18 12:06:11.591362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.868 [2024-11-18 12:06:11.591397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.868 qpair failed and we were unable to recover it.
00:37:45.868 [2024-11-18 12:06:11.591535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.868 [2024-11-18 12:06:11.591569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.868 qpair failed and we were unable to recover it.
00:37:45.868 [2024-11-18 12:06:11.591737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.868 [2024-11-18 12:06:11.591804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.868 qpair failed and we were unable to recover it.
00:37:45.868 [2024-11-18 12:06:11.591937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.868 [2024-11-18 12:06:11.591976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.868 qpair failed and we were unable to recover it.
00:37:45.868 [2024-11-18 12:06:11.592239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.868 [2024-11-18 12:06:11.592277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.868 qpair failed and we were unable to recover it.
00:37:45.868 [2024-11-18 12:06:11.592428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.868 [2024-11-18 12:06:11.592466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.868 qpair failed and we were unable to recover it.
00:37:45.868 [2024-11-18 12:06:11.592607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.868 [2024-11-18 12:06:11.592641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.868 qpair failed and we were unable to recover it.
00:37:45.868 [2024-11-18 12:06:11.592802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.868 [2024-11-18 12:06:11.592836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.868 qpair failed and we were unable to recover it.
00:37:45.868 [2024-11-18 12:06:11.592999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.868 [2024-11-18 12:06:11.593100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.868 qpair failed and we were unable to recover it.
00:37:45.868 [2024-11-18 12:06:11.593300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.868 [2024-11-18 12:06:11.593359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.868 qpair failed and we were unable to recover it.
00:37:45.868 [2024-11-18 12:06:11.593522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.868 [2024-11-18 12:06:11.593556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.868 qpair failed and we were unable to recover it.
00:37:45.868 [2024-11-18 12:06:11.593702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.868 [2024-11-18 12:06:11.593736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.868 qpair failed and we were unable to recover it.
00:37:45.868 [2024-11-18 12:06:11.593914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.868 [2024-11-18 12:06:11.593982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.868 qpair failed and we were unable to recover it.
00:37:45.868 [2024-11-18 12:06:11.594108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.868 [2024-11-18 12:06:11.594159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.868 qpair failed and we were unable to recover it.
00:37:45.868 [2024-11-18 12:06:11.594327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.868 [2024-11-18 12:06:11.594381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.868 qpair failed and we were unable to recover it.
00:37:45.868 [2024-11-18 12:06:11.594507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.868 [2024-11-18 12:06:11.594559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.868 qpair failed and we were unable to recover it.
00:37:45.868 [2024-11-18 12:06:11.594702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.868 [2024-11-18 12:06:11.594735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.868 qpair failed and we were unable to recover it.
00:37:45.868 [2024-11-18 12:06:11.594897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.868 [2024-11-18 12:06:11.594967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.868 qpair failed and we were unable to recover it.
00:37:45.868 [2024-11-18 12:06:11.595140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.868 [2024-11-18 12:06:11.595176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.868 qpair failed and we were unable to recover it.
00:37:45.868 [2024-11-18 12:06:11.595297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.868 [2024-11-18 12:06:11.595347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.868 qpair failed and we were unable to recover it.
00:37:45.868 [2024-11-18 12:06:11.595475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.868 [2024-11-18 12:06:11.595541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.868 qpair failed and we were unable to recover it.
00:37:45.868 [2024-11-18 12:06:11.595727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.868 [2024-11-18 12:06:11.595776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.868 qpair failed and we were unable to recover it.
00:37:45.868 [2024-11-18 12:06:11.595948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.868 [2024-11-18 12:06:11.595984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.868 qpair failed and we were unable to recover it.
00:37:45.868 [2024-11-18 12:06:11.596144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.868 [2024-11-18 12:06:11.596180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.868 qpair failed and we were unable to recover it.
00:37:45.868 [2024-11-18 12:06:11.596335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.868 [2024-11-18 12:06:11.596386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.868 qpair failed and we were unable to recover it.
00:37:45.868 [2024-11-18 12:06:11.596578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.868 [2024-11-18 12:06:11.596626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.868 qpair failed and we were unable to recover it.
00:37:45.868 [2024-11-18 12:06:11.596766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.868 [2024-11-18 12:06:11.596814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.868 qpair failed and we were unable to recover it.
00:37:45.868 [2024-11-18 12:06:11.596982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.868 [2024-11-18 12:06:11.597017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.868 qpair failed and we were unable to recover it.
00:37:45.868 [2024-11-18 12:06:11.597180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.868 [2024-11-18 12:06:11.597218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.868 qpair failed and we were unable to recover it.
00:37:45.868 [2024-11-18 12:06:11.597418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.868 [2024-11-18 12:06:11.597456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.868 qpair failed and we were unable to recover it.
00:37:45.868 [2024-11-18 12:06:11.597639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.868 [2024-11-18 12:06:11.597688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.868 qpair failed and we were unable to recover it.
00:37:45.868 [2024-11-18 12:06:11.597829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.868 [2024-11-18 12:06:11.597865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.868 qpair failed and we were unable to recover it.
00:37:45.868 [2024-11-18 12:06:11.597996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.868 [2024-11-18 12:06:11.598035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.868 qpair failed and we were unable to recover it.
00:37:45.868 [2024-11-18 12:06:11.598207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.869 [2024-11-18 12:06:11.598261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.869 qpair failed and we were unable to recover it.
00:37:45.869 [2024-11-18 12:06:11.598404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.869 [2024-11-18 12:06:11.598442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.869 qpair failed and we were unable to recover it.
00:37:45.869 [2024-11-18 12:06:11.598643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.869 [2024-11-18 12:06:11.598679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.869 qpair failed and we were unable to recover it.
00:37:45.869 [2024-11-18 12:06:11.598806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.869 [2024-11-18 12:06:11.598874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.869 qpair failed and we were unable to recover it.
00:37:45.869 [2024-11-18 12:06:11.599111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.869 [2024-11-18 12:06:11.599165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.869 qpair failed and we were unable to recover it.
00:37:45.869 [2024-11-18 12:06:11.599341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.869 [2024-11-18 12:06:11.599386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.869 qpair failed and we were unable to recover it.
00:37:45.869 [2024-11-18 12:06:11.599540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.869 [2024-11-18 12:06:11.599575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.869 qpair failed and we were unable to recover it.
00:37:45.869 [2024-11-18 12:06:11.599715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.869 [2024-11-18 12:06:11.599750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.869 qpair failed and we were unable to recover it.
00:37:45.869 [2024-11-18 12:06:11.600028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.869 [2024-11-18 12:06:11.600066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.869 qpair failed and we were unable to recover it.
00:37:45.869 [2024-11-18 12:06:11.600306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.869 [2024-11-18 12:06:11.600365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.869 qpair failed and we were unable to recover it.
00:37:45.869 [2024-11-18 12:06:11.600540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.869 [2024-11-18 12:06:11.600574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.869 qpair failed and we were unable to recover it.
00:37:45.869 [2024-11-18 12:06:11.600715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.869 [2024-11-18 12:06:11.600748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.869 qpair failed and we were unable to recover it.
00:37:45.869 [2024-11-18 12:06:11.600887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.869 [2024-11-18 12:06:11.600938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.869 qpair failed and we were unable to recover it.
00:37:45.869 [2024-11-18 12:06:11.601121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.869 [2024-11-18 12:06:11.601158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.869 qpair failed and we were unable to recover it.
00:37:45.869 [2024-11-18 12:06:11.601275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.869 [2024-11-18 12:06:11.601312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.869 qpair failed and we were unable to recover it.
00:37:45.869 [2024-11-18 12:06:11.601460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.869 [2024-11-18 12:06:11.601506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.869 qpair failed and we were unable to recover it.
00:37:45.869 [2024-11-18 12:06:11.601660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.869 [2024-11-18 12:06:11.601693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.869 qpair failed and we were unable to recover it.
00:37:45.869 [2024-11-18 12:06:11.601809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.869 [2024-11-18 12:06:11.601857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.869 qpair failed and we were unable to recover it.
00:37:45.869 [2024-11-18 12:06:11.602018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.869 [2024-11-18 12:06:11.602072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.869 qpair failed and we were unable to recover it.
00:37:45.869 [2024-11-18 12:06:11.602267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.869 [2024-11-18 12:06:11.602320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.869 qpair failed and we were unable to recover it.
00:37:45.869 [2024-11-18 12:06:11.602482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.869 [2024-11-18 12:06:11.602545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.869 qpair failed and we were unable to recover it.
00:37:45.869 [2024-11-18 12:06:11.602688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.869 [2024-11-18 12:06:11.602741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.869 qpair failed and we were unable to recover it.
00:37:45.869 [2024-11-18 12:06:11.602894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.869 [2024-11-18 12:06:11.602946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.869 qpair failed and we were unable to recover it.
00:37:45.869 [2024-11-18 12:06:11.603097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.869 [2024-11-18 12:06:11.603145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.869 qpair failed and we were unable to recover it.
00:37:45.869 [2024-11-18 12:06:11.603300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.869 [2024-11-18 12:06:11.603335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.869 qpair failed and we were unable to recover it.
00:37:45.869 [2024-11-18 12:06:11.603450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.869 [2024-11-18 12:06:11.603485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.869 qpair failed and we were unable to recover it.
00:37:45.869 [2024-11-18 12:06:11.603607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.869 [2024-11-18 12:06:11.603641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.869 qpair failed and we were unable to recover it.
00:37:45.869 [2024-11-18 12:06:11.603757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.869 [2024-11-18 12:06:11.603792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.869 qpair failed and we were unable to recover it.
00:37:45.869 [2024-11-18 12:06:11.603912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.869 [2024-11-18 12:06:11.603949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.869 qpair failed and we were unable to recover it.
00:37:45.869 [2024-11-18 12:06:11.604102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.869 [2024-11-18 12:06:11.604140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.869 qpair failed and we were unable to recover it.
00:37:45.869 [2024-11-18 12:06:11.604292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.869 [2024-11-18 12:06:11.604331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.869 qpair failed and we were unable to recover it.
00:37:45.869 [2024-11-18 12:06:11.604497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.869 [2024-11-18 12:06:11.604534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.869 qpair failed and we were unable to recover it.
00:37:45.869 [2024-11-18 12:06:11.604655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.869 [2024-11-18 12:06:11.604689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.869 qpair failed and we were unable to recover it.
00:37:45.869 [2024-11-18 12:06:11.604886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.869 [2024-11-18 12:06:11.604938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.869 qpair failed and we were unable to recover it.
00:37:45.869 [2024-11-18 12:06:11.605120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.869 [2024-11-18 12:06:11.605181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.869 qpair failed and we were unable to recover it.
00:37:45.869 [2024-11-18 12:06:11.605315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.869 [2024-11-18 12:06:11.605349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.869 qpair failed and we were unable to recover it.
00:37:45.869 [2024-11-18 12:06:11.605478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.869 [2024-11-18 12:06:11.605530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.869 qpair failed and we were unable to recover it.
00:37:45.869 [2024-11-18 12:06:11.605637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.870 [2024-11-18 12:06:11.605671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.870 qpair failed and we were unable to recover it.
00:37:45.870 [2024-11-18 12:06:11.605804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.870 [2024-11-18 12:06:11.605841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.870 qpair failed and we were unable to recover it.
00:37:45.870 [2024-11-18 12:06:11.606031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.870 [2024-11-18 12:06:11.606070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.870 qpair failed and we were unable to recover it.
00:37:45.870 [2024-11-18 12:06:11.606277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.870 [2024-11-18 12:06:11.606337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.870 qpair failed and we were unable to recover it.
00:37:45.870 [2024-11-18 12:06:11.606447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.870 [2024-11-18 12:06:11.606485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.870 qpair failed and we were unable to recover it.
00:37:45.870 [2024-11-18 12:06:11.606652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.870 [2024-11-18 12:06:11.606686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.870 qpair failed and we were unable to recover it.
00:37:45.870 [2024-11-18 12:06:11.606872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.870 [2024-11-18 12:06:11.606910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.870 qpair failed and we were unable to recover it.
00:37:45.870 [2024-11-18 12:06:11.607063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.870 [2024-11-18 12:06:11.607113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.870 qpair failed and we were unable to recover it.
00:37:45.870 [2024-11-18 12:06:11.607255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.870 [2024-11-18 12:06:11.607302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.870 qpair failed and we were unable to recover it.
00:37:45.870 [2024-11-18 12:06:11.607450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.870 [2024-11-18 12:06:11.607508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.870 qpair failed and we were unable to recover it.
00:37:45.870 [2024-11-18 12:06:11.607618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.870 [2024-11-18 12:06:11.607652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.870 qpair failed and we were unable to recover it.
00:37:45.870 [2024-11-18 12:06:11.607764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.870 [2024-11-18 12:06:11.607799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.870 qpair failed and we were unable to recover it.
00:37:45.870 [2024-11-18 12:06:11.607903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.870 [2024-11-18 12:06:11.607937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.870 qpair failed and we were unable to recover it.
00:37:45.870 [2024-11-18 12:06:11.608060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.870 [2024-11-18 12:06:11.608098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.870 qpair failed and we were unable to recover it.
00:37:45.870 [2024-11-18 12:06:11.608276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.870 [2024-11-18 12:06:11.608331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.870 qpair failed and we were unable to recover it.
00:37:45.870 [2024-11-18 12:06:11.608477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.870 [2024-11-18 12:06:11.608519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.870 qpair failed and we were unable to recover it.
00:37:45.870 [2024-11-18 12:06:11.608656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.870 [2024-11-18 12:06:11.608691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.870 qpair failed and we were unable to recover it.
00:37:45.870 [2024-11-18 12:06:11.608883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.870 [2024-11-18 12:06:11.608918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.870 qpair failed and we were unable to recover it.
00:37:45.870 [2024-11-18 12:06:11.609050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.870 [2024-11-18 12:06:11.609088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.870 qpair failed and we were unable to recover it.
00:37:45.870 [2024-11-18 12:06:11.609229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.870 [2024-11-18 12:06:11.609266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.870 qpair failed and we were unable to recover it.
00:37:45.870 [2024-11-18 12:06:11.609398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.870 [2024-11-18 12:06:11.609432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.870 qpair failed and we were unable to recover it.
00:37:45.870 [2024-11-18 12:06:11.609580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.870 [2024-11-18 12:06:11.609614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.870 qpair failed and we were unable to recover it.
00:37:45.870 [2024-11-18 12:06:11.609734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.870 [2024-11-18 12:06:11.609768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.870 qpair failed and we were unable to recover it.
00:37:45.870 [2024-11-18 12:06:11.609921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.870 [2024-11-18 12:06:11.609959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.870 qpair failed and we were unable to recover it.
00:37:45.870 [2024-11-18 12:06:11.610102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.870 [2024-11-18 12:06:11.610140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.870 qpair failed and we were unable to recover it.
00:37:45.870 [2024-11-18 12:06:11.610281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.870 [2024-11-18 12:06:11.610318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.870 qpair failed and we were unable to recover it.
00:37:45.870 [2024-11-18 12:06:11.610462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.870 [2024-11-18 12:06:11.610518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.870 qpair failed and we were unable to recover it.
00:37:45.870 [2024-11-18 12:06:11.610696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.870 [2024-11-18 12:06:11.610733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.870 qpair failed and we were unable to recover it.
00:37:45.870 [2024-11-18 12:06:11.610892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.870 [2024-11-18 12:06:11.610945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.870 qpair failed and we were unable to recover it.
00:37:45.870 [2024-11-18 12:06:11.611110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.870 [2024-11-18 12:06:11.611163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.870 qpair failed and we were unable to recover it.
00:37:45.870 [2024-11-18 12:06:11.611324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.870 [2024-11-18 12:06:11.611370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.870 qpair failed and we were unable to recover it.
00:37:45.870 [2024-11-18 12:06:11.611485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.870 [2024-11-18 12:06:11.611537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.870 qpair failed and we were unable to recover it.
00:37:45.870 [2024-11-18 12:06:11.611692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.871 [2024-11-18 12:06:11.611740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.871 qpair failed and we were unable to recover it.
00:37:45.871 [2024-11-18 12:06:11.611884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.871 [2024-11-18 12:06:11.611920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.871 qpair failed and we were unable to recover it.
00:37:45.871 [2024-11-18 12:06:11.612023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.871 [2024-11-18 12:06:11.612057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.871 qpair failed and we were unable to recover it.
00:37:45.871 [2024-11-18 12:06:11.612277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.871 [2024-11-18 12:06:11.612311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.871 qpair failed and we were unable to recover it.
00:37:45.871 [2024-11-18 12:06:11.612471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.871 [2024-11-18 12:06:11.612512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.871 qpair failed and we were unable to recover it.
00:37:45.871 [2024-11-18 12:06:11.612625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.871 [2024-11-18 12:06:11.612659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.871 qpair failed and we were unable to recover it.
00:37:45.871 [2024-11-18 12:06:11.612795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.871 [2024-11-18 12:06:11.612828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.871 qpair failed and we were unable to recover it.
00:37:45.871 [2024-11-18 12:06:11.612966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.871 [2024-11-18 12:06:11.613000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.871 qpair failed and we were unable to recover it.
00:37:45.871 [2024-11-18 12:06:11.613162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.871 [2024-11-18 12:06:11.613199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.871 qpair failed and we were unable to recover it.
00:37:45.871 [2024-11-18 12:06:11.613310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.871 [2024-11-18 12:06:11.613347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.871 qpair failed and we were unable to recover it.
00:37:45.871 [2024-11-18 12:06:11.613526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.871 [2024-11-18 12:06:11.613563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.871 qpair failed and we were unable to recover it.
00:37:45.871 [2024-11-18 12:06:11.613708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.871 [2024-11-18 12:06:11.613742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.871 qpair failed and we were unable to recover it.
00:37:45.871 [2024-11-18 12:06:11.613896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.871 [2024-11-18 12:06:11.613934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.871 qpair failed and we were unable to recover it.
00:37:45.871 [2024-11-18 12:06:11.614052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.871 [2024-11-18 12:06:11.614090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.871 qpair failed and we were unable to recover it.
00:37:45.871 [2024-11-18 12:06:11.614244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.871 [2024-11-18 12:06:11.614282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.871 qpair failed and we were unable to recover it.
00:37:45.871 [2024-11-18 12:06:11.614411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.871 [2024-11-18 12:06:11.614448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.871 qpair failed and we were unable to recover it.
00:37:45.871 [2024-11-18 12:06:11.614620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.871 [2024-11-18 12:06:11.614662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.871 qpair failed and we were unable to recover it.
00:37:45.871 [2024-11-18 12:06:11.614774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.871 [2024-11-18 12:06:11.614809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.871 qpair failed and we were unable to recover it.
00:37:45.871 [2024-11-18 12:06:11.614940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.871 [2024-11-18 12:06:11.614980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.871 qpair failed and we were unable to recover it.
00:37:45.871 [2024-11-18 12:06:11.615206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.871 [2024-11-18 12:06:11.615244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.871 qpair failed and we were unable to recover it.
00:37:45.871 [2024-11-18 12:06:11.615417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.871 [2024-11-18 12:06:11.615455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.871 qpair failed and we were unable to recover it.
00:37:45.871 [2024-11-18 12:06:11.615634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.871 [2024-11-18 12:06:11.615668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.871 qpair failed and we were unable to recover it.
00:37:45.871 [2024-11-18 12:06:11.615777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.871 [2024-11-18 12:06:11.615811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.871 qpair failed and we were unable to recover it.
00:37:45.871 [2024-11-18 12:06:11.615943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.871 [2024-11-18 12:06:11.615995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.871 qpair failed and we were unable to recover it.
00:37:45.871 [2024-11-18 12:06:11.616151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.871 [2024-11-18 12:06:11.616213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.871 qpair failed and we were unable to recover it.
00:37:45.871 [2024-11-18 12:06:11.616377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.871 [2024-11-18 12:06:11.616415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.871 qpair failed and we were unable to recover it.
00:37:45.871 [2024-11-18 12:06:11.616589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.871 [2024-11-18 12:06:11.616625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.871 qpair failed and we were unable to recover it.
00:37:45.871 [2024-11-18 12:06:11.616787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.871 [2024-11-18 12:06:11.616838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.871 qpair failed and we were unable to recover it.
00:37:45.871 [2024-11-18 12:06:11.616972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.871 [2024-11-18 12:06:11.617006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.871 qpair failed and we were unable to recover it.
00:37:45.871 [2024-11-18 12:06:11.617192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.871 [2024-11-18 12:06:11.617226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.871 qpair failed and we were unable to recover it.
00:37:45.871 [2024-11-18 12:06:11.617370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.871 [2024-11-18 12:06:11.617405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.871 qpair failed and we were unable to recover it.
00:37:45.871 [2024-11-18 12:06:11.617571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.871 [2024-11-18 12:06:11.617626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.871 qpair failed and we were unable to recover it.
00:37:45.871 [2024-11-18 12:06:11.617796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.871 [2024-11-18 12:06:11.617835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.871 qpair failed and we were unable to recover it.
00:37:45.871 [2024-11-18 12:06:11.617953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.871 [2024-11-18 12:06:11.617991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.871 qpair failed and we were unable to recover it.
00:37:45.871 [2024-11-18 12:06:11.618110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.871 [2024-11-18 12:06:11.618148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.871 qpair failed and we were unable to recover it.
00:37:45.871 [2024-11-18 12:06:11.618297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.871 [2024-11-18 12:06:11.618335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.871 qpair failed and we were unable to recover it.
00:37:45.871 [2024-11-18 12:06:11.618501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.871 [2024-11-18 12:06:11.618544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.871 qpair failed and we were unable to recover it.
00:37:45.871 [2024-11-18 12:06:11.618786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.872 [2024-11-18 12:06:11.618824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.872 qpair failed and we were unable to recover it.
00:37:45.872 [2024-11-18 12:06:11.619043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.872 [2024-11-18 12:06:11.619081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.872 qpair failed and we were unable to recover it.
00:37:45.872 [2024-11-18 12:06:11.619258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.872 [2024-11-18 12:06:11.619295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.872 qpair failed and we were unable to recover it.
00:37:45.872 [2024-11-18 12:06:11.619475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.872 [2024-11-18 12:06:11.619519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.872 qpair failed and we were unable to recover it.
00:37:45.872 [2024-11-18 12:06:11.619648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.872 [2024-11-18 12:06:11.619681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.872 qpair failed and we were unable to recover it.
00:37:45.872 [2024-11-18 12:06:11.619929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.872 [2024-11-18 12:06:11.619967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.872 qpair failed and we were unable to recover it.
00:37:45.872 [2024-11-18 12:06:11.620124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.872 [2024-11-18 12:06:11.620194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.872 qpair failed and we were unable to recover it. 00:37:45.872 [2024-11-18 12:06:11.620399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.872 [2024-11-18 12:06:11.620436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.872 qpair failed and we were unable to recover it. 00:37:45.872 [2024-11-18 12:06:11.620597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.872 [2024-11-18 12:06:11.620635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.872 qpair failed and we were unable to recover it. 00:37:45.872 [2024-11-18 12:06:11.620796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.872 [2024-11-18 12:06:11.620833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.872 qpair failed and we were unable to recover it. 00:37:45.872 [2024-11-18 12:06:11.621007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.872 [2024-11-18 12:06:11.621044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.872 qpair failed and we were unable to recover it. 
00:37:45.872 [2024-11-18 12:06:11.621209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.872 [2024-11-18 12:06:11.621246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.872 qpair failed and we were unable to recover it. 00:37:45.872 [2024-11-18 12:06:11.621380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.872 [2024-11-18 12:06:11.621415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.872 qpair failed and we were unable to recover it. 00:37:45.872 [2024-11-18 12:06:11.621581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.872 [2024-11-18 12:06:11.621615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.872 qpair failed and we were unable to recover it. 00:37:45.872 [2024-11-18 12:06:11.621729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.872 [2024-11-18 12:06:11.621786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.872 qpair failed and we were unable to recover it. 00:37:45.872 [2024-11-18 12:06:11.621939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.872 [2024-11-18 12:06:11.621972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.872 qpair failed and we were unable to recover it. 
00:37:45.872 [2024-11-18 12:06:11.622107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.872 [2024-11-18 12:06:11.622159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.872 qpair failed and we were unable to recover it. 00:37:45.872 [2024-11-18 12:06:11.622343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.872 [2024-11-18 12:06:11.622379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.872 qpair failed and we were unable to recover it. 00:37:45.872 [2024-11-18 12:06:11.622556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.872 [2024-11-18 12:06:11.622591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.872 qpair failed and we were unable to recover it. 00:37:45.872 [2024-11-18 12:06:11.622704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.872 [2024-11-18 12:06:11.622749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.872 qpair failed and we were unable to recover it. 00:37:45.872 [2024-11-18 12:06:11.622880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.872 [2024-11-18 12:06:11.622931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.872 qpair failed and we were unable to recover it. 
00:37:45.872 [2024-11-18 12:06:11.623105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.872 [2024-11-18 12:06:11.623159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.872 qpair failed and we were unable to recover it. 00:37:45.872 [2024-11-18 12:06:11.623349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.872 [2024-11-18 12:06:11.623389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.872 qpair failed and we were unable to recover it. 00:37:45.872 [2024-11-18 12:06:11.623526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.872 [2024-11-18 12:06:11.623562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.872 qpair failed and we were unable to recover it. 00:37:45.872 [2024-11-18 12:06:11.623739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.872 [2024-11-18 12:06:11.623790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.872 qpair failed and we were unable to recover it. 00:37:45.872 [2024-11-18 12:06:11.623978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.872 [2024-11-18 12:06:11.624016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.872 qpair failed and we were unable to recover it. 
00:37:45.872 [2024-11-18 12:06:11.624179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.872 [2024-11-18 12:06:11.624230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.872 qpair failed and we were unable to recover it. 00:37:45.872 [2024-11-18 12:06:11.624452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.872 [2024-11-18 12:06:11.624497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.872 qpair failed and we were unable to recover it. 00:37:45.872 [2024-11-18 12:06:11.624630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.872 [2024-11-18 12:06:11.624664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.872 qpair failed and we were unable to recover it. 00:37:45.872 [2024-11-18 12:06:11.624789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.872 [2024-11-18 12:06:11.624823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.872 qpair failed and we were unable to recover it. 00:37:45.872 [2024-11-18 12:06:11.625068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.872 [2024-11-18 12:06:11.625106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.872 qpair failed and we were unable to recover it. 
00:37:45.872 [2024-11-18 12:06:11.625361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.872 [2024-11-18 12:06:11.625399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.872 qpair failed and we were unable to recover it. 00:37:45.872 [2024-11-18 12:06:11.625504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.872 [2024-11-18 12:06:11.625559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.872 qpair failed and we were unable to recover it. 00:37:45.872 [2024-11-18 12:06:11.625708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.872 [2024-11-18 12:06:11.625743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.872 qpair failed and we were unable to recover it. 00:37:45.872 [2024-11-18 12:06:11.625931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.872 [2024-11-18 12:06:11.625968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.872 qpair failed and we were unable to recover it. 00:37:45.872 [2024-11-18 12:06:11.626102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.872 [2024-11-18 12:06:11.626153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.872 qpair failed and we were unable to recover it. 
00:37:45.872 [2024-11-18 12:06:11.626299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.872 [2024-11-18 12:06:11.626337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.872 qpair failed and we were unable to recover it. 00:37:45.872 [2024-11-18 12:06:11.626479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.872 [2024-11-18 12:06:11.626524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.873 qpair failed and we were unable to recover it. 00:37:45.873 [2024-11-18 12:06:11.626700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.873 [2024-11-18 12:06:11.626734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.873 qpair failed and we were unable to recover it. 00:37:45.873 [2024-11-18 12:06:11.626900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.873 [2024-11-18 12:06:11.626934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.873 qpair failed and we were unable to recover it. 00:37:45.873 [2024-11-18 12:06:11.627076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.873 [2024-11-18 12:06:11.627128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.873 qpair failed and we were unable to recover it. 
00:37:45.873 [2024-11-18 12:06:11.627303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.873 [2024-11-18 12:06:11.627340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.873 qpair failed and we were unable to recover it. 00:37:45.873 [2024-11-18 12:06:11.627513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.873 [2024-11-18 12:06:11.627567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.873 qpair failed and we were unable to recover it. 00:37:45.873 [2024-11-18 12:06:11.627707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.873 [2024-11-18 12:06:11.627740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.873 qpair failed and we were unable to recover it. 00:37:45.873 [2024-11-18 12:06:11.627875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.873 [2024-11-18 12:06:11.627908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.873 qpair failed and we were unable to recover it. 00:37:45.873 [2024-11-18 12:06:11.628094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.873 [2024-11-18 12:06:11.628135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.873 qpair failed and we were unable to recover it. 
00:37:45.873 [2024-11-18 12:06:11.628260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.873 [2024-11-18 12:06:11.628298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.873 qpair failed and we were unable to recover it. 00:37:45.873 [2024-11-18 12:06:11.628452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.873 [2024-11-18 12:06:11.628487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.873 qpair failed and we were unable to recover it. 00:37:45.873 [2024-11-18 12:06:11.628669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.873 [2024-11-18 12:06:11.628703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.873 qpair failed and we were unable to recover it. 00:37:45.873 [2024-11-18 12:06:11.628872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.873 [2024-11-18 12:06:11.628907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.873 qpair failed and we were unable to recover it. 00:37:45.873 [2024-11-18 12:06:11.629034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.873 [2024-11-18 12:06:11.629067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.873 qpair failed and we were unable to recover it. 
00:37:45.873 [2024-11-18 12:06:11.629255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.873 [2024-11-18 12:06:11.629292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.873 qpair failed and we were unable to recover it. 00:37:45.873 [2024-11-18 12:06:11.629404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.873 [2024-11-18 12:06:11.629441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.873 qpair failed and we were unable to recover it. 00:37:45.873 [2024-11-18 12:06:11.629625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.873 [2024-11-18 12:06:11.629660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.873 qpair failed and we were unable to recover it. 00:37:45.873 [2024-11-18 12:06:11.629838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.873 [2024-11-18 12:06:11.629876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.873 qpair failed and we were unable to recover it. 00:37:45.873 [2024-11-18 12:06:11.630071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.873 [2024-11-18 12:06:11.630109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.873 qpair failed and we were unable to recover it. 
00:37:45.873 [2024-11-18 12:06:11.630262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.873 [2024-11-18 12:06:11.630299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.873 qpair failed and we were unable to recover it. 00:37:45.873 [2024-11-18 12:06:11.630446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.873 [2024-11-18 12:06:11.630482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.873 qpair failed and we were unable to recover it. 00:37:45.873 [2024-11-18 12:06:11.630646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.873 [2024-11-18 12:06:11.630681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.873 qpair failed and we were unable to recover it. 00:37:45.873 [2024-11-18 12:06:11.630792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.873 [2024-11-18 12:06:11.630830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.873 qpair failed and we were unable to recover it. 00:37:45.873 [2024-11-18 12:06:11.630957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.873 [2024-11-18 12:06:11.631007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.873 qpair failed and we were unable to recover it. 
00:37:45.873 [2024-11-18 12:06:11.631209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.873 [2024-11-18 12:06:11.631245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.873 qpair failed and we were unable to recover it. 00:37:45.873 [2024-11-18 12:06:11.631389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.873 [2024-11-18 12:06:11.631426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.873 qpair failed and we were unable to recover it. 00:37:45.873 [2024-11-18 12:06:11.631577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.873 [2024-11-18 12:06:11.631611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.873 qpair failed and we were unable to recover it. 00:37:45.873 [2024-11-18 12:06:11.631779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.873 [2024-11-18 12:06:11.631833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.873 qpair failed and we were unable to recover it. 00:37:45.873 [2024-11-18 12:06:11.632066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.873 [2024-11-18 12:06:11.632105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.873 qpair failed and we were unable to recover it. 
00:37:45.873 [2024-11-18 12:06:11.632252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.873 [2024-11-18 12:06:11.632289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.873 qpair failed and we were unable to recover it. 00:37:45.873 [2024-11-18 12:06:11.632470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.873 [2024-11-18 12:06:11.632512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.873 qpair failed and we were unable to recover it. 00:37:45.873 [2024-11-18 12:06:11.632664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.873 [2024-11-18 12:06:11.632698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.873 qpair failed and we were unable to recover it. 00:37:45.873 [2024-11-18 12:06:11.632835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.873 [2024-11-18 12:06:11.632870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.873 qpair failed and we were unable to recover it. 00:37:45.873 [2024-11-18 12:06:11.633000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.874 [2024-11-18 12:06:11.633038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.874 qpair failed and we were unable to recover it. 
00:37:45.874 [2024-11-18 12:06:11.633169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.874 [2024-11-18 12:06:11.633204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.874 qpair failed and we were unable to recover it. 00:37:45.874 [2024-11-18 12:06:11.633343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.874 [2024-11-18 12:06:11.633376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.874 qpair failed and we were unable to recover it. 00:37:45.874 [2024-11-18 12:06:11.633510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.874 [2024-11-18 12:06:11.633554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.874 qpair failed and we were unable to recover it. 00:37:45.874 [2024-11-18 12:06:11.633705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.874 [2024-11-18 12:06:11.633742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.874 qpair failed and we were unable to recover it. 00:37:45.874 [2024-11-18 12:06:11.633924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.874 [2024-11-18 12:06:11.633958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.874 qpair failed and we were unable to recover it. 
00:37:45.874 [2024-11-18 12:06:11.634114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.874 [2024-11-18 12:06:11.634152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.874 qpair failed and we were unable to recover it. 00:37:45.874 [2024-11-18 12:06:11.634304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.874 [2024-11-18 12:06:11.634342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.874 qpair failed and we were unable to recover it. 00:37:45.874 [2024-11-18 12:06:11.634481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.874 [2024-11-18 12:06:11.634544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.874 qpair failed and we were unable to recover it. 00:37:45.874 [2024-11-18 12:06:11.634688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.874 [2024-11-18 12:06:11.634723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.874 qpair failed and we were unable to recover it. 00:37:45.874 [2024-11-18 12:06:11.634902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.874 [2024-11-18 12:06:11.634939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.874 qpair failed and we were unable to recover it. 
00:37:45.874 [2024-11-18 12:06:11.635113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.874 [2024-11-18 12:06:11.635147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.874 qpair failed and we were unable to recover it. 00:37:45.874 [2024-11-18 12:06:11.635278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.874 [2024-11-18 12:06:11.635312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.874 qpair failed and we were unable to recover it. 00:37:45.874 [2024-11-18 12:06:11.635486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.874 [2024-11-18 12:06:11.635540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.874 qpair failed and we were unable to recover it. 00:37:45.874 [2024-11-18 12:06:11.635686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.874 [2024-11-18 12:06:11.635720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.874 qpair failed and we were unable to recover it. 00:37:45.874 [2024-11-18 12:06:11.635916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.874 [2024-11-18 12:06:11.635950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.874 qpair failed and we were unable to recover it. 
00:37:45.874 [2024-11-18 12:06:11.636060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.874 [2024-11-18 12:06:11.636093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.874 qpair failed and we were unable to recover it. 00:37:45.874 [2024-11-18 12:06:11.636246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.874 [2024-11-18 12:06:11.636281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.874 qpair failed and we were unable to recover it. 00:37:45.874 [2024-11-18 12:06:11.636441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.874 [2024-11-18 12:06:11.636476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.874 qpair failed and we were unable to recover it. 00:37:45.874 [2024-11-18 12:06:11.636596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.874 [2024-11-18 12:06:11.636634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.874 qpair failed and we were unable to recover it. 00:37:45.874 [2024-11-18 12:06:11.636772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.874 [2024-11-18 12:06:11.636824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.874 qpair failed and we were unable to recover it. 
00:37:45.874 [2024-11-18 12:06:11.636965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.874 [2024-11-18 12:06:11.636999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.874 qpair failed and we were unable to recover it. 
00:37:45.877 [2024-11-18 12:06:11.658434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.877 [2024-11-18 12:06:11.658472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.877 qpair failed and we were unable to recover it. 00:37:45.877 [2024-11-18 12:06:11.658611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.877 [2024-11-18 12:06:11.658645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.877 qpair failed and we were unable to recover it. 00:37:45.877 [2024-11-18 12:06:11.658780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.877 [2024-11-18 12:06:11.658832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.877 qpair failed and we were unable to recover it. 00:37:45.877 [2024-11-18 12:06:11.658985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.877 [2024-11-18 12:06:11.659023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.877 qpair failed and we were unable to recover it. 00:37:45.877 [2024-11-18 12:06:11.659182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.877 [2024-11-18 12:06:11.659221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.877 qpair failed and we were unable to recover it. 
00:37:45.877 [2024-11-18 12:06:11.659376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.877 [2024-11-18 12:06:11.659410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.877 qpair failed and we were unable to recover it. 00:37:45.877 [2024-11-18 12:06:11.659586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.877 [2024-11-18 12:06:11.659625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.877 qpair failed and we were unable to recover it. 00:37:45.877 [2024-11-18 12:06:11.659761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.877 [2024-11-18 12:06:11.659799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.877 qpair failed and we were unable to recover it. 00:37:45.877 [2024-11-18 12:06:11.659953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.877 [2024-11-18 12:06:11.659991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.877 qpair failed and we were unable to recover it. 00:37:45.877 [2024-11-18 12:06:11.660174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.877 [2024-11-18 12:06:11.660207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.877 qpair failed and we were unable to recover it. 
00:37:45.877 [2024-11-18 12:06:11.660313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.877 [2024-11-18 12:06:11.660347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.877 qpair failed and we were unable to recover it. 00:37:45.877 [2024-11-18 12:06:11.660466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.877 [2024-11-18 12:06:11.660523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.877 qpair failed and we were unable to recover it. 00:37:45.877 [2024-11-18 12:06:11.660655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.877 [2024-11-18 12:06:11.660689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.877 qpair failed and we were unable to recover it. 00:37:45.877 [2024-11-18 12:06:11.660825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.877 [2024-11-18 12:06:11.660859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.877 qpair failed and we were unable to recover it. 00:37:45.878 [2024-11-18 12:06:11.660999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.878 [2024-11-18 12:06:11.661034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.878 qpair failed and we were unable to recover it. 
00:37:45.878 [2024-11-18 12:06:11.661170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.878 [2024-11-18 12:06:11.661208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.878 qpair failed and we were unable to recover it. 00:37:45.878 [2024-11-18 12:06:11.661386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.878 [2024-11-18 12:06:11.661438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.878 qpair failed and we were unable to recover it. 00:37:45.878 [2024-11-18 12:06:11.661624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.878 [2024-11-18 12:06:11.661659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.878 qpair failed and we were unable to recover it. 00:37:45.878 [2024-11-18 12:06:11.661794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.878 [2024-11-18 12:06:11.661847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.878 qpair failed and we were unable to recover it. 00:37:45.878 [2024-11-18 12:06:11.661968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.878 [2024-11-18 12:06:11.662019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.878 qpair failed and we were unable to recover it. 
00:37:45.878 [2024-11-18 12:06:11.662149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.878 [2024-11-18 12:06:11.662183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.878 qpair failed and we were unable to recover it. 00:37:45.878 [2024-11-18 12:06:11.662293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.878 [2024-11-18 12:06:11.662327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.878 qpair failed and we were unable to recover it. 00:37:45.878 [2024-11-18 12:06:11.662515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.878 [2024-11-18 12:06:11.662553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.878 qpair failed and we were unable to recover it. 00:37:45.878 [2024-11-18 12:06:11.662672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.878 [2024-11-18 12:06:11.662711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.878 qpair failed and we were unable to recover it. 00:37:45.878 [2024-11-18 12:06:11.662909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.878 [2024-11-18 12:06:11.662942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.878 qpair failed and we were unable to recover it. 
00:37:45.878 [2024-11-18 12:06:11.663071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.878 [2024-11-18 12:06:11.663104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.878 qpair failed and we were unable to recover it. 00:37:45.878 [2024-11-18 12:06:11.663204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.878 [2024-11-18 12:06:11.663238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.878 qpair failed and we were unable to recover it. 00:37:45.878 [2024-11-18 12:06:11.663336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.878 [2024-11-18 12:06:11.663370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.878 qpair failed and we were unable to recover it. 00:37:45.878 [2024-11-18 12:06:11.663530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.878 [2024-11-18 12:06:11.663564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.878 qpair failed and we were unable to recover it. 00:37:45.878 [2024-11-18 12:06:11.663715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.878 [2024-11-18 12:06:11.663749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.878 qpair failed and we were unable to recover it. 
00:37:45.878 [2024-11-18 12:06:11.663917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.878 [2024-11-18 12:06:11.663952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.878 qpair failed and we were unable to recover it. 00:37:45.878 [2024-11-18 12:06:11.664121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.878 [2024-11-18 12:06:11.664155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.878 qpair failed and we were unable to recover it. 00:37:45.878 [2024-11-18 12:06:11.664284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.878 [2024-11-18 12:06:11.664318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.878 qpair failed and we were unable to recover it. 00:37:45.878 [2024-11-18 12:06:11.664507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.878 [2024-11-18 12:06:11.664542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.878 qpair failed and we were unable to recover it. 00:37:45.878 [2024-11-18 12:06:11.664718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.878 [2024-11-18 12:06:11.664755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.878 qpair failed and we were unable to recover it. 
00:37:45.878 [2024-11-18 12:06:11.664897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.878 [2024-11-18 12:06:11.664935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.878 qpair failed and we were unable to recover it. 00:37:45.878 [2024-11-18 12:06:11.665053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.878 [2024-11-18 12:06:11.665090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.878 qpair failed and we were unable to recover it. 00:37:45.878 [2024-11-18 12:06:11.665213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.878 [2024-11-18 12:06:11.665246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.878 qpair failed and we were unable to recover it. 00:37:45.878 [2024-11-18 12:06:11.665382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.878 [2024-11-18 12:06:11.665416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.878 qpair failed and we were unable to recover it. 00:37:45.878 [2024-11-18 12:06:11.665558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.878 [2024-11-18 12:06:11.665593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.878 qpair failed and we were unable to recover it. 
00:37:45.878 [2024-11-18 12:06:11.665729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.878 [2024-11-18 12:06:11.665777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.878 qpair failed and we were unable to recover it. 00:37:45.878 [2024-11-18 12:06:11.665960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.878 [2024-11-18 12:06:11.665994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.878 qpair failed and we were unable to recover it. 00:37:45.878 [2024-11-18 12:06:11.666099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.878 [2024-11-18 12:06:11.666133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.878 qpair failed and we were unable to recover it. 00:37:45.878 [2024-11-18 12:06:11.666271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.878 [2024-11-18 12:06:11.666305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.878 qpair failed and we were unable to recover it. 00:37:45.878 [2024-11-18 12:06:11.666457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.878 [2024-11-18 12:06:11.666501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.878 qpair failed and we were unable to recover it. 
00:37:45.878 [2024-11-18 12:06:11.666631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.878 [2024-11-18 12:06:11.666665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.878 qpair failed and we were unable to recover it. 00:37:45.878 [2024-11-18 12:06:11.666825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.878 [2024-11-18 12:06:11.666858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.878 qpair failed and we were unable to recover it. 00:37:45.878 [2024-11-18 12:06:11.667017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.878 [2024-11-18 12:06:11.667054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.878 qpair failed and we were unable to recover it. 00:37:45.878 [2024-11-18 12:06:11.667170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.878 [2024-11-18 12:06:11.667207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.878 qpair failed and we were unable to recover it. 00:37:45.878 [2024-11-18 12:06:11.667440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.878 [2024-11-18 12:06:11.667477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.879 qpair failed and we were unable to recover it. 
00:37:45.879 [2024-11-18 12:06:11.667620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.879 [2024-11-18 12:06:11.667654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.879 qpair failed and we were unable to recover it. 00:37:45.879 [2024-11-18 12:06:11.667820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.879 [2024-11-18 12:06:11.667869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.879 qpair failed and we were unable to recover it. 00:37:45.879 [2024-11-18 12:06:11.667998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.879 [2024-11-18 12:06:11.668036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.879 qpair failed and we were unable to recover it. 00:37:45.879 [2024-11-18 12:06:11.668190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.879 [2024-11-18 12:06:11.668224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.879 qpair failed and we were unable to recover it. 00:37:45.879 [2024-11-18 12:06:11.668360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.879 [2024-11-18 12:06:11.668393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.879 qpair failed and we were unable to recover it. 
00:37:45.879 [2024-11-18 12:06:11.668577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.879 [2024-11-18 12:06:11.668614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.879 qpair failed and we were unable to recover it. 00:37:45.879 [2024-11-18 12:06:11.668723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.879 [2024-11-18 12:06:11.668761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.879 qpair failed and we were unable to recover it. 00:37:45.879 [2024-11-18 12:06:11.668884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.879 [2024-11-18 12:06:11.668917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.879 qpair failed and we were unable to recover it. 00:37:45.879 [2024-11-18 12:06:11.669079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.879 [2024-11-18 12:06:11.669112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.879 qpair failed and we were unable to recover it. 00:37:45.879 [2024-11-18 12:06:11.669215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.879 [2024-11-18 12:06:11.669248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.879 qpair failed and we were unable to recover it. 
00:37:45.879 [2024-11-18 12:06:11.669418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.879 [2024-11-18 12:06:11.669455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.879 qpair failed and we were unable to recover it. 00:37:45.879 [2024-11-18 12:06:11.669618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.879 [2024-11-18 12:06:11.669652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.879 qpair failed and we were unable to recover it. 00:37:45.879 [2024-11-18 12:06:11.669762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.879 [2024-11-18 12:06:11.669810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.879 qpair failed and we were unable to recover it. 00:37:45.879 [2024-11-18 12:06:11.669960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.879 [2024-11-18 12:06:11.669997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.879 qpair failed and we were unable to recover it. 00:37:45.879 [2024-11-18 12:06:11.670106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.879 [2024-11-18 12:06:11.670143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.879 qpair failed and we were unable to recover it. 
00:37:45.879 [2024-11-18 12:06:11.670288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.879 [2024-11-18 12:06:11.670322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.879 qpair failed and we were unable to recover it. 00:37:45.879 [2024-11-18 12:06:11.670460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.879 [2024-11-18 12:06:11.670500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.879 qpair failed and we were unable to recover it. 00:37:45.879 [2024-11-18 12:06:11.670608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.879 [2024-11-18 12:06:11.670642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.879 qpair failed and we were unable to recover it. 00:37:45.879 [2024-11-18 12:06:11.670799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.879 [2024-11-18 12:06:11.670836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.879 qpair failed and we were unable to recover it. 00:37:45.879 [2024-11-18 12:06:11.670989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.879 [2024-11-18 12:06:11.671023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.879 qpair failed and we were unable to recover it. 
00:37:45.879 [2024-11-18 12:06:11.671158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.879 [2024-11-18 12:06:11.671210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.879 qpair failed and we were unable to recover it. 00:37:45.879 [2024-11-18 12:06:11.671364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.879 [2024-11-18 12:06:11.671401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.879 qpair failed and we were unable to recover it. 00:37:45.879 [2024-11-18 12:06:11.671549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.879 [2024-11-18 12:06:11.671587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.879 qpair failed and we were unable to recover it. 00:37:45.879 [2024-11-18 12:06:11.671749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.879 [2024-11-18 12:06:11.671783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.879 qpair failed and we were unable to recover it. 00:37:45.879 [2024-11-18 12:06:11.671946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.879 [2024-11-18 12:06:11.671996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.879 qpair failed and we were unable to recover it. 
00:37:45.879 [2024-11-18 12:06:11.672140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.879 [2024-11-18 12:06:11.672177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.879 qpair failed and we were unable to recover it. 00:37:45.879 [2024-11-18 12:06:11.672295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.879 [2024-11-18 12:06:11.672332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.879 qpair failed and we were unable to recover it. 00:37:45.879 [2024-11-18 12:06:11.672480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.879 [2024-11-18 12:06:11.672529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.879 qpair failed and we were unable to recover it. 00:37:45.879 [2024-11-18 12:06:11.672661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.879 [2024-11-18 12:06:11.672695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.879 qpair failed and we were unable to recover it. 00:37:45.879 [2024-11-18 12:06:11.672849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.879 [2024-11-18 12:06:11.672886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.879 qpair failed and we were unable to recover it. 
00:37:45.882 [2024-11-18 12:06:11.692470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.882 [2024-11-18 12:06:11.692513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.882 qpair failed and we were unable to recover it. 00:37:45.882 [2024-11-18 12:06:11.692677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.882 [2024-11-18 12:06:11.692711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.882 qpair failed and we were unable to recover it. 00:37:45.882 [2024-11-18 12:06:11.692810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.882 [2024-11-18 12:06:11.692844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.882 qpair failed and we were unable to recover it. 00:37:45.882 [2024-11-18 12:06:11.692956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.882 [2024-11-18 12:06:11.692989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.882 qpair failed and we were unable to recover it. 00:37:45.882 [2024-11-18 12:06:11.693110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.882 [2024-11-18 12:06:11.693144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.882 qpair failed and we were unable to recover it. 
00:37:45.882 [2024-11-18 12:06:11.693286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.882 [2024-11-18 12:06:11.693322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.882 qpair failed and we were unable to recover it. 00:37:45.882 [2024-11-18 12:06:11.693510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.882 [2024-11-18 12:06:11.693548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.882 qpair failed and we were unable to recover it. 00:37:45.882 [2024-11-18 12:06:11.693730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.882 [2024-11-18 12:06:11.693764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.882 qpair failed and we were unable to recover it. 00:37:45.882 [2024-11-18 12:06:11.693945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.882 [2024-11-18 12:06:11.693982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.882 qpair failed and we were unable to recover it. 00:37:45.882 [2024-11-18 12:06:11.694102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.882 [2024-11-18 12:06:11.694153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.882 qpair failed and we were unable to recover it. 
00:37:45.882 [2024-11-18 12:06:11.694293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.882 [2024-11-18 12:06:11.694332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.882 qpair failed and we were unable to recover it. 00:37:45.882 [2024-11-18 12:06:11.694466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.882 [2024-11-18 12:06:11.694507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.882 qpair failed and we were unable to recover it. 00:37:45.882 [2024-11-18 12:06:11.694624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.882 [2024-11-18 12:06:11.694658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.882 qpair failed and we were unable to recover it. 00:37:45.882 [2024-11-18 12:06:11.694766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.882 [2024-11-18 12:06:11.694807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.882 qpair failed and we were unable to recover it. 00:37:45.882 [2024-11-18 12:06:11.694907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.882 [2024-11-18 12:06:11.694941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.882 qpair failed and we were unable to recover it. 
00:37:45.882 [2024-11-18 12:06:11.695075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.882 [2024-11-18 12:06:11.695109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.882 qpair failed and we were unable to recover it. 00:37:45.882 [2024-11-18 12:06:11.695210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.882 [2024-11-18 12:06:11.695244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.882 qpair failed and we were unable to recover it. 00:37:45.882 [2024-11-18 12:06:11.695376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.882 [2024-11-18 12:06:11.695412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.882 qpair failed and we were unable to recover it. 00:37:45.882 [2024-11-18 12:06:11.695554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.882 [2024-11-18 12:06:11.695588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.882 qpair failed and we were unable to recover it. 00:37:45.882 [2024-11-18 12:06:11.695694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.882 [2024-11-18 12:06:11.695727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.882 qpair failed and we were unable to recover it. 
00:37:45.882 [2024-11-18 12:06:11.695845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.882 [2024-11-18 12:06:11.695878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.882 qpair failed and we were unable to recover it. 00:37:45.882 [2024-11-18 12:06:11.696002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.882 [2024-11-18 12:06:11.696035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.882 qpair failed and we were unable to recover it. 00:37:45.882 [2024-11-18 12:06:11.696147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.882 [2024-11-18 12:06:11.696181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.882 qpair failed and we were unable to recover it. 00:37:45.882 [2024-11-18 12:06:11.696313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.883 [2024-11-18 12:06:11.696347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.883 qpair failed and we were unable to recover it. 00:37:45.883 [2024-11-18 12:06:11.696466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.883 [2024-11-18 12:06:11.696505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.883 qpair failed and we were unable to recover it. 
00:37:46.166 [2024-11-18 12:06:11.696650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.166 [2024-11-18 12:06:11.696683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.166 qpair failed and we were unable to recover it. 00:37:46.166 [2024-11-18 12:06:11.696789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.166 [2024-11-18 12:06:11.696822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.166 qpair failed and we were unable to recover it. 00:37:46.166 [2024-11-18 12:06:11.696961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.166 [2024-11-18 12:06:11.696994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.166 qpair failed and we were unable to recover it. 00:37:46.166 [2024-11-18 12:06:11.697102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.166 [2024-11-18 12:06:11.697135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.166 qpair failed and we were unable to recover it. 00:37:46.166 [2024-11-18 12:06:11.697237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.166 [2024-11-18 12:06:11.697270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.166 qpair failed and we were unable to recover it. 
00:37:46.166 [2024-11-18 12:06:11.697389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.166 [2024-11-18 12:06:11.697427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.166 qpair failed and we were unable to recover it. 00:37:46.166 [2024-11-18 12:06:11.697560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.166 [2024-11-18 12:06:11.697594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.166 qpair failed and we were unable to recover it. 00:37:46.166 [2024-11-18 12:06:11.697768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.166 [2024-11-18 12:06:11.697822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.166 qpair failed and we were unable to recover it. 00:37:46.166 [2024-11-18 12:06:11.698014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.166 [2024-11-18 12:06:11.698047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.166 qpair failed and we were unable to recover it. 00:37:46.166 [2024-11-18 12:06:11.698153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.166 [2024-11-18 12:06:11.698186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.166 qpair failed and we were unable to recover it. 
00:37:46.166 [2024-11-18 12:06:11.698290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.166 [2024-11-18 12:06:11.698325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.166 qpair failed and we were unable to recover it. 00:37:46.166 [2024-11-18 12:06:11.698448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.166 [2024-11-18 12:06:11.698482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.166 qpair failed and we were unable to recover it. 00:37:46.166 [2024-11-18 12:06:11.698617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.166 [2024-11-18 12:06:11.698652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.166 qpair failed and we were unable to recover it. 00:37:46.166 [2024-11-18 12:06:11.698817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.166 [2024-11-18 12:06:11.698854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.166 qpair failed and we were unable to recover it. 00:37:46.166 [2024-11-18 12:06:11.698977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.166 [2024-11-18 12:06:11.699010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.166 qpair failed and we were unable to recover it. 
00:37:46.167 [2024-11-18 12:06:11.699167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.167 [2024-11-18 12:06:11.699201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.167 qpair failed and we were unable to recover it. 00:37:46.167 [2024-11-18 12:06:11.699303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.167 [2024-11-18 12:06:11.699336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.167 qpair failed and we were unable to recover it. 00:37:46.167 [2024-11-18 12:06:11.699448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.167 [2024-11-18 12:06:11.699482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.167 qpair failed and we were unable to recover it. 00:37:46.167 [2024-11-18 12:06:11.699600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.167 [2024-11-18 12:06:11.699634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.167 qpair failed and we were unable to recover it. 00:37:46.167 [2024-11-18 12:06:11.699735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.167 [2024-11-18 12:06:11.699775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.167 qpair failed and we were unable to recover it. 
00:37:46.167 [2024-11-18 12:06:11.699929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.167 [2024-11-18 12:06:11.699967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.167 qpair failed and we were unable to recover it. 00:37:46.167 [2024-11-18 12:06:11.700114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.167 [2024-11-18 12:06:11.700152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.167 qpair failed and we were unable to recover it. 00:37:46.167 [2024-11-18 12:06:11.700263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.167 [2024-11-18 12:06:11.700300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.167 qpair failed and we were unable to recover it. 00:37:46.167 [2024-11-18 12:06:11.700445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.167 [2024-11-18 12:06:11.700483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.167 qpair failed and we were unable to recover it. 00:37:46.167 [2024-11-18 12:06:11.700657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.167 [2024-11-18 12:06:11.700691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.167 qpair failed and we were unable to recover it. 
00:37:46.167 [2024-11-18 12:06:11.700825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.167 [2024-11-18 12:06:11.700863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.167 qpair failed and we were unable to recover it. 00:37:46.167 [2024-11-18 12:06:11.701024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.167 [2024-11-18 12:06:11.701059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.167 qpair failed and we were unable to recover it. 00:37:46.167 [2024-11-18 12:06:11.701224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.167 [2024-11-18 12:06:11.701257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.167 qpair failed and we were unable to recover it. 00:37:46.167 [2024-11-18 12:06:11.701392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.167 [2024-11-18 12:06:11.701426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.167 qpair failed and we were unable to recover it. 00:37:46.167 [2024-11-18 12:06:11.701565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.167 [2024-11-18 12:06:11.701600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.167 qpair failed and we were unable to recover it. 
00:37:46.167 [2024-11-18 12:06:11.701716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.167 [2024-11-18 12:06:11.701750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.167 qpair failed and we were unable to recover it. 00:37:46.167 [2024-11-18 12:06:11.701875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.167 [2024-11-18 12:06:11.701909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.167 qpair failed and we were unable to recover it. 00:37:46.167 [2024-11-18 12:06:11.702013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.167 [2024-11-18 12:06:11.702066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.167 qpair failed and we were unable to recover it. 00:37:46.167 [2024-11-18 12:06:11.702192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.167 [2024-11-18 12:06:11.702228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.167 qpair failed and we were unable to recover it. 00:37:46.167 [2024-11-18 12:06:11.702389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.167 [2024-11-18 12:06:11.702423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.167 qpair failed and we were unable to recover it. 
00:37:46.167 [2024-11-18 12:06:11.702598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.167 [2024-11-18 12:06:11.702632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.167 qpair failed and we were unable to recover it. 00:37:46.167 [2024-11-18 12:06:11.702764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.167 [2024-11-18 12:06:11.702797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.167 qpair failed and we were unable to recover it. 00:37:46.167 [2024-11-18 12:06:11.702989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.167 [2024-11-18 12:06:11.703032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.167 qpair failed and we were unable to recover it. 00:37:46.167 [2024-11-18 12:06:11.703193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.167 [2024-11-18 12:06:11.703230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.167 qpair failed and we were unable to recover it. 00:37:46.167 [2024-11-18 12:06:11.703393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.167 [2024-11-18 12:06:11.703427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.167 qpair failed and we were unable to recover it. 
00:37:46.167 [2024-11-18 12:06:11.703570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.167 [2024-11-18 12:06:11.703603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.167 qpair failed and we were unable to recover it. 00:37:46.167 [2024-11-18 12:06:11.703731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.167 [2024-11-18 12:06:11.703766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.167 qpair failed and we were unable to recover it. 00:37:46.167 [2024-11-18 12:06:11.703906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.167 [2024-11-18 12:06:11.703939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.167 qpair failed and we were unable to recover it. 00:37:46.167 [2024-11-18 12:06:11.704043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.167 [2024-11-18 12:06:11.704076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.167 qpair failed and we were unable to recover it. 00:37:46.167 [2024-11-18 12:06:11.704213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.167 [2024-11-18 12:06:11.704246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.167 qpair failed and we were unable to recover it. 
00:37:46.167 [2024-11-18 12:06:11.704397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.167 [2024-11-18 12:06:11.704430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.167 qpair failed and we were unable to recover it. 00:37:46.167 [2024-11-18 12:06:11.704560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.167 [2024-11-18 12:06:11.704594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.167 qpair failed and we were unable to recover it. 00:37:46.167 [2024-11-18 12:06:11.704722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.167 [2024-11-18 12:06:11.704755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.167 qpair failed and we were unable to recover it. 00:37:46.167 [2024-11-18 12:06:11.704894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.167 [2024-11-18 12:06:11.704931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.167 qpair failed and we were unable to recover it. 00:37:46.167 [2024-11-18 12:06:11.705076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.167 [2024-11-18 12:06:11.705113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.167 qpair failed and we were unable to recover it. 
00:37:46.167 [2024-11-18 12:06:11.705251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.167 [2024-11-18 12:06:11.705304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.167 qpair failed and we were unable to recover it. 00:37:46.167 [2024-11-18 12:06:11.705444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.167 [2024-11-18 12:06:11.705481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.167 qpair failed and we were unable to recover it. 00:37:46.167 [2024-11-18 12:06:11.705661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.168 [2024-11-18 12:06:11.705695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.168 qpair failed and we were unable to recover it. 00:37:46.168 [2024-11-18 12:06:11.705831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.168 [2024-11-18 12:06:11.705865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.168 qpair failed and we were unable to recover it. 00:37:46.168 [2024-11-18 12:06:11.706000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.168 [2024-11-18 12:06:11.706034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.168 qpair failed and we were unable to recover it. 
00:37:46.168 [2024-11-18 12:06:11.706200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.168 [2024-11-18 12:06:11.706234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.168 qpair failed and we were unable to recover it. 00:37:46.168 [2024-11-18 12:06:11.706339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.168 [2024-11-18 12:06:11.706372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.168 qpair failed and we were unable to recover it. 00:37:46.168 [2024-11-18 12:06:11.706576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.168 [2024-11-18 12:06:11.706611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.168 qpair failed and we were unable to recover it. 00:37:46.168 [2024-11-18 12:06:11.706750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.168 [2024-11-18 12:06:11.706783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.168 qpair failed and we were unable to recover it. 00:37:46.168 [2024-11-18 12:06:11.706918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.168 [2024-11-18 12:06:11.706971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.168 qpair failed and we were unable to recover it. 
00:37:46.168 [2024-11-18 12:06:11.707141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.168 [2024-11-18 12:06:11.707178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.168 qpair failed and we were unable to recover it.
00:37:46.168 [2024-11-18 12:06:11.707334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.168 [2024-11-18 12:06:11.707371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.168 qpair failed and we were unable to recover it.
00:37:46.168 [2024-11-18 12:06:11.707503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.168 [2024-11-18 12:06:11.707547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.168 qpair failed and we were unable to recover it.
00:37:46.168 [2024-11-18 12:06:11.707676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.168 [2024-11-18 12:06:11.707710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.168 qpair failed and we were unable to recover it.
00:37:46.168 [2024-11-18 12:06:11.707879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.168 [2024-11-18 12:06:11.707913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.168 qpair failed and we were unable to recover it.
00:37:46.168 [2024-11-18 12:06:11.708069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.168 [2024-11-18 12:06:11.708107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.168 qpair failed and we were unable to recover it.
00:37:46.168 [2024-11-18 12:06:11.708309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.168 [2024-11-18 12:06:11.708346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.168 qpair failed and we were unable to recover it.
00:37:46.168 [2024-11-18 12:06:11.708488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.168 [2024-11-18 12:06:11.708546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.168 qpair failed and we were unable to recover it.
00:37:46.168 [2024-11-18 12:06:11.708686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.168 [2024-11-18 12:06:11.708720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.168 qpair failed and we were unable to recover it.
00:37:46.168 [2024-11-18 12:06:11.708837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.168 [2024-11-18 12:06:11.708870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.168 qpair failed and we were unable to recover it.
00:37:46.168 [2024-11-18 12:06:11.708980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.168 [2024-11-18 12:06:11.709014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.168 qpair failed and we were unable to recover it.
00:37:46.168 [2024-11-18 12:06:11.709189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.168 [2024-11-18 12:06:11.709224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.168 qpair failed and we were unable to recover it.
00:37:46.168 [2024-11-18 12:06:11.709401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.168 [2024-11-18 12:06:11.709434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.168 qpair failed and we were unable to recover it.
00:37:46.168 [2024-11-18 12:06:11.709562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.168 [2024-11-18 12:06:11.709596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.168 qpair failed and we were unable to recover it.
00:37:46.168 [2024-11-18 12:06:11.709756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.168 [2024-11-18 12:06:11.709790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.168 qpair failed and we were unable to recover it.
00:37:46.168 [2024-11-18 12:06:11.709897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.168 [2024-11-18 12:06:11.709930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.168 qpair failed and we were unable to recover it.
00:37:46.168 [2024-11-18 12:06:11.710028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.168 [2024-11-18 12:06:11.710061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.168 qpair failed and we were unable to recover it.
00:37:46.168 [2024-11-18 12:06:11.710217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.168 [2024-11-18 12:06:11.710255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.168 qpair failed and we were unable to recover it.
00:37:46.168 [2024-11-18 12:06:11.710460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.168 [2024-11-18 12:06:11.710524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.168 qpair failed and we were unable to recover it.
00:37:46.168 [2024-11-18 12:06:11.710685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.168 [2024-11-18 12:06:11.710718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.168 qpair failed and we were unable to recover it.
00:37:46.168 [2024-11-18 12:06:11.710895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.168 [2024-11-18 12:06:11.710961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.168 qpair failed and we were unable to recover it.
00:37:46.168 [2024-11-18 12:06:11.711111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.168 [2024-11-18 12:06:11.711149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.168 qpair failed and we were unable to recover it.
00:37:46.168 [2024-11-18 12:06:11.711332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.168 [2024-11-18 12:06:11.711366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.168 qpair failed and we were unable to recover it.
00:37:46.168 [2024-11-18 12:06:11.711534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.168 [2024-11-18 12:06:11.711573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.168 qpair failed and we were unable to recover it.
00:37:46.168 [2024-11-18 12:06:11.711698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.168 [2024-11-18 12:06:11.711731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.168 qpair failed and we were unable to recover it.
00:37:46.168 [2024-11-18 12:06:11.711873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.168 [2024-11-18 12:06:11.711907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.168 qpair failed and we were unable to recover it.
00:37:46.168 [2024-11-18 12:06:11.712072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.168 [2024-11-18 12:06:11.712106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.168 qpair failed and we were unable to recover it.
00:37:46.168 [2024-11-18 12:06:11.712211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.168 [2024-11-18 12:06:11.712244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.168 qpair failed and we were unable to recover it.
00:37:46.168 [2024-11-18 12:06:11.712372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.168 [2024-11-18 12:06:11.712405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.168 qpair failed and we were unable to recover it.
00:37:46.168 [2024-11-18 12:06:11.712578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.169 [2024-11-18 12:06:11.712617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.169 qpair failed and we were unable to recover it.
00:37:46.169 [2024-11-18 12:06:11.712753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.169 [2024-11-18 12:06:11.712786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.169 qpair failed and we were unable to recover it.
00:37:46.169 [2024-11-18 12:06:11.712917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.169 [2024-11-18 12:06:11.712951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.169 qpair failed and we were unable to recover it.
00:37:46.169 [2024-11-18 12:06:11.713094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.169 [2024-11-18 12:06:11.713128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.169 qpair failed and we were unable to recover it.
00:37:46.169 [2024-11-18 12:06:11.713289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.169 [2024-11-18 12:06:11.713326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.169 qpair failed and we were unable to recover it.
00:37:46.169 [2024-11-18 12:06:11.713457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.169 [2024-11-18 12:06:11.713498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.169 qpair failed and we were unable to recover it.
00:37:46.169 [2024-11-18 12:06:11.713612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.169 [2024-11-18 12:06:11.713646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.169 qpair failed and we were unable to recover it.
00:37:46.169 [2024-11-18 12:06:11.713785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.169 [2024-11-18 12:06:11.713818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.169 qpair failed and we were unable to recover it.
00:37:46.169 [2024-11-18 12:06:11.713950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.169 [2024-11-18 12:06:11.713987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.169 qpair failed and we were unable to recover it.
00:37:46.169 [2024-11-18 12:06:11.714113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.169 [2024-11-18 12:06:11.714148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.169 qpair failed and we were unable to recover it.
00:37:46.169 [2024-11-18 12:06:11.714255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.169 [2024-11-18 12:06:11.714288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.169 qpair failed and we were unable to recover it.
00:37:46.169 [2024-11-18 12:06:11.714429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.169 [2024-11-18 12:06:11.714462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.169 qpair failed and we were unable to recover it.
00:37:46.169 [2024-11-18 12:06:11.714544] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2780 (9): Bad file descriptor
00:37:46.169 [2024-11-18 12:06:11.714707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.169 [2024-11-18 12:06:11.714755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.169 qpair failed and we were unable to recover it.
00:37:46.169 [2024-11-18 12:06:11.714878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.169 [2024-11-18 12:06:11.714914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.169 qpair failed and we were unable to recover it.
00:37:46.169 [2024-11-18 12:06:11.715100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.169 [2024-11-18 12:06:11.715153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.169 qpair failed and we were unable to recover it.
00:37:46.169 [2024-11-18 12:06:11.715312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.169 [2024-11-18 12:06:11.715350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.169 qpair failed and we were unable to recover it.
00:37:46.169 [2024-11-18 12:06:11.715520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.169 [2024-11-18 12:06:11.715585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.169 qpair failed and we were unable to recover it.
00:37:46.169 [2024-11-18 12:06:11.715721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.169 [2024-11-18 12:06:11.715754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.169 qpair failed and we were unable to recover it.
00:37:46.169 [2024-11-18 12:06:11.715942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.169 [2024-11-18 12:06:11.715980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.169 qpair failed and we were unable to recover it.
00:37:46.169 [2024-11-18 12:06:11.716105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.169 [2024-11-18 12:06:11.716143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.169 qpair failed and we were unable to recover it.
00:37:46.169 [2024-11-18 12:06:11.716282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.169 [2024-11-18 12:06:11.716335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.169 qpair failed and we were unable to recover it.
00:37:46.169 [2024-11-18 12:06:11.716478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.169 [2024-11-18 12:06:11.716529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.169 qpair failed and we were unable to recover it.
00:37:46.169 [2024-11-18 12:06:11.716665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.169 [2024-11-18 12:06:11.716700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.169 qpair failed and we were unable to recover it.
00:37:46.169 [2024-11-18 12:06:11.716830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.169 [2024-11-18 12:06:11.716882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.169 qpair failed and we were unable to recover it.
00:37:46.169 [2024-11-18 12:06:11.717009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.169 [2024-11-18 12:06:11.717044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.169 qpair failed and we were unable to recover it.
00:37:46.169 [2024-11-18 12:06:11.717174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.169 [2024-11-18 12:06:11.717208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.169 qpair failed and we were unable to recover it.
00:37:46.169 [2024-11-18 12:06:11.717332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.169 [2024-11-18 12:06:11.717365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.169 qpair failed and we were unable to recover it.
00:37:46.169 [2024-11-18 12:06:11.717474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.169 [2024-11-18 12:06:11.717515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.169 qpair failed and we were unable to recover it.
00:37:46.169 [2024-11-18 12:06:11.717657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.169 [2024-11-18 12:06:11.717690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.169 qpair failed and we were unable to recover it.
00:37:46.169 [2024-11-18 12:06:11.717815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.169 [2024-11-18 12:06:11.717856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.169 qpair failed and we were unable to recover it.
00:37:46.169 [2024-11-18 12:06:11.718036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.169 [2024-11-18 12:06:11.718073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.169 qpair failed and we were unable to recover it.
00:37:46.169 [2024-11-18 12:06:11.718219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.169 [2024-11-18 12:06:11.718257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.169 qpair failed and we were unable to recover it.
00:37:46.169 [2024-11-18 12:06:11.718410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.169 [2024-11-18 12:06:11.718443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.169 qpair failed and we were unable to recover it.
00:37:46.169 [2024-11-18 12:06:11.718598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.169 [2024-11-18 12:06:11.718632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.169 qpair failed and we were unable to recover it.
00:37:46.169 [2024-11-18 12:06:11.718738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.169 [2024-11-18 12:06:11.718788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.169 qpair failed and we were unable to recover it.
00:37:46.169 [2024-11-18 12:06:11.718998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.169 [2024-11-18 12:06:11.719035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.169 qpair failed and we were unable to recover it.
00:37:46.169 [2024-11-18 12:06:11.719176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.169 [2024-11-18 12:06:11.719213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.169 qpair failed and we were unable to recover it.
00:37:46.169 [2024-11-18 12:06:11.719363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.169 [2024-11-18 12:06:11.719399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.170 qpair failed and we were unable to recover it.
00:37:46.170 [2024-11-18 12:06:11.719529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.170 [2024-11-18 12:06:11.719571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.170 qpair failed and we were unable to recover it.
00:37:46.170 [2024-11-18 12:06:11.719701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.170 [2024-11-18 12:06:11.719735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.170 qpair failed and we were unable to recover it.
00:37:46.170 [2024-11-18 12:06:11.719906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.170 [2024-11-18 12:06:11.719943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.170 qpair failed and we were unable to recover it.
00:37:46.170 [2024-11-18 12:06:11.720082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.170 [2024-11-18 12:06:11.720118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.170 qpair failed and we were unable to recover it.
00:37:46.170 [2024-11-18 12:06:11.720262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.170 [2024-11-18 12:06:11.720299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.170 qpair failed and we were unable to recover it.
00:37:46.170 [2024-11-18 12:06:11.720433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.170 [2024-11-18 12:06:11.720467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.170 qpair failed and we were unable to recover it.
00:37:46.170 [2024-11-18 12:06:11.720603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.170 [2024-11-18 12:06:11.720637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.170 qpair failed and we were unable to recover it.
00:37:46.170 [2024-11-18 12:06:11.720808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.170 [2024-11-18 12:06:11.720864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.170 qpair failed and we were unable to recover it.
00:37:46.170 [2024-11-18 12:06:11.721027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.170 [2024-11-18 12:06:11.721066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.170 qpair failed and we were unable to recover it.
00:37:46.170 [2024-11-18 12:06:11.721269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.170 [2024-11-18 12:06:11.721316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.170 qpair failed and we were unable to recover it.
00:37:46.170 [2024-11-18 12:06:11.721447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.170 [2024-11-18 12:06:11.721480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.170 qpair failed and we were unable to recover it.
00:37:46.170 [2024-11-18 12:06:11.721632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.170 [2024-11-18 12:06:11.721666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.170 qpair failed and we were unable to recover it.
00:37:46.170 [2024-11-18 12:06:11.721793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.170 [2024-11-18 12:06:11.721830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.170 qpair failed and we were unable to recover it.
00:37:46.170 [2024-11-18 12:06:11.722062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.170 [2024-11-18 12:06:11.722100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.170 qpair failed and we were unable to recover it.
00:37:46.170 [2024-11-18 12:06:11.722261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.170 [2024-11-18 12:06:11.722298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.170 qpair failed and we were unable to recover it.
00:37:46.170 [2024-11-18 12:06:11.722438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.170 [2024-11-18 12:06:11.722488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.170 qpair failed and we were unable to recover it.
00:37:46.170 [2024-11-18 12:06:11.722661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.170 [2024-11-18 12:06:11.722698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.170 qpair failed and we were unable to recover it.
00:37:46.170 [2024-11-18 12:06:11.722883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.170 [2024-11-18 12:06:11.722936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.170 qpair failed and we were unable to recover it.
00:37:46.170 [2024-11-18 12:06:11.723126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.170 [2024-11-18 12:06:11.723179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.170 qpair failed and we were unable to recover it.
00:37:46.170 [2024-11-18 12:06:11.723325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.170 [2024-11-18 12:06:11.723386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.170 qpair failed and we were unable to recover it.
00:37:46.170 [2024-11-18 12:06:11.723516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.170 [2024-11-18 12:06:11.723566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.170 qpair failed and we were unable to recover it.
00:37:46.170 [2024-11-18 12:06:11.723714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.170 [2024-11-18 12:06:11.723759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.170 qpair failed and we were unable to recover it.
00:37:46.170 [2024-11-18 12:06:11.723928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.170 [2024-11-18 12:06:11.723965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.170 qpair failed and we were unable to recover it.
00:37:46.170 [2024-11-18 12:06:11.724102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.170 [2024-11-18 12:06:11.724139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.170 qpair failed and we were unable to recover it.
00:37:46.170 [2024-11-18 12:06:11.724270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.170 [2024-11-18 12:06:11.724324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.170 qpair failed and we were unable to recover it.
00:37:46.170 [2024-11-18 12:06:11.724459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.170 [2024-11-18 12:06:11.724503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.170 qpair failed and we were unable to recover it.
00:37:46.170 [2024-11-18 12:06:11.724654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.170 [2024-11-18 12:06:11.724688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.170 qpair failed and we were unable to recover it.
00:37:46.170 [2024-11-18 12:06:11.724920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.170 [2024-11-18 12:06:11.724971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.170 qpair failed and we were unable to recover it.
00:37:46.170 [2024-11-18 12:06:11.725135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.170 [2024-11-18 12:06:11.725233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.170 qpair failed and we were unable to recover it. 00:37:46.170 [2024-11-18 12:06:11.725337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.170 [2024-11-18 12:06:11.725372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.170 qpair failed and we were unable to recover it. 00:37:46.170 [2024-11-18 12:06:11.725547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.170 [2024-11-18 12:06:11.725586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.170 qpair failed and we were unable to recover it. 00:37:46.170 [2024-11-18 12:06:11.725732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.170 [2024-11-18 12:06:11.725781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.170 qpair failed and we were unable to recover it. 00:37:46.170 [2024-11-18 12:06:11.725905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.170 [2024-11-18 12:06:11.725941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.170 qpair failed and we were unable to recover it. 
00:37:46.170 [2024-11-18 12:06:11.726097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.170 [2024-11-18 12:06:11.726135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.170 qpair failed and we were unable to recover it. 00:37:46.170 [2024-11-18 12:06:11.726282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.170 [2024-11-18 12:06:11.726320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.170 qpair failed and we were unable to recover it. 00:37:46.170 [2024-11-18 12:06:11.726430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.170 [2024-11-18 12:06:11.726467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.170 qpair failed and we were unable to recover it. 00:37:46.170 [2024-11-18 12:06:11.726631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.170 [2024-11-18 12:06:11.726666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.170 qpair failed and we were unable to recover it. 00:37:46.170 [2024-11-18 12:06:11.726848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.171 [2024-11-18 12:06:11.726906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.171 qpair failed and we were unable to recover it. 
00:37:46.171 [2024-11-18 12:06:11.727057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.171 [2024-11-18 12:06:11.727108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.171 qpair failed and we were unable to recover it. 00:37:46.171 [2024-11-18 12:06:11.727208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.171 [2024-11-18 12:06:11.727242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.171 qpair failed and we were unable to recover it. 00:37:46.171 [2024-11-18 12:06:11.727366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.171 [2024-11-18 12:06:11.727400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.171 qpair failed and we were unable to recover it. 00:37:46.171 [2024-11-18 12:06:11.727548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.171 [2024-11-18 12:06:11.727583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.171 qpair failed and we were unable to recover it. 00:37:46.171 [2024-11-18 12:06:11.727720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.171 [2024-11-18 12:06:11.727755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.171 qpair failed and we were unable to recover it. 
00:37:46.171 [2024-11-18 12:06:11.727863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.171 [2024-11-18 12:06:11.727896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.171 qpair failed and we were unable to recover it. 00:37:46.171 [2024-11-18 12:06:11.728006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.171 [2024-11-18 12:06:11.728039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.171 qpair failed and we were unable to recover it. 00:37:46.171 [2024-11-18 12:06:11.728207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.171 [2024-11-18 12:06:11.728241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.171 qpair failed and we were unable to recover it. 00:37:46.171 [2024-11-18 12:06:11.728419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.171 [2024-11-18 12:06:11.728467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.171 qpair failed and we were unable to recover it. 00:37:46.171 [2024-11-18 12:06:11.728633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.171 [2024-11-18 12:06:11.728668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.171 qpair failed and we were unable to recover it. 
00:37:46.171 [2024-11-18 12:06:11.728811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.171 [2024-11-18 12:06:11.728849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.171 qpair failed and we were unable to recover it. 00:37:46.171 [2024-11-18 12:06:11.728991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.171 [2024-11-18 12:06:11.729029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.171 qpair failed and we were unable to recover it. 00:37:46.171 [2024-11-18 12:06:11.729191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.171 [2024-11-18 12:06:11.729229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.171 qpair failed and we were unable to recover it. 00:37:46.171 [2024-11-18 12:06:11.729349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.171 [2024-11-18 12:06:11.729383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.171 qpair failed and we were unable to recover it. 00:37:46.171 [2024-11-18 12:06:11.729497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.171 [2024-11-18 12:06:11.729542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.171 qpair failed and we were unable to recover it. 
00:37:46.171 [2024-11-18 12:06:11.729685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.171 [2024-11-18 12:06:11.729720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.171 qpair failed and we were unable to recover it. 00:37:46.171 [2024-11-18 12:06:11.729930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.171 [2024-11-18 12:06:11.729968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.171 qpair failed and we were unable to recover it. 00:37:46.171 [2024-11-18 12:06:11.730105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.171 [2024-11-18 12:06:11.730142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.171 qpair failed and we were unable to recover it. 00:37:46.171 [2024-11-18 12:06:11.730260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.171 [2024-11-18 12:06:11.730297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.171 qpair failed and we were unable to recover it. 00:37:46.171 [2024-11-18 12:06:11.730451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.171 [2024-11-18 12:06:11.730485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.171 qpair failed and we were unable to recover it. 
00:37:46.171 [2024-11-18 12:06:11.730652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.171 [2024-11-18 12:06:11.730687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.171 qpair failed and we were unable to recover it. 00:37:46.171 [2024-11-18 12:06:11.730832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.171 [2024-11-18 12:06:11.730882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.171 qpair failed and we were unable to recover it. 00:37:46.171 [2024-11-18 12:06:11.731090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.171 [2024-11-18 12:06:11.731128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.171 qpair failed and we were unable to recover it. 00:37:46.171 [2024-11-18 12:06:11.731280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.171 [2024-11-18 12:06:11.731318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.171 qpair failed and we were unable to recover it. 00:37:46.171 [2024-11-18 12:06:11.731438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.171 [2024-11-18 12:06:11.731475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.171 qpair failed and we were unable to recover it. 
00:37:46.171 [2024-11-18 12:06:11.731617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.171 [2024-11-18 12:06:11.731651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.171 qpair failed and we were unable to recover it. 00:37:46.171 [2024-11-18 12:06:11.731842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.171 [2024-11-18 12:06:11.731895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.171 qpair failed and we were unable to recover it. 00:37:46.171 [2024-11-18 12:06:11.732031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.171 [2024-11-18 12:06:11.732071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.171 qpair failed and we were unable to recover it. 00:37:46.171 [2024-11-18 12:06:11.732229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.171 [2024-11-18 12:06:11.732269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.171 qpair failed and we were unable to recover it. 00:37:46.171 [2024-11-18 12:06:11.732387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.171 [2024-11-18 12:06:11.732438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.171 qpair failed and we were unable to recover it. 
00:37:46.171 [2024-11-18 12:06:11.732590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.171 [2024-11-18 12:06:11.732624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.171 qpair failed and we were unable to recover it. 00:37:46.172 [2024-11-18 12:06:11.732778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.172 [2024-11-18 12:06:11.732826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.172 qpair failed and we were unable to recover it. 00:37:46.172 [2024-11-18 12:06:11.732984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.172 [2024-11-18 12:06:11.733024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.172 qpair failed and we were unable to recover it. 00:37:46.172 [2024-11-18 12:06:11.733191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.172 [2024-11-18 12:06:11.733253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.172 qpair failed and we were unable to recover it. 00:37:46.172 [2024-11-18 12:06:11.733394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.172 [2024-11-18 12:06:11.733429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.172 qpair failed and we were unable to recover it. 
00:37:46.172 [2024-11-18 12:06:11.733548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.172 [2024-11-18 12:06:11.733583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.172 qpair failed and we were unable to recover it. 00:37:46.172 [2024-11-18 12:06:11.733750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.172 [2024-11-18 12:06:11.733786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.172 qpair failed and we were unable to recover it. 00:37:46.172 [2024-11-18 12:06:11.733895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.172 [2024-11-18 12:06:11.733931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.172 qpair failed and we were unable to recover it. 00:37:46.172 [2024-11-18 12:06:11.734067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.172 [2024-11-18 12:06:11.734101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.172 qpair failed and we were unable to recover it. 00:37:46.172 [2024-11-18 12:06:11.734210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.172 [2024-11-18 12:06:11.734243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.172 qpair failed and we were unable to recover it. 
00:37:46.172 [2024-11-18 12:06:11.734371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.172 [2024-11-18 12:06:11.734405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.172 qpair failed and we were unable to recover it. 00:37:46.172 [2024-11-18 12:06:11.734503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.172 [2024-11-18 12:06:11.734543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.172 qpair failed and we were unable to recover it. 00:37:46.172 [2024-11-18 12:06:11.734676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.172 [2024-11-18 12:06:11.734709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.172 qpair failed and we were unable to recover it. 00:37:46.172 [2024-11-18 12:06:11.734878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.172 [2024-11-18 12:06:11.734916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.172 qpair failed and we were unable to recover it. 00:37:46.172 [2024-11-18 12:06:11.735081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.172 [2024-11-18 12:06:11.735134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.172 qpair failed and we were unable to recover it. 
00:37:46.172 [2024-11-18 12:06:11.735272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.172 [2024-11-18 12:06:11.735312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.172 qpair failed and we were unable to recover it. 00:37:46.172 [2024-11-18 12:06:11.735471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.172 [2024-11-18 12:06:11.735519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.172 qpair failed and we were unable to recover it. 00:37:46.172 [2024-11-18 12:06:11.735661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.172 [2024-11-18 12:06:11.735694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.172 qpair failed and we were unable to recover it. 00:37:46.172 [2024-11-18 12:06:11.735877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.172 [2024-11-18 12:06:11.735914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.172 qpair failed and we were unable to recover it. 00:37:46.172 [2024-11-18 12:06:11.736108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.172 [2024-11-18 12:06:11.736146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.172 qpair failed and we were unable to recover it. 
00:37:46.172 [2024-11-18 12:06:11.736319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.172 [2024-11-18 12:06:11.736356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.172 qpair failed and we were unable to recover it. 00:37:46.172 [2024-11-18 12:06:11.736509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.172 [2024-11-18 12:06:11.736561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.172 qpair failed and we were unable to recover it. 00:37:46.172 [2024-11-18 12:06:11.736690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.172 [2024-11-18 12:06:11.736724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.172 qpair failed and we were unable to recover it. 00:37:46.172 [2024-11-18 12:06:11.736845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.172 [2024-11-18 12:06:11.736882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.172 qpair failed and we were unable to recover it. 00:37:46.172 [2024-11-18 12:06:11.737023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.172 [2024-11-18 12:06:11.737061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.172 qpair failed and we were unable to recover it. 
00:37:46.172 [2024-11-18 12:06:11.737204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.172 [2024-11-18 12:06:11.737241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.172 qpair failed and we were unable to recover it. 00:37:46.172 [2024-11-18 12:06:11.737384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.172 [2024-11-18 12:06:11.737421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.172 qpair failed and we were unable to recover it. 00:37:46.172 [2024-11-18 12:06:11.737613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.172 [2024-11-18 12:06:11.737647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.172 qpair failed and we were unable to recover it. 00:37:46.172 [2024-11-18 12:06:11.737761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.172 [2024-11-18 12:06:11.737795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.172 qpair failed and we were unable to recover it. 00:37:46.172 [2024-11-18 12:06:11.737929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.172 [2024-11-18 12:06:11.737982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.172 qpair failed and we were unable to recover it. 
00:37:46.172 [2024-11-18 12:06:11.738137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.172 [2024-11-18 12:06:11.738174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.172 qpair failed and we were unable to recover it. 00:37:46.172 [2024-11-18 12:06:11.738343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.172 [2024-11-18 12:06:11.738381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.172 qpair failed and we were unable to recover it. 00:37:46.172 [2024-11-18 12:06:11.738556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.172 [2024-11-18 12:06:11.738603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.172 qpair failed and we were unable to recover it. 00:37:46.172 [2024-11-18 12:06:11.738744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.172 [2024-11-18 12:06:11.738800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.172 qpair failed and we were unable to recover it. 00:37:46.172 [2024-11-18 12:06:11.738998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.172 [2024-11-18 12:06:11.739035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.172 qpair failed and we were unable to recover it. 
00:37:46.172 [2024-11-18 12:06:11.739187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.172 [2024-11-18 12:06:11.739225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.172 qpair failed and we were unable to recover it. 00:37:46.172 [2024-11-18 12:06:11.739399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.172 [2024-11-18 12:06:11.739436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.172 qpair failed and we were unable to recover it. 00:37:46.172 [2024-11-18 12:06:11.739603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.172 [2024-11-18 12:06:11.739638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.172 qpair failed and we were unable to recover it. 00:37:46.172 [2024-11-18 12:06:11.739756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.173 [2024-11-18 12:06:11.739790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.173 qpair failed and we were unable to recover it. 00:37:46.173 [2024-11-18 12:06:11.739919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.173 [2024-11-18 12:06:11.739953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.173 qpair failed and we were unable to recover it. 
00:37:46.173 [2024-11-18 12:06:11.740081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.173 [2024-11-18 12:06:11.740119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.173 qpair failed and we were unable to recover it. 00:37:46.173 [2024-11-18 12:06:11.740260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.173 [2024-11-18 12:06:11.740297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.173 qpair failed and we were unable to recover it. 00:37:46.173 [2024-11-18 12:06:11.740440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.173 [2024-11-18 12:06:11.740477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.173 qpair failed and we were unable to recover it. 00:37:46.173 [2024-11-18 12:06:11.740695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.173 [2024-11-18 12:06:11.740750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.173 qpair failed and we were unable to recover it. 00:37:46.173 [2024-11-18 12:06:11.740935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.173 [2024-11-18 12:06:11.740992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.173 qpair failed and we were unable to recover it. 
00:37:46.173 [2024-11-18 12:06:11.741135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.173 [2024-11-18 12:06:11.741172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.173 qpair failed and we were unable to recover it.
[... the three-line error above (connect() failed, errno = 111 (ECONNREFUSED); sock connection error; qpair failed and we were unable to recover it) repeats verbatim ~115 times between 12:06:11.741 and 12:06:11.763, cycling through tqpair=0x6150001ffe80, 0x6150001f2f00, and 0x615000210000, all with addr=10.0.0.2, port=4420 ...]
00:37:46.176 [2024-11-18 12:06:11.763575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.176 [2024-11-18 12:06:11.763630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.176 qpair failed and we were unable to recover it. 00:37:46.176 [2024-11-18 12:06:11.763763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.176 [2024-11-18 12:06:11.763797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.176 qpair failed and we were unable to recover it. 00:37:46.176 [2024-11-18 12:06:11.763963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.176 [2024-11-18 12:06:11.763997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.176 qpair failed and we were unable to recover it. 00:37:46.176 [2024-11-18 12:06:11.764169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.176 [2024-11-18 12:06:11.764210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.176 qpair failed and we were unable to recover it. 00:37:46.176 [2024-11-18 12:06:11.764386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.176 [2024-11-18 12:06:11.764431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.176 qpair failed and we were unable to recover it. 
00:37:46.176 [2024-11-18 12:06:11.764599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.176 [2024-11-18 12:06:11.764651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.176 qpair failed and we were unable to recover it. 00:37:46.176 [2024-11-18 12:06:11.764773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.176 [2024-11-18 12:06:11.764821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.176 qpair failed and we were unable to recover it. 00:37:46.176 [2024-11-18 12:06:11.765010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.176 [2024-11-18 12:06:11.765052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.176 qpair failed and we were unable to recover it. 00:37:46.176 [2024-11-18 12:06:11.765192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.176 [2024-11-18 12:06:11.765226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.176 qpair failed and we were unable to recover it. 00:37:46.176 [2024-11-18 12:06:11.765364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.176 [2024-11-18 12:06:11.765399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.176 qpair failed and we were unable to recover it. 
00:37:46.176 [2024-11-18 12:06:11.765536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.176 [2024-11-18 12:06:11.765571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.176 qpair failed and we were unable to recover it. 00:37:46.176 [2024-11-18 12:06:11.765712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.176 [2024-11-18 12:06:11.765745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.176 qpair failed and we were unable to recover it. 00:37:46.176 [2024-11-18 12:06:11.765894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.176 [2024-11-18 12:06:11.765931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.176 qpair failed and we were unable to recover it. 00:37:46.176 [2024-11-18 12:06:11.766103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.176 [2024-11-18 12:06:11.766140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.176 qpair failed and we were unable to recover it. 00:37:46.176 [2024-11-18 12:06:11.766285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.176 [2024-11-18 12:06:11.766323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.176 qpair failed and we were unable to recover it. 
00:37:46.176 [2024-11-18 12:06:11.766442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.176 [2024-11-18 12:06:11.766477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.176 qpair failed and we were unable to recover it. 00:37:46.176 [2024-11-18 12:06:11.766605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.176 [2024-11-18 12:06:11.766653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.176 qpair failed and we were unable to recover it. 00:37:46.176 [2024-11-18 12:06:11.766819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.176 [2024-11-18 12:06:11.766859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.176 qpair failed and we were unable to recover it. 00:37:46.176 [2024-11-18 12:06:11.767091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.176 [2024-11-18 12:06:11.767149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.176 qpair failed and we were unable to recover it. 00:37:46.176 [2024-11-18 12:06:11.767394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.176 [2024-11-18 12:06:11.767431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.176 qpair failed and we were unable to recover it. 
00:37:46.176 [2024-11-18 12:06:11.767596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.176 [2024-11-18 12:06:11.767630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.176 qpair failed and we were unable to recover it. 00:37:46.176 [2024-11-18 12:06:11.767766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.176 [2024-11-18 12:06:11.767799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.176 qpair failed and we were unable to recover it. 00:37:46.176 [2024-11-18 12:06:11.768000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.176 [2024-11-18 12:06:11.768037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.176 qpair failed and we were unable to recover it. 00:37:46.176 [2024-11-18 12:06:11.768162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.177 [2024-11-18 12:06:11.768201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.177 qpair failed and we were unable to recover it. 00:37:46.177 [2024-11-18 12:06:11.768348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.177 [2024-11-18 12:06:11.768383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.177 qpair failed and we were unable to recover it. 
00:37:46.177 [2024-11-18 12:06:11.768517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.177 [2024-11-18 12:06:11.768566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.177 qpair failed and we were unable to recover it. 00:37:46.177 [2024-11-18 12:06:11.768686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.177 [2024-11-18 12:06:11.768723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.177 qpair failed and we were unable to recover it. 00:37:46.177 [2024-11-18 12:06:11.768913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.177 [2024-11-18 12:06:11.768966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.177 qpair failed and we were unable to recover it. 00:37:46.177 [2024-11-18 12:06:11.769185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.177 [2024-11-18 12:06:11.769242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.177 qpair failed and we were unable to recover it. 00:37:46.177 [2024-11-18 12:06:11.769371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.177 [2024-11-18 12:06:11.769405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.177 qpair failed and we were unable to recover it. 
00:37:46.177 [2024-11-18 12:06:11.769539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.177 [2024-11-18 12:06:11.769575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.177 qpair failed and we were unable to recover it. 00:37:46.177 [2024-11-18 12:06:11.769686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.177 [2024-11-18 12:06:11.769720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.177 qpair failed and we were unable to recover it. 00:37:46.177 [2024-11-18 12:06:11.769882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.177 [2024-11-18 12:06:11.769919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.177 qpair failed and we were unable to recover it. 00:37:46.177 [2024-11-18 12:06:11.770090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.177 [2024-11-18 12:06:11.770127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.177 qpair failed and we were unable to recover it. 00:37:46.177 [2024-11-18 12:06:11.770240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.177 [2024-11-18 12:06:11.770277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.177 qpair failed and we were unable to recover it. 
00:37:46.177 [2024-11-18 12:06:11.770440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.177 [2024-11-18 12:06:11.770477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.177 qpair failed and we were unable to recover it. 00:37:46.177 [2024-11-18 12:06:11.770661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.177 [2024-11-18 12:06:11.770699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.177 qpair failed and we were unable to recover it. 00:37:46.177 [2024-11-18 12:06:11.770859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.177 [2024-11-18 12:06:11.770898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.177 qpair failed and we were unable to recover it. 00:37:46.177 [2024-11-18 12:06:11.771022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.177 [2024-11-18 12:06:11.771072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.177 qpair failed and we were unable to recover it. 00:37:46.177 [2024-11-18 12:06:11.771190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.177 [2024-11-18 12:06:11.771227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.177 qpair failed and we were unable to recover it. 
00:37:46.177 [2024-11-18 12:06:11.771350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.177 [2024-11-18 12:06:11.771388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.177 qpair failed and we were unable to recover it. 00:37:46.177 [2024-11-18 12:06:11.771546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.177 [2024-11-18 12:06:11.771580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.177 qpair failed and we were unable to recover it. 00:37:46.177 [2024-11-18 12:06:11.771710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.177 [2024-11-18 12:06:11.771747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.177 qpair failed and we were unable to recover it. 00:37:46.177 [2024-11-18 12:06:11.771908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.177 [2024-11-18 12:06:11.771967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.177 qpair failed and we were unable to recover it. 00:37:46.177 [2024-11-18 12:06:11.772130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.177 [2024-11-18 12:06:11.772169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.177 qpair failed and we were unable to recover it. 
00:37:46.177 [2024-11-18 12:06:11.772328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.177 [2024-11-18 12:06:11.772370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.177 qpair failed and we were unable to recover it. 00:37:46.177 [2024-11-18 12:06:11.772554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.177 [2024-11-18 12:06:11.772590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.177 qpair failed and we were unable to recover it. 00:37:46.177 [2024-11-18 12:06:11.772718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.177 [2024-11-18 12:06:11.772775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.177 qpair failed and we were unable to recover it. 00:37:46.177 [2024-11-18 12:06:11.772925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.177 [2024-11-18 12:06:11.772980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.177 qpair failed and we were unable to recover it. 00:37:46.177 [2024-11-18 12:06:11.773149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.177 [2024-11-18 12:06:11.773184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.177 qpair failed and we were unable to recover it. 
00:37:46.177 [2024-11-18 12:06:11.773314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.177 [2024-11-18 12:06:11.773347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.177 qpair failed and we were unable to recover it. 00:37:46.177 [2024-11-18 12:06:11.773488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.177 [2024-11-18 12:06:11.773528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.177 qpair failed and we were unable to recover it. 00:37:46.177 [2024-11-18 12:06:11.773689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.177 [2024-11-18 12:06:11.773722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.177 qpair failed and we were unable to recover it. 00:37:46.177 [2024-11-18 12:06:11.773828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.177 [2024-11-18 12:06:11.773862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.177 qpair failed and we were unable to recover it. 00:37:46.177 [2024-11-18 12:06:11.774024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.177 [2024-11-18 12:06:11.774058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.177 qpair failed and we were unable to recover it. 
00:37:46.177 [2024-11-18 12:06:11.774170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.177 [2024-11-18 12:06:11.774205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.177 qpair failed and we were unable to recover it. 00:37:46.177 [2024-11-18 12:06:11.774343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.177 [2024-11-18 12:06:11.774378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.177 qpair failed and we were unable to recover it. 00:37:46.177 [2024-11-18 12:06:11.774513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.177 [2024-11-18 12:06:11.774548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.177 qpair failed and we were unable to recover it. 00:37:46.177 [2024-11-18 12:06:11.774682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.177 [2024-11-18 12:06:11.774716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.177 qpair failed and we were unable to recover it. 00:37:46.177 [2024-11-18 12:06:11.774853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.177 [2024-11-18 12:06:11.774888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.177 qpair failed and we were unable to recover it. 
00:37:46.177 [2024-11-18 12:06:11.775033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.177 [2024-11-18 12:06:11.775068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.177 qpair failed and we were unable to recover it. 00:37:46.178 [2024-11-18 12:06:11.775201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.178 [2024-11-18 12:06:11.775235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.178 qpair failed and we were unable to recover it. 00:37:46.178 [2024-11-18 12:06:11.775334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.178 [2024-11-18 12:06:11.775368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.178 qpair failed and we were unable to recover it. 00:37:46.178 [2024-11-18 12:06:11.775540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.178 [2024-11-18 12:06:11.775575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.178 qpair failed and we were unable to recover it. 00:37:46.178 [2024-11-18 12:06:11.775707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.178 [2024-11-18 12:06:11.775741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.178 qpair failed and we were unable to recover it. 
00:37:46.178 [2024-11-18 12:06:11.775875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.178 [2024-11-18 12:06:11.775909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.178 qpair failed and we were unable to recover it. 00:37:46.178 [2024-11-18 12:06:11.776016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.178 [2024-11-18 12:06:11.776051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.178 qpair failed and we were unable to recover it. 00:37:46.178 [2024-11-18 12:06:11.776199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.178 [2024-11-18 12:06:11.776234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.178 qpair failed and we were unable to recover it. 00:37:46.178 [2024-11-18 12:06:11.776334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.178 [2024-11-18 12:06:11.776368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.178 qpair failed and we were unable to recover it. 00:37:46.178 [2024-11-18 12:06:11.776544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.178 [2024-11-18 12:06:11.776592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.178 qpair failed and we were unable to recover it. 
00:37:46.178 [2024-11-18 12:06:11.776752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.178 [2024-11-18 12:06:11.776804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.178 qpair failed and we were unable to recover it. 00:37:46.178 [2024-11-18 12:06:11.776957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.178 [2024-11-18 12:06:11.777009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.178 qpair failed and we were unable to recover it. 00:37:46.178 [2024-11-18 12:06:11.777108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.178 [2024-11-18 12:06:11.777142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.178 qpair failed and we were unable to recover it. 00:37:46.178 [2024-11-18 12:06:11.777251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.178 [2024-11-18 12:06:11.777285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.178 qpair failed and we were unable to recover it. 00:37:46.178 [2024-11-18 12:06:11.777397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.178 [2024-11-18 12:06:11.777432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.178 qpair failed and we were unable to recover it. 
00:37:46.178 [2024-11-18 12:06:11.777597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.178 [2024-11-18 12:06:11.777633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.178 qpair failed and we were unable to recover it. 00:37:46.178 [2024-11-18 12:06:11.777791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.178 [2024-11-18 12:06:11.777839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.178 qpair failed and we were unable to recover it. 00:37:46.178 [2024-11-18 12:06:11.777981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.178 [2024-11-18 12:06:11.778017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.178 qpair failed and we were unable to recover it. 00:37:46.178 [2024-11-18 12:06:11.778179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.178 [2024-11-18 12:06:11.778214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.178 qpair failed and we were unable to recover it. 00:37:46.178 [2024-11-18 12:06:11.778339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.178 [2024-11-18 12:06:11.778372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.178 qpair failed and we were unable to recover it. 
00:37:46.178 [2024-11-18 12:06:11.778526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.178 [2024-11-18 12:06:11.778561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.178 qpair failed and we were unable to recover it. 00:37:46.178 [2024-11-18 12:06:11.778726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.178 [2024-11-18 12:06:11.778764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.178 qpair failed and we were unable to recover it. 00:37:46.178 [2024-11-18 12:06:11.778913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.178 [2024-11-18 12:06:11.778950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.178 qpair failed and we were unable to recover it. 00:37:46.178 [2024-11-18 12:06:11.779073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.178 [2024-11-18 12:06:11.779116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.178 qpair failed and we were unable to recover it. 00:37:46.178 [2024-11-18 12:06:11.779239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.178 [2024-11-18 12:06:11.779275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.178 qpair failed and we were unable to recover it. 
00:37:46.178 [2024-11-18 12:06:11.779406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.178 [2024-11-18 12:06:11.779441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.178 qpair failed and we were unable to recover it. 00:37:46.178 [2024-11-18 12:06:11.779589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.178 [2024-11-18 12:06:11.779623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.178 qpair failed and we were unable to recover it. 00:37:46.178 [2024-11-18 12:06:11.779751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.178 [2024-11-18 12:06:11.779803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.178 qpair failed and we were unable to recover it. 00:37:46.178 [2024-11-18 12:06:11.779975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.178 [2024-11-18 12:06:11.780011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.178 qpair failed and we were unable to recover it. 00:37:46.178 [2024-11-18 12:06:11.780164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.178 [2024-11-18 12:06:11.780201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.178 qpair failed and we were unable to recover it. 
00:37:46.178 [2024-11-18 12:06:11.780343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.178 [2024-11-18 12:06:11.780382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.178 qpair failed and we were unable to recover it. 00:37:46.178 [2024-11-18 12:06:11.780574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.178 [2024-11-18 12:06:11.780609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.178 qpair failed and we were unable to recover it. 00:37:46.178 [2024-11-18 12:06:11.780744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.178 [2024-11-18 12:06:11.780778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.178 qpair failed and we were unable to recover it. 00:37:46.178 [2024-11-18 12:06:11.780877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.178 [2024-11-18 12:06:11.780911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.178 qpair failed and we were unable to recover it. 00:37:46.178 [2024-11-18 12:06:11.781071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.178 [2024-11-18 12:06:11.781137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.178 qpair failed and we were unable to recover it. 
00:37:46.178 [2024-11-18 12:06:11.781331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.178 [2024-11-18 12:06:11.781384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.178 qpair failed and we were unable to recover it. 00:37:46.178 [2024-11-18 12:06:11.781516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.178 [2024-11-18 12:06:11.781552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.178 qpair failed and we were unable to recover it. 00:37:46.178 [2024-11-18 12:06:11.781711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.178 [2024-11-18 12:06:11.781763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.178 qpair failed and we were unable to recover it. 00:37:46.178 [2024-11-18 12:06:11.781909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.178 [2024-11-18 12:06:11.781963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.179 qpair failed and we were unable to recover it. 00:37:46.179 [2024-11-18 12:06:11.782112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.179 [2024-11-18 12:06:11.782165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.179 qpair failed and we were unable to recover it. 
00:37:46.179 [2024-11-18 12:06:11.782271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.179 [2024-11-18 12:06:11.782305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.179 qpair failed and we were unable to recover it. 00:37:46.179 [2024-11-18 12:06:11.782408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.179 [2024-11-18 12:06:11.782443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.179 qpair failed and we were unable to recover it. 00:37:46.179 [2024-11-18 12:06:11.782600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.179 [2024-11-18 12:06:11.782638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.179 qpair failed and we were unable to recover it. 00:37:46.179 [2024-11-18 12:06:11.782815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.179 [2024-11-18 12:06:11.782852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.179 qpair failed and we were unable to recover it. 00:37:46.179 [2024-11-18 12:06:11.782995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.179 [2024-11-18 12:06:11.783032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.179 qpair failed and we were unable to recover it. 
00:37:46.179 [2024-11-18 12:06:11.783179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.179 [2024-11-18 12:06:11.783217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.179 qpair failed and we were unable to recover it. 00:37:46.179 [2024-11-18 12:06:11.783343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.179 [2024-11-18 12:06:11.783377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.179 qpair failed and we were unable to recover it. 00:37:46.179 [2024-11-18 12:06:11.783504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.179 [2024-11-18 12:06:11.783539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.179 qpair failed and we were unable to recover it. 00:37:46.179 [2024-11-18 12:06:11.783650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.179 [2024-11-18 12:06:11.783683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.179 qpair failed and we were unable to recover it. 00:37:46.179 [2024-11-18 12:06:11.783837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.179 [2024-11-18 12:06:11.783874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.179 qpair failed and we were unable to recover it. 
00:37:46.179 [2024-11-18 12:06:11.783998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.179 [2024-11-18 12:06:11.784041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.179 qpair failed and we were unable to recover it. 00:37:46.179 [2024-11-18 12:06:11.784233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.179 [2024-11-18 12:06:11.784285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.179 qpair failed and we were unable to recover it. 00:37:46.179 [2024-11-18 12:06:11.784453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.179 [2024-11-18 12:06:11.784488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.179 qpair failed and we were unable to recover it. 00:37:46.179 [2024-11-18 12:06:11.784656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.179 [2024-11-18 12:06:11.784691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.179 qpair failed and we were unable to recover it. 00:37:46.179 [2024-11-18 12:06:11.784842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.179 [2024-11-18 12:06:11.784894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.179 qpair failed and we were unable to recover it. 
00:37:46.179 [2024-11-18 12:06:11.785039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.179 [2024-11-18 12:06:11.785092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.179 qpair failed and we were unable to recover it. 00:37:46.179 [2024-11-18 12:06:11.785251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.179 [2024-11-18 12:06:11.785300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.179 qpair failed and we were unable to recover it. 00:37:46.179 [2024-11-18 12:06:11.785418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.179 [2024-11-18 12:06:11.785452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.179 qpair failed and we were unable to recover it. 00:37:46.179 [2024-11-18 12:06:11.785616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.179 [2024-11-18 12:06:11.785665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.179 qpair failed and we were unable to recover it. 00:37:46.179 [2024-11-18 12:06:11.785782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.179 [2024-11-18 12:06:11.785817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.179 qpair failed and we were unable to recover it. 
00:37:46.179 [2024-11-18 12:06:11.785980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.179 [2024-11-18 12:06:11.786014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.179 qpair failed and we were unable to recover it. 00:37:46.179 [2024-11-18 12:06:11.786151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.179 [2024-11-18 12:06:11.786184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.179 qpair failed and we were unable to recover it. 00:37:46.179 [2024-11-18 12:06:11.786323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.179 [2024-11-18 12:06:11.786359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.179 qpair failed and we were unable to recover it. 00:37:46.179 [2024-11-18 12:06:11.786547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.179 [2024-11-18 12:06:11.786601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.179 qpair failed and we were unable to recover it. 00:37:46.179 [2024-11-18 12:06:11.786769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.179 [2024-11-18 12:06:11.786808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.179 qpair failed and we were unable to recover it. 
00:37:46.179 [2024-11-18 12:06:11.787035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.179 [2024-11-18 12:06:11.787097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.179 qpair failed and we were unable to recover it. 00:37:46.179 [2024-11-18 12:06:11.787278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.179 [2024-11-18 12:06:11.787315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.179 qpair failed and we were unable to recover it. 00:37:46.179 [2024-11-18 12:06:11.787423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.179 [2024-11-18 12:06:11.787460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.179 qpair failed and we were unable to recover it. 00:37:46.179 [2024-11-18 12:06:11.787597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.179 [2024-11-18 12:06:11.787631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.179 qpair failed and we were unable to recover it. 00:37:46.179 [2024-11-18 12:06:11.787778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.179 [2024-11-18 12:06:11.787816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.179 qpair failed and we were unable to recover it. 
00:37:46.179 [2024-11-18 12:06:11.787966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.179 [2024-11-18 12:06:11.788029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.179 qpair failed and we were unable to recover it. 00:37:46.179 [2024-11-18 12:06:11.788172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.179 [2024-11-18 12:06:11.788246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.179 qpair failed and we were unable to recover it. 00:37:46.179 [2024-11-18 12:06:11.788369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.179 [2024-11-18 12:06:11.788405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.179 qpair failed and we were unable to recover it. 00:37:46.179 [2024-11-18 12:06:11.788525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.179 [2024-11-18 12:06:11.788560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.179 qpair failed and we were unable to recover it. 00:37:46.179 [2024-11-18 12:06:11.788712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.179 [2024-11-18 12:06:11.788763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.179 qpair failed and we were unable to recover it. 
00:37:46.179 [2024-11-18 12:06:11.788983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.179 [2024-11-18 12:06:11.789079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.179 qpair failed and we were unable to recover it. 00:37:46.179 [2024-11-18 12:06:11.789281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.180 [2024-11-18 12:06:11.789334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.180 qpair failed and we were unable to recover it. 00:37:46.180 [2024-11-18 12:06:11.789477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.180 [2024-11-18 12:06:11.789519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.180 qpair failed and we were unable to recover it. 00:37:46.180 [2024-11-18 12:06:11.789657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.180 [2024-11-18 12:06:11.789691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.180 qpair failed and we were unable to recover it. 00:37:46.180 [2024-11-18 12:06:11.789843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.180 [2024-11-18 12:06:11.789880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.180 qpair failed and we were unable to recover it. 
00:37:46.180 [2024-11-18 12:06:11.790134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.180 [2024-11-18 12:06:11.790172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.180 qpair failed and we were unable to recover it. 00:37:46.180 [2024-11-18 12:06:11.790332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.180 [2024-11-18 12:06:11.790369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.180 qpair failed and we were unable to recover it. 00:37:46.180 [2024-11-18 12:06:11.790548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.180 [2024-11-18 12:06:11.790582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.180 qpair failed and we were unable to recover it. 00:37:46.180 [2024-11-18 12:06:11.790737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.180 [2024-11-18 12:06:11.790786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.180 qpair failed and we were unable to recover it. 00:37:46.180 [2024-11-18 12:06:11.790947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.180 [2024-11-18 12:06:11.791001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.180 qpair failed and we were unable to recover it. 
00:37:46.180 [2024-11-18 12:06:11.791135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.180 [2024-11-18 12:06:11.791191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.180 qpair failed and we were unable to recover it. 00:37:46.180 [2024-11-18 12:06:11.791323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.180 [2024-11-18 12:06:11.791358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.180 qpair failed and we were unable to recover it. 00:37:46.180 [2024-11-18 12:06:11.791499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.180 [2024-11-18 12:06:11.791534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.180 qpair failed and we were unable to recover it. 00:37:46.180 [2024-11-18 12:06:11.791667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.180 [2024-11-18 12:06:11.791701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.180 qpair failed and we were unable to recover it. 00:37:46.180 [2024-11-18 12:06:11.791839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.180 [2024-11-18 12:06:11.791874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.180 qpair failed and we were unable to recover it. 
00:37:46.180 [2024-11-18 12:06:11.792038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.180 [2024-11-18 12:06:11.792076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.180 qpair failed and we were unable to recover it. 00:37:46.180 [2024-11-18 12:06:11.792208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.180 [2024-11-18 12:06:11.792241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.180 qpair failed and we were unable to recover it. 00:37:46.180 [2024-11-18 12:06:11.792349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.180 [2024-11-18 12:06:11.792382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.180 qpair failed and we were unable to recover it. 00:37:46.180 [2024-11-18 12:06:11.792579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.180 [2024-11-18 12:06:11.792633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.180 qpair failed and we were unable to recover it. 00:37:46.180 [2024-11-18 12:06:11.792878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.180 [2024-11-18 12:06:11.792932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.180 qpair failed and we were unable to recover it. 
00:37:46.180 [2024-11-18 12:06:11.793149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.180 [2024-11-18 12:06:11.793210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.180 qpair failed and we were unable to recover it. 00:37:46.180 [2024-11-18 12:06:11.793333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.180 [2024-11-18 12:06:11.793383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.180 qpair failed and we were unable to recover it. 00:37:46.180 [2024-11-18 12:06:11.793528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.180 [2024-11-18 12:06:11.793564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.180 qpair failed and we were unable to recover it. 00:37:46.180 [2024-11-18 12:06:11.793815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.180 [2024-11-18 12:06:11.793878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.180 qpair failed and we were unable to recover it. 00:37:46.180 [2024-11-18 12:06:11.794079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.180 [2024-11-18 12:06:11.794135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.180 qpair failed and we were unable to recover it. 
00:37:46.180 [2024-11-18 12:06:11.794282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.180 [2024-11-18 12:06:11.794320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.180 qpair failed and we were unable to recover it. 00:37:46.180 [2024-11-18 12:06:11.794452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.180 [2024-11-18 12:06:11.794487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.180 qpair failed and we were unable to recover it. 00:37:46.180 [2024-11-18 12:06:11.794662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.180 [2024-11-18 12:06:11.794696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.180 qpair failed and we were unable to recover it. 00:37:46.180 [2024-11-18 12:06:11.794824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.180 [2024-11-18 12:06:11.794898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.180 qpair failed and we were unable to recover it. 00:37:46.180 [2024-11-18 12:06:11.795157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.180 [2024-11-18 12:06:11.795197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.180 qpair failed and we were unable to recover it. 
00:37:46.180 [2024-11-18 12:06:11.795323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.180 [2024-11-18 12:06:11.795361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.180 qpair failed and we were unable to recover it. 00:37:46.180 [2024-11-18 12:06:11.795514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.180 [2024-11-18 12:06:11.795567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.180 qpair failed and we were unable to recover it. 00:37:46.180 [2024-11-18 12:06:11.795719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.180 [2024-11-18 12:06:11.795767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.180 qpair failed and we were unable to recover it. 00:37:46.180 [2024-11-18 12:06:11.795994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.180 [2024-11-18 12:06:11.796054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.180 qpair failed and we were unable to recover it. 00:37:46.180 [2024-11-18 12:06:11.796257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.181 [2024-11-18 12:06:11.796322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.181 qpair failed and we were unable to recover it. 
00:37:46.181 [2024-11-18 12:06:11.796488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.181 [2024-11-18 12:06:11.796548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.181 qpair failed and we were unable to recover it. 00:37:46.181 [2024-11-18 12:06:11.796702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.181 [2024-11-18 12:06:11.796749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.181 qpair failed and we were unable to recover it. 00:37:46.181 [2024-11-18 12:06:11.796913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.181 [2024-11-18 12:06:11.796952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.181 qpair failed and we were unable to recover it. 00:37:46.181 [2024-11-18 12:06:11.797156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.181 [2024-11-18 12:06:11.797223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.181 qpair failed and we were unable to recover it. 00:37:46.181 [2024-11-18 12:06:11.797384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.181 [2024-11-18 12:06:11.797418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.181 qpair failed and we were unable to recover it. 
00:37:46.181 [2024-11-18 12:06:11.797552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.181 [2024-11-18 12:06:11.797587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.181 qpair failed and we were unable to recover it. 00:37:46.181 [2024-11-18 12:06:11.797721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.181 [2024-11-18 12:06:11.797756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.181 qpair failed and we were unable to recover it. 00:37:46.181 [2024-11-18 12:06:11.798038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.181 [2024-11-18 12:06:11.798107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.181 qpair failed and we were unable to recover it. 00:37:46.181 [2024-11-18 12:06:11.798308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.181 [2024-11-18 12:06:11.798368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.181 qpair failed and we were unable to recover it. 00:37:46.181 [2024-11-18 12:06:11.798520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.181 [2024-11-18 12:06:11.798571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.181 qpair failed and we were unable to recover it. 
00:37:46.181 [2024-11-18 12:06:11.798671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.181 [2024-11-18 12:06:11.798705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.181 qpair failed and we were unable to recover it.
00:37:46.181 [2024-11-18 12:06:11.798854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.181 [2024-11-18 12:06:11.798943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.181 qpair failed and we were unable to recover it.
00:37:46.181 [2024-11-18 12:06:11.799207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.181 [2024-11-18 12:06:11.799266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.181 qpair failed and we were unable to recover it.
00:37:46.181 [2024-11-18 12:06:11.799432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.181 [2024-11-18 12:06:11.799467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.181 qpair failed and we were unable to recover it.
00:37:46.181 [2024-11-18 12:06:11.799622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.181 [2024-11-18 12:06:11.799658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.181 qpair failed and we were unable to recover it.
00:37:46.181 [2024-11-18 12:06:11.799819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.181 [2024-11-18 12:06:11.799853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.181 qpair failed and we were unable to recover it.
00:37:46.181 [2024-11-18 12:06:11.800125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.181 [2024-11-18 12:06:11.800199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.181 qpair failed and we were unable to recover it.
00:37:46.181 [2024-11-18 12:06:11.800461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.181 [2024-11-18 12:06:11.800527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.181 qpair failed and we were unable to recover it.
00:37:46.181 [2024-11-18 12:06:11.800697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.181 [2024-11-18 12:06:11.800732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.181 qpair failed and we were unable to recover it.
00:37:46.181 [2024-11-18 12:06:11.800867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.181 [2024-11-18 12:06:11.800921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.181 qpair failed and we were unable to recover it.
00:37:46.181 [2024-11-18 12:06:11.801056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.181 [2024-11-18 12:06:11.801100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.181 qpair failed and we were unable to recover it.
00:37:46.181 [2024-11-18 12:06:11.801370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.181 [2024-11-18 12:06:11.801423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.181 qpair failed and we were unable to recover it.
00:37:46.181 [2024-11-18 12:06:11.801587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.181 [2024-11-18 12:06:11.801623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.181 qpair failed and we were unable to recover it.
00:37:46.181 [2024-11-18 12:06:11.801782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.181 [2024-11-18 12:06:11.801817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.181 qpair failed and we were unable to recover it.
00:37:46.181 [2024-11-18 12:06:11.801949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.181 [2024-11-18 12:06:11.801998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.181 qpair failed and we were unable to recover it.
00:37:46.181 [2024-11-18 12:06:11.802103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.181 [2024-11-18 12:06:11.802137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.181 qpair failed and we were unable to recover it.
00:37:46.181 [2024-11-18 12:06:11.802305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.181 [2024-11-18 12:06:11.802339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.181 qpair failed and we were unable to recover it.
00:37:46.181 [2024-11-18 12:06:11.802478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.181 [2024-11-18 12:06:11.802524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.181 qpair failed and we were unable to recover it.
00:37:46.181 [2024-11-18 12:06:11.802712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.181 [2024-11-18 12:06:11.802761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.181 qpair failed and we were unable to recover it.
00:37:46.181 [2024-11-18 12:06:11.802912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.181 [2024-11-18 12:06:11.802948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.181 qpair failed and we were unable to recover it.
00:37:46.181 [2024-11-18 12:06:11.803053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.181 [2024-11-18 12:06:11.803088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.181 qpair failed and we were unable to recover it.
00:37:46.181 [2024-11-18 12:06:11.803226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.181 [2024-11-18 12:06:11.803262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.181 qpair failed and we were unable to recover it.
00:37:46.181 [2024-11-18 12:06:11.803391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.181 [2024-11-18 12:06:11.803425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.181 qpair failed and we were unable to recover it.
00:37:46.181 [2024-11-18 12:06:11.803537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.181 [2024-11-18 12:06:11.803572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.181 qpair failed and we were unable to recover it.
00:37:46.181 [2024-11-18 12:06:11.803713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.181 [2024-11-18 12:06:11.803747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.181 qpair failed and we were unable to recover it.
00:37:46.181 [2024-11-18 12:06:11.803876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.181 [2024-11-18 12:06:11.803909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.181 qpair failed and we were unable to recover it.
00:37:46.181 [2024-11-18 12:06:11.804037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.181 [2024-11-18 12:06:11.804071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.181 qpair failed and we were unable to recover it.
00:37:46.182 [2024-11-18 12:06:11.804202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.182 [2024-11-18 12:06:11.804236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.182 qpair failed and we were unable to recover it.
00:37:46.182 [2024-11-18 12:06:11.804351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.182 [2024-11-18 12:06:11.804385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.182 qpair failed and we were unable to recover it.
00:37:46.182 [2024-11-18 12:06:11.804543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.182 [2024-11-18 12:06:11.804592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.182 qpair failed and we were unable to recover it.
00:37:46.182 [2024-11-18 12:06:11.804762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.182 [2024-11-18 12:06:11.804799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.182 qpair failed and we were unable to recover it.
00:37:46.182 [2024-11-18 12:06:11.805015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.182 [2024-11-18 12:06:11.805049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.182 qpair failed and we were unable to recover it.
00:37:46.182 [2024-11-18 12:06:11.805191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.182 [2024-11-18 12:06:11.805226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.182 qpair failed and we were unable to recover it.
00:37:46.182 [2024-11-18 12:06:11.805362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.182 [2024-11-18 12:06:11.805396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.182 qpair failed and we were unable to recover it.
00:37:46.182 [2024-11-18 12:06:11.805515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.182 [2024-11-18 12:06:11.805550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.182 qpair failed and we were unable to recover it.
00:37:46.182 [2024-11-18 12:06:11.805684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.182 [2024-11-18 12:06:11.805718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.182 qpair failed and we were unable to recover it.
00:37:46.182 [2024-11-18 12:06:11.805839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.182 [2024-11-18 12:06:11.805874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.182 qpair failed and we were unable to recover it.
00:37:46.182 [2024-11-18 12:06:11.806040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.182 [2024-11-18 12:06:11.806075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.182 qpair failed and we were unable to recover it.
00:37:46.182 [2024-11-18 12:06:11.806233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.182 [2024-11-18 12:06:11.806288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.182 qpair failed and we were unable to recover it.
00:37:46.182 [2024-11-18 12:06:11.806423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.182 [2024-11-18 12:06:11.806458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.182 qpair failed and we were unable to recover it.
00:37:46.182 [2024-11-18 12:06:11.806631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.182 [2024-11-18 12:06:11.806683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.182 qpair failed and we were unable to recover it.
00:37:46.182 [2024-11-18 12:06:11.806824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.182 [2024-11-18 12:06:11.806877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.182 qpair failed and we were unable to recover it.
00:37:46.182 [2024-11-18 12:06:11.806998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.182 [2024-11-18 12:06:11.807046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.182 qpair failed and we were unable to recover it.
00:37:46.182 [2024-11-18 12:06:11.807191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.182 [2024-11-18 12:06:11.807228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.182 qpair failed and we were unable to recover it.
00:37:46.182 [2024-11-18 12:06:11.807363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.182 [2024-11-18 12:06:11.807396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.182 qpair failed and we were unable to recover it.
00:37:46.182 [2024-11-18 12:06:11.807559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.182 [2024-11-18 12:06:11.807595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.182 qpair failed and we were unable to recover it.
00:37:46.182 [2024-11-18 12:06:11.807698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.182 [2024-11-18 12:06:11.807732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.182 qpair failed and we were unable to recover it.
00:37:46.182 [2024-11-18 12:06:11.807864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.182 [2024-11-18 12:06:11.807898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.182 qpair failed and we were unable to recover it.
00:37:46.182 [2024-11-18 12:06:11.808000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.182 [2024-11-18 12:06:11.808035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.182 qpair failed and we were unable to recover it.
00:37:46.182 [2024-11-18 12:06:11.808169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.182 [2024-11-18 12:06:11.808204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.182 qpair failed and we were unable to recover it.
00:37:46.182 [2024-11-18 12:06:11.808366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.182 [2024-11-18 12:06:11.808406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.182 qpair failed and we were unable to recover it.
00:37:46.182 [2024-11-18 12:06:11.808538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.182 [2024-11-18 12:06:11.808574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.182 qpair failed and we were unable to recover it.
00:37:46.182 [2024-11-18 12:06:11.808715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.182 [2024-11-18 12:06:11.808749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.182 qpair failed and we were unable to recover it.
00:37:46.182 [2024-11-18 12:06:11.808864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.182 [2024-11-18 12:06:11.808897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.182 qpair failed and we were unable to recover it.
00:37:46.182 [2024-11-18 12:06:11.809052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.182 [2024-11-18 12:06:11.809089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.182 qpair failed and we were unable to recover it.
00:37:46.182 [2024-11-18 12:06:11.809299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.182 [2024-11-18 12:06:11.809352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.182 qpair failed and we were unable to recover it.
00:37:46.182 [2024-11-18 12:06:11.809525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.182 [2024-11-18 12:06:11.809579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.182 qpair failed and we were unable to recover it.
00:37:46.182 [2024-11-18 12:06:11.809718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.182 [2024-11-18 12:06:11.809753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.182 qpair failed and we were unable to recover it.
00:37:46.182 [2024-11-18 12:06:11.809902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.182 [2024-11-18 12:06:11.809955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.182 qpair failed and we were unable to recover it.
00:37:46.182 [2024-11-18 12:06:11.810137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.182 [2024-11-18 12:06:11.810214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.182 qpair failed and we were unable to recover it.
00:37:46.182 [2024-11-18 12:06:11.810382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.182 [2024-11-18 12:06:11.810416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.182 qpair failed and we were unable to recover it.
00:37:46.182 [2024-11-18 12:06:11.810581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.182 [2024-11-18 12:06:11.810622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.182 qpair failed and we were unable to recover it.
00:37:46.182 [2024-11-18 12:06:11.810745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.182 [2024-11-18 12:06:11.810786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.182 qpair failed and we were unable to recover it.
00:37:46.182 [2024-11-18 12:06:11.811011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.182 [2024-11-18 12:06:11.811046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.182 qpair failed and we were unable to recover it.
00:37:46.182 [2024-11-18 12:06:11.811178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.183 [2024-11-18 12:06:11.811229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.183 qpair failed and we were unable to recover it.
00:37:46.183 [2024-11-18 12:06:11.811363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.183 [2024-11-18 12:06:11.811396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.183 qpair failed and we were unable to recover it.
00:37:46.183 [2024-11-18 12:06:11.811534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.183 [2024-11-18 12:06:11.811568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.183 qpair failed and we were unable to recover it.
00:37:46.183 [2024-11-18 12:06:11.811698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.183 [2024-11-18 12:06:11.811751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.183 qpair failed and we were unable to recover it.
00:37:46.183 [2024-11-18 12:06:11.811876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.183 [2024-11-18 12:06:11.811915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.183 qpair failed and we were unable to recover it.
00:37:46.183 [2024-11-18 12:06:11.812036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.183 [2024-11-18 12:06:11.812071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.183 qpair failed and we were unable to recover it.
00:37:46.183 [2024-11-18 12:06:11.812210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.183 [2024-11-18 12:06:11.812245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.183 qpair failed and we were unable to recover it.
00:37:46.183 [2024-11-18 12:06:11.812411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.183 [2024-11-18 12:06:11.812446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.183 qpair failed and we were unable to recover it.
00:37:46.183 [2024-11-18 12:06:11.812615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.183 [2024-11-18 12:06:11.812667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.183 qpair failed and we were unable to recover it.
00:37:46.183 [2024-11-18 12:06:11.812867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.183 [2024-11-18 12:06:11.812925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.183 qpair failed and we were unable to recover it.
00:37:46.183 [2024-11-18 12:06:11.813177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.183 [2024-11-18 12:06:11.813214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.183 qpair failed and we were unable to recover it.
00:37:46.183 [2024-11-18 12:06:11.813362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.183 [2024-11-18 12:06:11.813399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.183 qpair failed and we were unable to recover it.
00:37:46.183 [2024-11-18 12:06:11.813587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.183 [2024-11-18 12:06:11.813622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.183 qpair failed and we were unable to recover it.
00:37:46.183 [2024-11-18 12:06:11.813762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.183 [2024-11-18 12:06:11.813799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.183 qpair failed and we were unable to recover it.
00:37:46.183 [2024-11-18 12:06:11.813945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.183 [2024-11-18 12:06:11.813982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.183 qpair failed and we were unable to recover it.
00:37:46.183 [2024-11-18 12:06:11.814239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.183 [2024-11-18 12:06:11.814298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.183 qpair failed and we were unable to recover it.
00:37:46.183 [2024-11-18 12:06:11.814417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.183 [2024-11-18 12:06:11.814451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.183 qpair failed and we were unable to recover it.
00:37:46.183 [2024-11-18 12:06:11.814593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.183 [2024-11-18 12:06:11.814628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.183 qpair failed and we were unable to recover it.
00:37:46.183 [2024-11-18 12:06:11.814756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.183 [2024-11-18 12:06:11.814794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.183 qpair failed and we were unable to recover it.
00:37:46.183 [2024-11-18 12:06:11.814991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.183 [2024-11-18 12:06:11.815049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.183 qpair failed and we were unable to recover it.
00:37:46.183 [2024-11-18 12:06:11.815301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.183 [2024-11-18 12:06:11.815362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.183 qpair failed and we were unable to recover it.
00:37:46.183 [2024-11-18 12:06:11.815498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.183 [2024-11-18 12:06:11.815551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.183 qpair failed and we were unable to recover it.
00:37:46.183 [2024-11-18 12:06:11.815680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.183 [2024-11-18 12:06:11.815717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.183 qpair failed and we were unable to recover it.
00:37:46.183 [2024-11-18 12:06:11.815861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.183 [2024-11-18 12:06:11.815900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.183 qpair failed and we were unable to recover it.
00:37:46.183 [2024-11-18 12:06:11.816036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.183 [2024-11-18 12:06:11.816074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.183 qpair failed and we were unable to recover it.
00:37:46.183 [2024-11-18 12:06:11.816221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.183 [2024-11-18 12:06:11.816261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.183 qpair failed and we were unable to recover it.
00:37:46.183 [2024-11-18 12:06:11.816441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.183 [2024-11-18 12:06:11.816481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.183 qpair failed and we were unable to recover it.
00:37:46.183 [2024-11-18 12:06:11.816636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.183 [2024-11-18 12:06:11.816689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.183 qpair failed and we were unable to recover it.
00:37:46.183 [2024-11-18 12:06:11.816883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.183 [2024-11-18 12:06:11.816935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.183 qpair failed and we were unable to recover it.
00:37:46.183 [2024-11-18 12:06:11.817145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.183 [2024-11-18 12:06:11.817209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.183 qpair failed and we were unable to recover it.
00:37:46.183 [2024-11-18 12:06:11.817312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.183 [2024-11-18 12:06:11.817346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.183 qpair failed and we were unable to recover it.
00:37:46.183 [2024-11-18 12:06:11.817480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.183 [2024-11-18 12:06:11.817521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.183 qpair failed and we were unable to recover it. 00:37:46.183 [2024-11-18 12:06:11.817678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.183 [2024-11-18 12:06:11.817716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.183 qpair failed and we were unable to recover it. 00:37:46.183 [2024-11-18 12:06:11.817997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.183 [2024-11-18 12:06:11.818053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.183 qpair failed and we were unable to recover it. 00:37:46.183 [2024-11-18 12:06:11.818215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.183 [2024-11-18 12:06:11.818249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.183 qpair failed and we were unable to recover it. 00:37:46.183 [2024-11-18 12:06:11.818359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.183 [2024-11-18 12:06:11.818393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.183 qpair failed and we were unable to recover it. 
00:37:46.183 [2024-11-18 12:06:11.818535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.183 [2024-11-18 12:06:11.818589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.183 qpair failed and we were unable to recover it. 00:37:46.183 [2024-11-18 12:06:11.818774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.183 [2024-11-18 12:06:11.818827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.183 qpair failed and we were unable to recover it. 00:37:46.184 [2024-11-18 12:06:11.819030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.184 [2024-11-18 12:06:11.819070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.184 qpair failed and we were unable to recover it. 00:37:46.184 [2024-11-18 12:06:11.819321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.184 [2024-11-18 12:06:11.819379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.184 qpair failed and we were unable to recover it. 00:37:46.184 [2024-11-18 12:06:11.819547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.184 [2024-11-18 12:06:11.819582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.184 qpair failed and we were unable to recover it. 
00:37:46.184 [2024-11-18 12:06:11.819690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.184 [2024-11-18 12:06:11.819743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.184 qpair failed and we were unable to recover it. 00:37:46.184 [2024-11-18 12:06:11.819883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.184 [2024-11-18 12:06:11.819920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.184 qpair failed and we were unable to recover it. 00:37:46.184 [2024-11-18 12:06:11.820073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.184 [2024-11-18 12:06:11.820111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.184 qpair failed and we were unable to recover it. 00:37:46.184 [2024-11-18 12:06:11.820286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.184 [2024-11-18 12:06:11.820323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.184 qpair failed and we were unable to recover it. 00:37:46.184 [2024-11-18 12:06:11.820482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.184 [2024-11-18 12:06:11.820543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.184 qpair failed and we were unable to recover it. 
00:37:46.184 [2024-11-18 12:06:11.820695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.184 [2024-11-18 12:06:11.820747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.184 qpair failed and we were unable to recover it. 00:37:46.184 [2024-11-18 12:06:11.820898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.184 [2024-11-18 12:06:11.820950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.184 qpair failed and we were unable to recover it. 00:37:46.184 [2024-11-18 12:06:11.821083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.184 [2024-11-18 12:06:11.821135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.184 qpair failed and we were unable to recover it. 00:37:46.184 [2024-11-18 12:06:11.821298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.184 [2024-11-18 12:06:11.821332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.184 qpair failed and we were unable to recover it. 00:37:46.184 [2024-11-18 12:06:11.821441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.184 [2024-11-18 12:06:11.821475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.184 qpair failed and we were unable to recover it. 
00:37:46.184 [2024-11-18 12:06:11.821623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.184 [2024-11-18 12:06:11.821664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.184 qpair failed and we were unable to recover it. 00:37:46.184 [2024-11-18 12:06:11.821848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.184 [2024-11-18 12:06:11.821896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.184 qpair failed and we were unable to recover it. 00:37:46.184 [2024-11-18 12:06:11.822050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.184 [2024-11-18 12:06:11.822087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.184 qpair failed and we were unable to recover it. 00:37:46.184 [2024-11-18 12:06:11.822253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.184 [2024-11-18 12:06:11.822287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.184 qpair failed and we were unable to recover it. 00:37:46.184 [2024-11-18 12:06:11.822411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.184 [2024-11-18 12:06:11.822445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.184 qpair failed and we were unable to recover it. 
00:37:46.184 [2024-11-18 12:06:11.822605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.184 [2024-11-18 12:06:11.822641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.184 qpair failed and we were unable to recover it. 00:37:46.184 [2024-11-18 12:06:11.822802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.184 [2024-11-18 12:06:11.822836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.184 qpair failed and we were unable to recover it. 00:37:46.184 [2024-11-18 12:06:11.822935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.184 [2024-11-18 12:06:11.822968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.184 qpair failed and we were unable to recover it. 00:37:46.184 [2024-11-18 12:06:11.823129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.184 [2024-11-18 12:06:11.823163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.184 qpair failed and we were unable to recover it. 00:37:46.184 [2024-11-18 12:06:11.823331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.184 [2024-11-18 12:06:11.823364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.184 qpair failed and we were unable to recover it. 
00:37:46.184 [2024-11-18 12:06:11.823473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.184 [2024-11-18 12:06:11.823514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.184 qpair failed and we were unable to recover it. 00:37:46.184 [2024-11-18 12:06:11.823656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.184 [2024-11-18 12:06:11.823714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.184 qpair failed and we were unable to recover it. 00:37:46.184 [2024-11-18 12:06:11.823875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.184 [2024-11-18 12:06:11.823935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.184 qpair failed and we were unable to recover it. 00:37:46.184 [2024-11-18 12:06:11.824137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.184 [2024-11-18 12:06:11.824187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.184 qpair failed and we were unable to recover it. 00:37:46.184 [2024-11-18 12:06:11.824320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.184 [2024-11-18 12:06:11.824354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.184 qpair failed and we were unable to recover it. 
00:37:46.184 [2024-11-18 12:06:11.824505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.184 [2024-11-18 12:06:11.824553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.184 qpair failed and we were unable to recover it. 00:37:46.184 [2024-11-18 12:06:11.824709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.184 [2024-11-18 12:06:11.824761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.184 qpair failed and we were unable to recover it. 00:37:46.184 [2024-11-18 12:06:11.824913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.184 [2024-11-18 12:06:11.824966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.184 qpair failed and we were unable to recover it. 00:37:46.184 [2024-11-18 12:06:11.825092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.184 [2024-11-18 12:06:11.825131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.184 qpair failed and we were unable to recover it. 00:37:46.184 [2024-11-18 12:06:11.825310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.184 [2024-11-18 12:06:11.825345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.184 qpair failed and we were unable to recover it. 
00:37:46.184 [2024-11-18 12:06:11.825478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.184 [2024-11-18 12:06:11.825535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.184 qpair failed and we were unable to recover it. 00:37:46.184 [2024-11-18 12:06:11.825662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.184 [2024-11-18 12:06:11.825697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.184 qpair failed and we were unable to recover it. 00:37:46.184 [2024-11-18 12:06:11.825823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.184 [2024-11-18 12:06:11.825862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.184 qpair failed and we were unable to recover it. 00:37:46.184 [2024-11-18 12:06:11.826013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.184 [2024-11-18 12:06:11.826046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.184 qpair failed and we were unable to recover it. 00:37:46.184 [2024-11-18 12:06:11.826155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.184 [2024-11-18 12:06:11.826189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.185 qpair failed and we were unable to recover it. 
00:37:46.185 [2024-11-18 12:06:11.826317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.185 [2024-11-18 12:06:11.826351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.185 qpair failed and we were unable to recover it. 00:37:46.185 [2024-11-18 12:06:11.826510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.185 [2024-11-18 12:06:11.826555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.185 qpair failed and we were unable to recover it. 00:37:46.185 [2024-11-18 12:06:11.826661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.185 [2024-11-18 12:06:11.826724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.185 qpair failed and we were unable to recover it. 00:37:46.185 [2024-11-18 12:06:11.826854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.185 [2024-11-18 12:06:11.826888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.185 qpair failed and we were unable to recover it. 00:37:46.185 [2024-11-18 12:06:11.827027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.185 [2024-11-18 12:06:11.827061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.185 qpair failed and we were unable to recover it. 
00:37:46.185 [2024-11-18 12:06:11.827224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.185 [2024-11-18 12:06:11.827263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.185 qpair failed and we were unable to recover it. 00:37:46.185 [2024-11-18 12:06:11.827395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.185 [2024-11-18 12:06:11.827430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.185 qpair failed and we were unable to recover it. 00:37:46.185 [2024-11-18 12:06:11.827584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.185 [2024-11-18 12:06:11.827618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.185 qpair failed and we were unable to recover it. 00:37:46.185 [2024-11-18 12:06:11.827722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.185 [2024-11-18 12:06:11.827767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.185 qpair failed and we were unable to recover it. 00:37:46.185 [2024-11-18 12:06:11.827928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.185 [2024-11-18 12:06:11.827962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.185 qpair failed and we were unable to recover it. 
00:37:46.185 [2024-11-18 12:06:11.828098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.185 [2024-11-18 12:06:11.828132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.185 qpair failed and we were unable to recover it. 00:37:46.185 [2024-11-18 12:06:11.828242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.185 [2024-11-18 12:06:11.828276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.185 qpair failed and we were unable to recover it. 00:37:46.185 [2024-11-18 12:06:11.828409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.185 [2024-11-18 12:06:11.828444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.185 qpair failed and we were unable to recover it. 00:37:46.185 [2024-11-18 12:06:11.828604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.185 [2024-11-18 12:06:11.828652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.185 qpair failed and we were unable to recover it. 00:37:46.185 [2024-11-18 12:06:11.828826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.185 [2024-11-18 12:06:11.828862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.185 qpair failed and we were unable to recover it. 
00:37:46.185 [2024-11-18 12:06:11.829025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.185 [2024-11-18 12:06:11.829059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.185 qpair failed and we were unable to recover it. 00:37:46.185 [2024-11-18 12:06:11.829203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.185 [2024-11-18 12:06:11.829237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.185 qpair failed and we were unable to recover it. 00:37:46.185 [2024-11-18 12:06:11.829373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.185 [2024-11-18 12:06:11.829407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.185 qpair failed and we were unable to recover it. 00:37:46.185 [2024-11-18 12:06:11.829583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.185 [2024-11-18 12:06:11.829621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.185 qpair failed and we were unable to recover it. 00:37:46.185 [2024-11-18 12:06:11.829738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.185 [2024-11-18 12:06:11.829784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.185 qpair failed and we were unable to recover it. 
00:37:46.185 [2024-11-18 12:06:11.829962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.185 [2024-11-18 12:06:11.830000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.185 qpair failed and we were unable to recover it. 00:37:46.185 [2024-11-18 12:06:11.830145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.185 [2024-11-18 12:06:11.830183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.185 qpair failed and we were unable to recover it. 00:37:46.185 [2024-11-18 12:06:11.830339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.185 [2024-11-18 12:06:11.830371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.185 qpair failed and we were unable to recover it. 00:37:46.185 [2024-11-18 12:06:11.830506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.185 [2024-11-18 12:06:11.830554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.185 qpair failed and we were unable to recover it. 00:37:46.185 [2024-11-18 12:06:11.830659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.185 [2024-11-18 12:06:11.830694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.185 qpair failed and we were unable to recover it. 
00:37:46.185 [2024-11-18 12:06:11.830813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.185 [2024-11-18 12:06:11.830851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.185 qpair failed and we were unable to recover it. 00:37:46.185 [2024-11-18 12:06:11.830988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.185 [2024-11-18 12:06:11.831064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.185 qpair failed and we were unable to recover it. 00:37:46.185 [2024-11-18 12:06:11.831218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.185 [2024-11-18 12:06:11.831288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.185 qpair failed and we were unable to recover it. 00:37:46.185 [2024-11-18 12:06:11.831439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.185 [2024-11-18 12:06:11.831472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.185 qpair failed and we were unable to recover it. 00:37:46.185 [2024-11-18 12:06:11.831644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.185 [2024-11-18 12:06:11.831678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.185 qpair failed and we were unable to recover it. 
00:37:46.185 [2024-11-18 12:06:11.831901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.185 [2024-11-18 12:06:11.831963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.185 qpair failed and we were unable to recover it. 00:37:46.185 [2024-11-18 12:06:11.832160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.186 [2024-11-18 12:06:11.832201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.186 qpair failed and we were unable to recover it. 00:37:46.186 [2024-11-18 12:06:11.832353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.186 [2024-11-18 12:06:11.832393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.186 qpair failed and we were unable to recover it. 00:37:46.186 [2024-11-18 12:06:11.832558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.186 [2024-11-18 12:06:11.832594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.186 qpair failed and we were unable to recover it. 00:37:46.186 [2024-11-18 12:06:11.832703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.186 [2024-11-18 12:06:11.832737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.186 qpair failed and we were unable to recover it. 
00:37:46.186 [2024-11-18 12:06:11.832869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.186 [2024-11-18 12:06:11.832932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.186 qpair failed and we were unable to recover it. 00:37:46.186 [2024-11-18 12:06:11.833059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.186 [2024-11-18 12:06:11.833098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.186 qpair failed and we were unable to recover it. 00:37:46.186 [2024-11-18 12:06:11.833300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.186 [2024-11-18 12:06:11.833338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.186 qpair failed and we were unable to recover it. 00:37:46.186 [2024-11-18 12:06:11.833464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.186 [2024-11-18 12:06:11.833529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.186 qpair failed and we were unable to recover it. 00:37:46.186 [2024-11-18 12:06:11.833707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.186 [2024-11-18 12:06:11.833742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.186 qpair failed and we were unable to recover it. 
00:37:46.186 [2024-11-18 12:06:11.833908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.186 [2024-11-18 12:06:11.833945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.186 qpair failed and we were unable to recover it. 00:37:46.186 [2024-11-18 12:06:11.834138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.186 [2024-11-18 12:06:11.834196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.186 qpair failed and we were unable to recover it. 00:37:46.186 [2024-11-18 12:06:11.834337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.186 [2024-11-18 12:06:11.834374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.186 qpair failed and we were unable to recover it. 00:37:46.186 [2024-11-18 12:06:11.834562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.186 [2024-11-18 12:06:11.834610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.186 qpair failed and we were unable to recover it. 00:37:46.186 [2024-11-18 12:06:11.834783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.186 [2024-11-18 12:06:11.834818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.186 qpair failed and we were unable to recover it. 
00:37:46.186 [2024-11-18 12:06:11.834944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.186 [2024-11-18 12:06:11.834983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.186 qpair failed and we were unable to recover it. 00:37:46.186 [2024-11-18 12:06:11.835157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.186 [2024-11-18 12:06:11.835195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.186 qpair failed and we were unable to recover it. 00:37:46.186 [2024-11-18 12:06:11.835330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.186 [2024-11-18 12:06:11.835383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.186 qpair failed and we were unable to recover it. 00:37:46.186 [2024-11-18 12:06:11.835548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.186 [2024-11-18 12:06:11.835582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.186 qpair failed and we were unable to recover it. 00:37:46.186 [2024-11-18 12:06:11.835719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.186 [2024-11-18 12:06:11.835756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.186 qpair failed and we were unable to recover it. 
00:37:46.186 [2024-11-18 12:06:11.835899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.186 [2024-11-18 12:06:11.835995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.186 qpair failed and we were unable to recover it.
00:37:46.186 [2024-11-18 12:06:11.836229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.186 [2024-11-18 12:06:11.836296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.186 qpair failed and we were unable to recover it.
00:37:46.186 [2024-11-18 12:06:11.836442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.186 [2024-11-18 12:06:11.836479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.186 qpair failed and we were unable to recover it.
00:37:46.186 [2024-11-18 12:06:11.836658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.186 [2024-11-18 12:06:11.836706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.186 qpair failed and we were unable to recover it.
00:37:46.186 [2024-11-18 12:06:11.836844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.186 [2024-11-18 12:06:11.836900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.186 qpair failed and we were unable to recover it.
00:37:46.186 [2024-11-18 12:06:11.837055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.186 [2024-11-18 12:06:11.837109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.186 qpair failed and we were unable to recover it.
00:37:46.186 [2024-11-18 12:06:11.837240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.186 [2024-11-18 12:06:11.837293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.186 qpair failed and we were unable to recover it.
00:37:46.186 [2024-11-18 12:06:11.837411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.186 [2024-11-18 12:06:11.837446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.186 qpair failed and we were unable to recover it.
00:37:46.186 [2024-11-18 12:06:11.837592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.186 [2024-11-18 12:06:11.837640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.186 qpair failed and we were unable to recover it.
00:37:46.186 [2024-11-18 12:06:11.837785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.186 [2024-11-18 12:06:11.837820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.186 qpair failed and we were unable to recover it.
00:37:46.186 [2024-11-18 12:06:11.837985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.186 [2024-11-18 12:06:11.838018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.186 qpair failed and we were unable to recover it.
00:37:46.186 [2024-11-18 12:06:11.838131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.186 [2024-11-18 12:06:11.838164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.186 qpair failed and we were unable to recover it.
00:37:46.186 [2024-11-18 12:06:11.838309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.186 [2024-11-18 12:06:11.838343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.186 qpair failed and we were unable to recover it.
00:37:46.186 [2024-11-18 12:06:11.838475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.186 [2024-11-18 12:06:11.838518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.186 qpair failed and we were unable to recover it.
00:37:46.186 [2024-11-18 12:06:11.838634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.186 [2024-11-18 12:06:11.838668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.186 qpair failed and we were unable to recover it.
00:37:46.186 [2024-11-18 12:06:11.838839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.186 [2024-11-18 12:06:11.838876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.186 qpair failed and we were unable to recover it.
00:37:46.186 [2024-11-18 12:06:11.838995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.186 [2024-11-18 12:06:11.839043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.186 qpair failed and we were unable to recover it.
00:37:46.186 [2024-11-18 12:06:11.839262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.186 [2024-11-18 12:06:11.839319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.186 qpair failed and we were unable to recover it.
00:37:46.186 [2024-11-18 12:06:11.839456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.186 [2024-11-18 12:06:11.839496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.187 qpair failed and we were unable to recover it.
00:37:46.187 [2024-11-18 12:06:11.839632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.187 [2024-11-18 12:06:11.839667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.187 qpair failed and we were unable to recover it.
00:37:46.187 [2024-11-18 12:06:11.839861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.187 [2024-11-18 12:06:11.839939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.187 qpair failed and we were unable to recover it.
00:37:46.187 [2024-11-18 12:06:11.840070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.187 [2024-11-18 12:06:11.840104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.187 qpair failed and we were unable to recover it.
00:37:46.187 [2024-11-18 12:06:11.840231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.187 [2024-11-18 12:06:11.840265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.187 qpair failed and we were unable to recover it.
00:37:46.187 [2024-11-18 12:06:11.840405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.187 [2024-11-18 12:06:11.840444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.187 qpair failed and we were unable to recover it.
00:37:46.187 [2024-11-18 12:06:11.840567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.187 [2024-11-18 12:06:11.840602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.187 qpair failed and we were unable to recover it.
00:37:46.187 [2024-11-18 12:06:11.840772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.187 [2024-11-18 12:06:11.840808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.187 qpair failed and we were unable to recover it.
00:37:46.187 [2024-11-18 12:06:11.840971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.187 [2024-11-18 12:06:11.841008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.187 qpair failed and we were unable to recover it.
00:37:46.187 [2024-11-18 12:06:11.841178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.187 [2024-11-18 12:06:11.841215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.187 qpair failed and we were unable to recover it.
00:37:46.187 [2024-11-18 12:06:11.841392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.187 [2024-11-18 12:06:11.841457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.187 qpair failed and we were unable to recover it.
00:37:46.187 [2024-11-18 12:06:11.841611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.187 [2024-11-18 12:06:11.841646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.187 qpair failed and we were unable to recover it.
00:37:46.187 [2024-11-18 12:06:11.841793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.187 [2024-11-18 12:06:11.841827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.187 qpair failed and we were unable to recover it.
00:37:46.187 [2024-11-18 12:06:11.842091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.187 [2024-11-18 12:06:11.842126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.187 qpair failed and we were unable to recover it.
00:37:46.187 [2024-11-18 12:06:11.842274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.187 [2024-11-18 12:06:11.842328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.187 qpair failed and we were unable to recover it.
00:37:46.187 [2024-11-18 12:06:11.842486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.187 [2024-11-18 12:06:11.842527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.187 qpair failed and we were unable to recover it.
00:37:46.187 [2024-11-18 12:06:11.842657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.187 [2024-11-18 12:06:11.842698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.187 qpair failed and we were unable to recover it.
00:37:46.187 [2024-11-18 12:06:11.842870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.187 [2024-11-18 12:06:11.842923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.187 qpair failed and we were unable to recover it.
00:37:46.187 [2024-11-18 12:06:11.843076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.187 [2024-11-18 12:06:11.843115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.187 qpair failed and we were unable to recover it.
00:37:46.187 [2024-11-18 12:06:11.843295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.187 [2024-11-18 12:06:11.843363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.187 qpair failed and we were unable to recover it.
00:37:46.187 [2024-11-18 12:06:11.843501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.187 [2024-11-18 12:06:11.843536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.187 qpair failed and we were unable to recover it.
00:37:46.187 [2024-11-18 12:06:11.843680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.187 [2024-11-18 12:06:11.843714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.187 qpair failed and we were unable to recover it.
00:37:46.187 [2024-11-18 12:06:11.843878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.187 [2024-11-18 12:06:11.843916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.187 qpair failed and we were unable to recover it.
00:37:46.187 [2024-11-18 12:06:11.844092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.187 [2024-11-18 12:06:11.844129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.187 qpair failed and we were unable to recover it.
00:37:46.187 [2024-11-18 12:06:11.844272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.187 [2024-11-18 12:06:11.844309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.187 qpair failed and we were unable to recover it.
00:37:46.187 [2024-11-18 12:06:11.844474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.187 [2024-11-18 12:06:11.844516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.187 qpair failed and we were unable to recover it.
00:37:46.187 [2024-11-18 12:06:11.844631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.187 [2024-11-18 12:06:11.844665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.187 qpair failed and we were unable to recover it.
00:37:46.187 [2024-11-18 12:06:11.844798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.187 [2024-11-18 12:06:11.844850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.187 qpair failed and we were unable to recover it.
00:37:46.187 [2024-11-18 12:06:11.844995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.187 [2024-11-18 12:06:11.845032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.187 qpair failed and we were unable to recover it.
00:37:46.187 [2024-11-18 12:06:11.845240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.187 [2024-11-18 12:06:11.845277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.187 qpair failed and we were unable to recover it.
00:37:46.187 [2024-11-18 12:06:11.845448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.187 [2024-11-18 12:06:11.845511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.187 qpair failed and we were unable to recover it.
00:37:46.187 [2024-11-18 12:06:11.845677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.187 [2024-11-18 12:06:11.845724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.187 qpair failed and we were unable to recover it.
00:37:46.187 [2024-11-18 12:06:11.845916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.187 [2024-11-18 12:06:11.845956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.187 qpair failed and we were unable to recover it.
00:37:46.187 [2024-11-18 12:06:11.846217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.187 [2024-11-18 12:06:11.846277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.187 qpair failed and we were unable to recover it.
00:37:46.187 [2024-11-18 12:06:11.846468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.187 [2024-11-18 12:06:11.846509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.187 qpair failed and we were unable to recover it.
00:37:46.187 [2024-11-18 12:06:11.846635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.187 [2024-11-18 12:06:11.846671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.187 qpair failed and we were unable to recover it.
00:37:46.187 [2024-11-18 12:06:11.846828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.187 [2024-11-18 12:06:11.846867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.187 qpair failed and we were unable to recover it.
00:37:46.187 [2024-11-18 12:06:11.846989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.187 [2024-11-18 12:06:11.847026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.187 qpair failed and we were unable to recover it.
00:37:46.187 [2024-11-18 12:06:11.847228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.188 [2024-11-18 12:06:11.847265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.188 qpair failed and we were unable to recover it.
00:37:46.188 [2024-11-18 12:06:11.847397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.188 [2024-11-18 12:06:11.847450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.188 qpair failed and we were unable to recover it.
00:37:46.188 [2024-11-18 12:06:11.847623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.188 [2024-11-18 12:06:11.847672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.188 qpair failed and we were unable to recover it.
00:37:46.188 [2024-11-18 12:06:11.847850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.188 [2024-11-18 12:06:11.847886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.188 qpair failed and we were unable to recover it.
00:37:46.188 [2024-11-18 12:06:11.848047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.188 [2024-11-18 12:06:11.848087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.188 qpair failed and we were unable to recover it.
00:37:46.188 [2024-11-18 12:06:11.848375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.188 [2024-11-18 12:06:11.848438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.188 qpair failed and we were unable to recover it.
00:37:46.188 [2024-11-18 12:06:11.848604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.188 [2024-11-18 12:06:11.848647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.188 qpair failed and we were unable to recover it.
00:37:46.188 [2024-11-18 12:06:11.848799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.188 [2024-11-18 12:06:11.848833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.188 qpair failed and we were unable to recover it.
00:37:46.188 [2024-11-18 12:06:11.849066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.188 [2024-11-18 12:06:11.849123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.188 qpair failed and we were unable to recover it.
00:37:46.188 [2024-11-18 12:06:11.849284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.188 [2024-11-18 12:06:11.849321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.188 qpair failed and we were unable to recover it.
00:37:46.188 [2024-11-18 12:06:11.849503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.188 [2024-11-18 12:06:11.849560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.188 qpair failed and we were unable to recover it.
00:37:46.188 [2024-11-18 12:06:11.849683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.188 [2024-11-18 12:06:11.849731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.188 qpair failed and we were unable to recover it.
00:37:46.188 [2024-11-18 12:06:11.849847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.188 [2024-11-18 12:06:11.849883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.188 qpair failed and we were unable to recover it.
00:37:46.188 [2024-11-18 12:06:11.849996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.188 [2024-11-18 12:06:11.850031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.188 qpair failed and we were unable to recover it.
00:37:46.188 [2024-11-18 12:06:11.850166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.188 [2024-11-18 12:06:11.850203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.188 qpair failed and we were unable to recover it.
00:37:46.188 [2024-11-18 12:06:11.850330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.188 [2024-11-18 12:06:11.850382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.188 qpair failed and we were unable to recover it.
00:37:46.188 [2024-11-18 12:06:11.850500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.188 [2024-11-18 12:06:11.850561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.188 qpair failed and we were unable to recover it.
00:37:46.188 [2024-11-18 12:06:11.850725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.188 [2024-11-18 12:06:11.850769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.188 qpair failed and we were unable to recover it.
00:37:46.188 [2024-11-18 12:06:11.850942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.188 [2024-11-18 12:06:11.850995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.188 qpair failed and we were unable to recover it.
00:37:46.188 [2024-11-18 12:06:11.851153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.188 [2024-11-18 12:06:11.851191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.188 qpair failed and we were unable to recover it.
00:37:46.188 [2024-11-18 12:06:11.851368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.188 [2024-11-18 12:06:11.851404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.188 qpair failed and we were unable to recover it.
00:37:46.188 [2024-11-18 12:06:11.851560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.188 [2024-11-18 12:06:11.851595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.188 qpair failed and we were unable to recover it.
00:37:46.188 [2024-11-18 12:06:11.851730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.188 [2024-11-18 12:06:11.851782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.188 qpair failed and we were unable to recover it.
00:37:46.188 [2024-11-18 12:06:11.851976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.188 [2024-11-18 12:06:11.852017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.188 qpair failed and we were unable to recover it.
00:37:46.188 [2024-11-18 12:06:11.852246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.188 [2024-11-18 12:06:11.852285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.188 qpair failed and we were unable to recover it.
00:37:46.188 [2024-11-18 12:06:11.852441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.188 [2024-11-18 12:06:11.852480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.188 qpair failed and we were unable to recover it.
00:37:46.188 [2024-11-18 12:06:11.852626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.188 [2024-11-18 12:06:11.852660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.188 qpair failed and we were unable to recover it.
00:37:46.188 [2024-11-18 12:06:11.852824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.188 [2024-11-18 12:06:11.852857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.188 qpair failed and we were unable to recover it.
00:37:46.188 [2024-11-18 12:06:11.852996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.188 [2024-11-18 12:06:11.853029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.188 qpair failed and we were unable to recover it.
00:37:46.188 [2024-11-18 12:06:11.853163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.188 [2024-11-18 12:06:11.853215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.188 qpair failed and we were unable to recover it.
00:37:46.188 [2024-11-18 12:06:11.853386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.188 [2024-11-18 12:06:11.853423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.188 qpair failed and we were unable to recover it.
00:37:46.188 [2024-11-18 12:06:11.853566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.188 [2024-11-18 12:06:11.853600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.188 qpair failed and we were unable to recover it.
00:37:46.188 [2024-11-18 12:06:11.853719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.188 [2024-11-18 12:06:11.853754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.188 qpair failed and we were unable to recover it.
00:37:46.188 [2024-11-18 12:06:11.853930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.188 [2024-11-18 12:06:11.853964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.188 qpair failed and we were unable to recover it.
00:37:46.188 [2024-11-18 12:06:11.854094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.188 [2024-11-18 12:06:11.854131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.188 qpair failed and we were unable to recover it.
00:37:46.188 [2024-11-18 12:06:11.854307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.188 [2024-11-18 12:06:11.854344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.188 qpair failed and we were unable to recover it.
00:37:46.188 [2024-11-18 12:06:11.854469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.188 [2024-11-18 12:06:11.854512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.188 qpair failed and we were unable to recover it.
00:37:46.188 [2024-11-18 12:06:11.854663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.188 [2024-11-18 12:06:11.854701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.189 qpair failed and we were unable to recover it.
00:37:46.189 [2024-11-18 12:06:11.854846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.189 [2024-11-18 12:06:11.854884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.189 qpair failed and we were unable to recover it.
00:37:46.189 [2024-11-18 12:06:11.855061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.189 [2024-11-18 12:06:11.855098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.189 qpair failed and we were unable to recover it. 00:37:46.189 [2024-11-18 12:06:11.855272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.189 [2024-11-18 12:06:11.855309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.189 qpair failed and we were unable to recover it. 00:37:46.189 [2024-11-18 12:06:11.855470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.189 [2024-11-18 12:06:11.855516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.189 qpair failed and we were unable to recover it. 00:37:46.189 [2024-11-18 12:06:11.855670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.189 [2024-11-18 12:06:11.855703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.189 qpair failed and we were unable to recover it. 00:37:46.189 [2024-11-18 12:06:11.855838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.189 [2024-11-18 12:06:11.855872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.189 qpair failed and we were unable to recover it. 
00:37:46.189 [2024-11-18 12:06:11.856028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.189 [2024-11-18 12:06:11.856071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.189 qpair failed and we were unable to recover it. 00:37:46.189 [2024-11-18 12:06:11.856340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.189 [2024-11-18 12:06:11.856396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.189 qpair failed and we were unable to recover it. 00:37:46.189 [2024-11-18 12:06:11.856526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.189 [2024-11-18 12:06:11.856571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.189 qpair failed and we were unable to recover it. 00:37:46.189 [2024-11-18 12:06:11.856737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.189 [2024-11-18 12:06:11.856788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.189 qpair failed and we were unable to recover it. 00:37:46.189 [2024-11-18 12:06:11.856925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.189 [2024-11-18 12:06:11.856975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.189 qpair failed and we were unable to recover it. 
00:37:46.189 [2024-11-18 12:06:11.857180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.189 [2024-11-18 12:06:11.857239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.189 qpair failed and we were unable to recover it. 00:37:46.189 [2024-11-18 12:06:11.857408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.189 [2024-11-18 12:06:11.857446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.189 qpair failed and we were unable to recover it. 00:37:46.189 [2024-11-18 12:06:11.857623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.189 [2024-11-18 12:06:11.857657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.189 qpair failed and we were unable to recover it. 00:37:46.189 [2024-11-18 12:06:11.857810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.189 [2024-11-18 12:06:11.857862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.189 qpair failed and we were unable to recover it. 00:37:46.189 [2024-11-18 12:06:11.858087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.189 [2024-11-18 12:06:11.858147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.189 qpair failed and we were unable to recover it. 
00:37:46.189 [2024-11-18 12:06:11.858283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.189 [2024-11-18 12:06:11.858316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.189 qpair failed and we were unable to recover it. 00:37:46.189 [2024-11-18 12:06:11.858447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.189 [2024-11-18 12:06:11.858481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.189 qpair failed and we were unable to recover it. 00:37:46.189 [2024-11-18 12:06:11.858633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.189 [2024-11-18 12:06:11.858667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.189 qpair failed and we were unable to recover it. 00:37:46.189 [2024-11-18 12:06:11.858765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.189 [2024-11-18 12:06:11.858799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.189 qpair failed and we were unable to recover it. 00:37:46.189 [2024-11-18 12:06:11.858973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.189 [2024-11-18 12:06:11.859027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.189 qpair failed and we were unable to recover it. 
00:37:46.189 [2024-11-18 12:06:11.859273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.189 [2024-11-18 12:06:11.859327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.189 qpair failed and we were unable to recover it. 00:37:46.189 [2024-11-18 12:06:11.859504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.189 [2024-11-18 12:06:11.859542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.189 qpair failed and we were unable to recover it. 00:37:46.189 [2024-11-18 12:06:11.859666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.189 [2024-11-18 12:06:11.859700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.189 qpair failed and we were unable to recover it. 00:37:46.189 [2024-11-18 12:06:11.859813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.189 [2024-11-18 12:06:11.859846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.189 qpair failed and we were unable to recover it. 00:37:46.189 [2024-11-18 12:06:11.860034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.189 [2024-11-18 12:06:11.860083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.189 qpair failed and we were unable to recover it. 
00:37:46.189 [2024-11-18 12:06:11.860266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.189 [2024-11-18 12:06:11.860306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.189 qpair failed and we were unable to recover it. 00:37:46.189 [2024-11-18 12:06:11.860467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.189 [2024-11-18 12:06:11.860529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.189 qpair failed and we were unable to recover it. 00:37:46.189 [2024-11-18 12:06:11.860682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.189 [2024-11-18 12:06:11.860720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.189 qpair failed and we were unable to recover it. 00:37:46.189 [2024-11-18 12:06:11.860870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.189 [2024-11-18 12:06:11.860908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.189 qpair failed and we were unable to recover it. 00:37:46.189 [2024-11-18 12:06:11.861053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.189 [2024-11-18 12:06:11.861090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.189 qpair failed and we were unable to recover it. 
00:37:46.189 [2024-11-18 12:06:11.861252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.189 [2024-11-18 12:06:11.861293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.189 qpair failed and we were unable to recover it. 00:37:46.189 [2024-11-18 12:06:11.861508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.189 [2024-11-18 12:06:11.861575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.189 qpair failed and we were unable to recover it. 00:37:46.189 [2024-11-18 12:06:11.861744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.190 [2024-11-18 12:06:11.861804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.190 qpair failed and we were unable to recover it. 00:37:46.190 [2024-11-18 12:06:11.861990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.190 [2024-11-18 12:06:11.862059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.190 qpair failed and we were unable to recover it. 00:37:46.190 [2024-11-18 12:06:11.862259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.190 [2024-11-18 12:06:11.862310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.190 qpair failed and we were unable to recover it. 
00:37:46.190 [2024-11-18 12:06:11.862472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.190 [2024-11-18 12:06:11.862515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.190 qpair failed and we were unable to recover it. 00:37:46.190 [2024-11-18 12:06:11.862663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.190 [2024-11-18 12:06:11.862696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.190 qpair failed and we were unable to recover it. 00:37:46.190 [2024-11-18 12:06:11.862875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.190 [2024-11-18 12:06:11.862912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.190 qpair failed and we were unable to recover it. 00:37:46.190 [2024-11-18 12:06:11.863078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.190 [2024-11-18 12:06:11.863115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.190 qpair failed and we were unable to recover it. 00:37:46.190 [2024-11-18 12:06:11.863261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.190 [2024-11-18 12:06:11.863300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.190 qpair failed and we were unable to recover it. 
00:37:46.190 [2024-11-18 12:06:11.863452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.190 [2024-11-18 12:06:11.863514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.190 qpair failed and we were unable to recover it. 00:37:46.190 [2024-11-18 12:06:11.863698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.190 [2024-11-18 12:06:11.863746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.190 qpair failed and we were unable to recover it. 00:37:46.190 [2024-11-18 12:06:11.863879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.190 [2024-11-18 12:06:11.863933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.190 qpair failed and we were unable to recover it. 00:37:46.190 [2024-11-18 12:06:11.864088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.190 [2024-11-18 12:06:11.864140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.190 qpair failed and we were unable to recover it. 00:37:46.190 [2024-11-18 12:06:11.864274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.190 [2024-11-18 12:06:11.864308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.190 qpair failed and we were unable to recover it. 
00:37:46.190 [2024-11-18 12:06:11.864424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.190 [2024-11-18 12:06:11.864464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.190 qpair failed and we were unable to recover it. 00:37:46.190 [2024-11-18 12:06:11.864669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.190 [2024-11-18 12:06:11.864716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.190 qpair failed and we were unable to recover it. 00:37:46.190 [2024-11-18 12:06:11.864836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.190 [2024-11-18 12:06:11.864871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.190 qpair failed and we were unable to recover it. 00:37:46.190 [2024-11-18 12:06:11.865033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.190 [2024-11-18 12:06:11.865067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.190 qpair failed and we were unable to recover it. 00:37:46.190 [2024-11-18 12:06:11.865200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.190 [2024-11-18 12:06:11.865234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.190 qpair failed and we were unable to recover it. 
00:37:46.190 [2024-11-18 12:06:11.865370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.190 [2024-11-18 12:06:11.865404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.190 qpair failed and we were unable to recover it. 00:37:46.190 [2024-11-18 12:06:11.865552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.190 [2024-11-18 12:06:11.865589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.190 qpair failed and we were unable to recover it. 00:37:46.190 [2024-11-18 12:06:11.865751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.190 [2024-11-18 12:06:11.865804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.190 qpair failed and we were unable to recover it. 00:37:46.190 [2024-11-18 12:06:11.865954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.190 [2024-11-18 12:06:11.866008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.190 qpair failed and we were unable to recover it. 00:37:46.190 [2024-11-18 12:06:11.866166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.190 [2024-11-18 12:06:11.866218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.190 qpair failed and we were unable to recover it. 
00:37:46.190 [2024-11-18 12:06:11.866397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.190 [2024-11-18 12:06:11.866445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.190 qpair failed and we were unable to recover it. 00:37:46.190 [2024-11-18 12:06:11.866614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.190 [2024-11-18 12:06:11.866675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.190 qpair failed and we were unable to recover it. 00:37:46.190 [2024-11-18 12:06:11.866864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.190 [2024-11-18 12:06:11.866905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.190 qpair failed and we were unable to recover it. 00:37:46.190 [2024-11-18 12:06:11.867059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.190 [2024-11-18 12:06:11.867137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.190 qpair failed and we were unable to recover it. 00:37:46.190 [2024-11-18 12:06:11.867340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.190 [2024-11-18 12:06:11.867374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.190 qpair failed and we were unable to recover it. 
00:37:46.190 [2024-11-18 12:06:11.867539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.190 [2024-11-18 12:06:11.867573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.190 qpair failed and we were unable to recover it. 00:37:46.190 [2024-11-18 12:06:11.867685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.190 [2024-11-18 12:06:11.867737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.190 qpair failed and we were unable to recover it. 00:37:46.190 [2024-11-18 12:06:11.867881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.190 [2024-11-18 12:06:11.867919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.190 qpair failed and we were unable to recover it. 00:37:46.190 [2024-11-18 12:06:11.868096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.190 [2024-11-18 12:06:11.868134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.190 qpair failed and we were unable to recover it. 00:37:46.190 [2024-11-18 12:06:11.868282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.190 [2024-11-18 12:06:11.868318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.190 qpair failed and we were unable to recover it. 
00:37:46.190 [2024-11-18 12:06:11.868434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.190 [2024-11-18 12:06:11.868473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.190 qpair failed and we were unable to recover it. 00:37:46.190 [2024-11-18 12:06:11.868694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.190 [2024-11-18 12:06:11.868742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.190 qpair failed and we were unable to recover it. 00:37:46.190 [2024-11-18 12:06:11.868895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.190 [2024-11-18 12:06:11.868950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.190 qpair failed and we were unable to recover it. 00:37:46.190 [2024-11-18 12:06:11.869093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.190 [2024-11-18 12:06:11.869173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.190 qpair failed and we were unable to recover it. 00:37:46.190 [2024-11-18 12:06:11.869325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.190 [2024-11-18 12:06:11.869375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.191 qpair failed and we were unable to recover it. 
00:37:46.191 [2024-11-18 12:06:11.869517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.191 [2024-11-18 12:06:11.869552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.191 qpair failed and we were unable to recover it. 00:37:46.191 [2024-11-18 12:06:11.869653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.191 [2024-11-18 12:06:11.869687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.191 qpair failed and we were unable to recover it. 00:37:46.191 [2024-11-18 12:06:11.869848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.191 [2024-11-18 12:06:11.869887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.191 qpair failed and we were unable to recover it. 00:37:46.191 [2024-11-18 12:06:11.870031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.191 [2024-11-18 12:06:11.870069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.191 qpair failed and we were unable to recover it. 00:37:46.191 [2024-11-18 12:06:11.870244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.191 [2024-11-18 12:06:11.870282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.191 qpair failed and we were unable to recover it. 
00:37:46.191 [2024-11-18 12:06:11.870459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.191 [2024-11-18 12:06:11.870503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.191 qpair failed and we were unable to recover it. 00:37:46.191 [2024-11-18 12:06:11.870714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.191 [2024-11-18 12:06:11.870786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.191 qpair failed and we were unable to recover it. 00:37:46.191 [2024-11-18 12:06:11.870997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.191 [2024-11-18 12:06:11.871049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.191 qpair failed and we were unable to recover it. 00:37:46.191 [2024-11-18 12:06:11.871175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.191 [2024-11-18 12:06:11.871214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.191 qpair failed and we were unable to recover it. 00:37:46.191 [2024-11-18 12:06:11.871366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.191 [2024-11-18 12:06:11.871404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.191 qpair failed and we were unable to recover it. 
00:37:46.191 [2024-11-18 12:06:11.871594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.191 [2024-11-18 12:06:11.871628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.191 qpair failed and we were unable to recover it. 00:37:46.191 [2024-11-18 12:06:11.871785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.191 [2024-11-18 12:06:11.871852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.191 qpair failed and we were unable to recover it. 00:37:46.191 [2024-11-18 12:06:11.872060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.191 [2024-11-18 12:06:11.872124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.191 qpair failed and we were unable to recover it. 00:37:46.191 [2024-11-18 12:06:11.872278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.191 [2024-11-18 12:06:11.872376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.191 qpair failed and we were unable to recover it. 00:37:46.191 [2024-11-18 12:06:11.872564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.191 [2024-11-18 12:06:11.872599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.191 qpair failed and we were unable to recover it. 
00:37:46.191 [2024-11-18 12:06:11.872734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.191 [2024-11-18 12:06:11.872790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.191 qpair failed and we were unable to recover it. 00:37:46.191 [2024-11-18 12:06:11.872999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.191 [2024-11-18 12:06:11.873060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.191 qpair failed and we were unable to recover it. 00:37:46.191 [2024-11-18 12:06:11.873203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.191 [2024-11-18 12:06:11.873240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.191 qpair failed and we were unable to recover it. 00:37:46.191 [2024-11-18 12:06:11.873412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.191 [2024-11-18 12:06:11.873449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.191 qpair failed and we were unable to recover it. 00:37:46.191 [2024-11-18 12:06:11.873593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.191 [2024-11-18 12:06:11.873627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.191 qpair failed and we were unable to recover it. 
00:37:46.191 [2024-11-18 12:06:11.873739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.191 [2024-11-18 12:06:11.873787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.191 qpair failed and we were unable to recover it. 00:37:46.191 [2024-11-18 12:06:11.873925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.191 [2024-11-18 12:06:11.873961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.191 qpair failed and we were unable to recover it. 00:37:46.191 [2024-11-18 12:06:11.874098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.191 [2024-11-18 12:06:11.874149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.191 qpair failed and we were unable to recover it. 00:37:46.191 [2024-11-18 12:06:11.874354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.191 [2024-11-18 12:06:11.874413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.191 qpair failed and we were unable to recover it. 00:37:46.191 [2024-11-18 12:06:11.874600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.191 [2024-11-18 12:06:11.874648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.191 qpair failed and we were unable to recover it. 
00:37:46.191 [2024-11-18 12:06:11.874797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.191 [2024-11-18 12:06:11.874853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.191 qpair failed and we were unable to recover it.
00:37:46.191 [2024-11-18 12:06:11.875068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.191 [2024-11-18 12:06:11.875125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.191 qpair failed and we were unable to recover it.
00:37:46.191 [2024-11-18 12:06:11.875350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.191 [2024-11-18 12:06:11.875409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.191 qpair failed and we were unable to recover it.
00:37:46.191 [2024-11-18 12:06:11.875609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.191 [2024-11-18 12:06:11.875645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.191 qpair failed and we were unable to recover it.
00:37:46.191 [2024-11-18 12:06:11.875781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.191 [2024-11-18 12:06:11.875847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.191 qpair failed and we were unable to recover it.
00:37:46.191 [2024-11-18 12:06:11.876062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.191 [2024-11-18 12:06:11.876121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.191 qpair failed and we were unable to recover it.
00:37:46.191 [2024-11-18 12:06:11.876382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.191 [2024-11-18 12:06:11.876439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.191 qpair failed and we were unable to recover it.
00:37:46.191 [2024-11-18 12:06:11.876608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.191 [2024-11-18 12:06:11.876642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.191 qpair failed and we were unable to recover it.
00:37:46.191 [2024-11-18 12:06:11.876767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.191 [2024-11-18 12:06:11.876814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.191 qpair failed and we were unable to recover it.
00:37:46.191 [2024-11-18 12:06:11.877010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.191 [2024-11-18 12:06:11.877084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.191 qpair failed and we were unable to recover it.
00:37:46.191 [2024-11-18 12:06:11.877208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.191 [2024-11-18 12:06:11.877246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.191 qpair failed and we were unable to recover it.
00:37:46.191 [2024-11-18 12:06:11.877420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.191 [2024-11-18 12:06:11.877454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.191 qpair failed and we were unable to recover it.
00:37:46.191 [2024-11-18 12:06:11.877592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.192 [2024-11-18 12:06:11.877626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.192 qpair failed and we were unable to recover it.
00:37:46.192 [2024-11-18 12:06:11.877728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.192 [2024-11-18 12:06:11.877762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.192 qpair failed and we were unable to recover it.
00:37:46.192 [2024-11-18 12:06:11.878042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.192 [2024-11-18 12:06:11.878099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.192 qpair failed and we were unable to recover it.
00:37:46.192 [2024-11-18 12:06:11.878292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.192 [2024-11-18 12:06:11.878329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.192 qpair failed and we were unable to recover it.
00:37:46.192 [2024-11-18 12:06:11.878471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.192 [2024-11-18 12:06:11.878548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.192 qpair failed and we were unable to recover it.
00:37:46.192 [2024-11-18 12:06:11.878692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.192 [2024-11-18 12:06:11.878727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.192 qpair failed and we were unable to recover it.
00:37:46.192 [2024-11-18 12:06:11.878881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.192 [2024-11-18 12:06:11.878915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.192 qpair failed and we were unable to recover it.
00:37:46.192 [2024-11-18 12:06:11.879126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.192 [2024-11-18 12:06:11.879175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.192 qpair failed and we were unable to recover it.
00:37:46.192 [2024-11-18 12:06:11.879317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.192 [2024-11-18 12:06:11.879353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.192 qpair failed and we were unable to recover it.
00:37:46.192 [2024-11-18 12:06:11.879571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.192 [2024-11-18 12:06:11.879605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.192 qpair failed and we were unable to recover it.
00:37:46.192 [2024-11-18 12:06:11.879750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.192 [2024-11-18 12:06:11.879802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.192 qpair failed and we were unable to recover it.
00:37:46.192 [2024-11-18 12:06:11.879986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.192 [2024-11-18 12:06:11.880023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.192 qpair failed and we were unable to recover it.
00:37:46.192 [2024-11-18 12:06:11.880194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.192 [2024-11-18 12:06:11.880231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.192 qpair failed and we were unable to recover it.
00:37:46.192 [2024-11-18 12:06:11.880372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.192 [2024-11-18 12:06:11.880410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.192 qpair failed and we were unable to recover it.
00:37:46.192 [2024-11-18 12:06:11.880579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.192 [2024-11-18 12:06:11.880628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.192 qpair failed and we were unable to recover it.
00:37:46.192 [2024-11-18 12:06:11.880773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.192 [2024-11-18 12:06:11.880809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.192 qpair failed and we were unable to recover it.
00:37:46.192 [2024-11-18 12:06:11.880965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.192 [2024-11-18 12:06:11.881002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.192 qpair failed and we were unable to recover it.
00:37:46.192 [2024-11-18 12:06:11.881137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.192 [2024-11-18 12:06:11.881170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.192 qpair failed and we were unable to recover it.
00:37:46.192 [2024-11-18 12:06:11.881307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.192 [2024-11-18 12:06:11.881352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.192 qpair failed and we were unable to recover it.
00:37:46.192 [2024-11-18 12:06:11.881566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.192 [2024-11-18 12:06:11.881615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.192 qpair failed and we were unable to recover it.
00:37:46.192 [2024-11-18 12:06:11.881755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.192 [2024-11-18 12:06:11.881810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.192 qpair failed and we were unable to recover it.
00:37:46.192 [2024-11-18 12:06:11.881938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.192 [2024-11-18 12:06:11.881989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.192 qpair failed and we were unable to recover it.
00:37:46.192 [2024-11-18 12:06:11.882104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.192 [2024-11-18 12:06:11.882141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.192 qpair failed and we were unable to recover it.
00:37:46.192 [2024-11-18 12:06:11.882287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.192 [2024-11-18 12:06:11.882324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.192 qpair failed and we were unable to recover it.
00:37:46.192 [2024-11-18 12:06:11.882497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.192 [2024-11-18 12:06:11.882547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.192 qpair failed and we were unable to recover it.
00:37:46.192 [2024-11-18 12:06:11.882670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.192 [2024-11-18 12:06:11.882706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.192 qpair failed and we were unable to recover it.
00:37:46.192 [2024-11-18 12:06:11.882874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.192 [2024-11-18 12:06:11.882925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.192 qpair failed and we were unable to recover it.
00:37:46.192 [2024-11-18 12:06:11.883027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.192 [2024-11-18 12:06:11.883062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.192 qpair failed and we were unable to recover it.
00:37:46.192 [2024-11-18 12:06:11.883192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.192 [2024-11-18 12:06:11.883233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.192 qpair failed and we were unable to recover it.
00:37:46.192 [2024-11-18 12:06:11.883391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.192 [2024-11-18 12:06:11.883425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.192 qpair failed and we were unable to recover it.
00:37:46.192 [2024-11-18 12:06:11.883592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.192 [2024-11-18 12:06:11.883641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.192 qpair failed and we were unable to recover it.
00:37:46.192 [2024-11-18 12:06:11.883831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.192 [2024-11-18 12:06:11.883885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.192 qpair failed and we were unable to recover it.
00:37:46.192 [2024-11-18 12:06:11.884039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.192 [2024-11-18 12:06:11.884096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.192 qpair failed and we were unable to recover it.
00:37:46.192 [2024-11-18 12:06:11.884230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.192 [2024-11-18 12:06:11.884283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.192 qpair failed and we were unable to recover it.
00:37:46.192 [2024-11-18 12:06:11.884442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.192 [2024-11-18 12:06:11.884476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.192 qpair failed and we were unable to recover it.
00:37:46.192 [2024-11-18 12:06:11.884603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.192 [2024-11-18 12:06:11.884637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.192 qpair failed and we were unable to recover it.
00:37:46.192 [2024-11-18 12:06:11.884785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.192 [2024-11-18 12:06:11.884837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.192 qpair failed and we were unable to recover it.
00:37:46.192 [2024-11-18 12:06:11.884953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.192 [2024-11-18 12:06:11.884987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.192 qpair failed and we were unable to recover it.
00:37:46.193 [2024-11-18 12:06:11.885148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.193 [2024-11-18 12:06:11.885182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.193 qpair failed and we were unable to recover it.
00:37:46.193 [2024-11-18 12:06:11.885287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.193 [2024-11-18 12:06:11.885321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.193 qpair failed and we were unable to recover it.
00:37:46.193 [2024-11-18 12:06:11.885461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.193 [2024-11-18 12:06:11.885500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.193 qpair failed and we were unable to recover it.
00:37:46.193 [2024-11-18 12:06:11.885652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.193 [2024-11-18 12:06:11.885699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.193 qpair failed and we were unable to recover it.
00:37:46.193 [2024-11-18 12:06:11.885842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.193 [2024-11-18 12:06:11.885878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.193 qpair failed and we were unable to recover it.
00:37:46.193 [2024-11-18 12:06:11.886019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.193 [2024-11-18 12:06:11.886072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.193 qpair failed and we were unable to recover it.
00:37:46.193 [2024-11-18 12:06:11.886227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.193 [2024-11-18 12:06:11.886280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.193 qpair failed and we were unable to recover it.
00:37:46.193 [2024-11-18 12:06:11.886398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.193 [2024-11-18 12:06:11.886451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.193 qpair failed and we were unable to recover it.
00:37:46.193 [2024-11-18 12:06:11.886586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.193 [2024-11-18 12:06:11.886622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.193 qpair failed and we were unable to recover it.
00:37:46.193 [2024-11-18 12:06:11.886781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.193 [2024-11-18 12:06:11.886836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.193 qpair failed and we were unable to recover it.
00:37:46.193 [2024-11-18 12:06:11.887059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.193 [2024-11-18 12:06:11.887093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.193 qpair failed and we were unable to recover it.
00:37:46.193 [2024-11-18 12:06:11.887200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.193 [2024-11-18 12:06:11.887235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.193 qpair failed and we were unable to recover it.
00:37:46.193 [2024-11-18 12:06:11.887369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.193 [2024-11-18 12:06:11.887403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.193 qpair failed and we were unable to recover it.
00:37:46.193 [2024-11-18 12:06:11.887510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.193 [2024-11-18 12:06:11.887545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.193 qpair failed and we were unable to recover it.
00:37:46.193 [2024-11-18 12:06:11.887660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.193 [2024-11-18 12:06:11.887713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.193 qpair failed and we were unable to recover it.
00:37:46.193 [2024-11-18 12:06:11.887880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.193 [2024-11-18 12:06:11.887914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.193 qpair failed and we were unable to recover it.
00:37:46.193 [2024-11-18 12:06:11.888015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.193 [2024-11-18 12:06:11.888049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.193 qpair failed and we were unable to recover it.
00:37:46.193 [2024-11-18 12:06:11.888185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.193 [2024-11-18 12:06:11.888219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.193 qpair failed and we were unable to recover it.
00:37:46.193 [2024-11-18 12:06:11.888364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.193 [2024-11-18 12:06:11.888399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.193 qpair failed and we were unable to recover it.
00:37:46.193 [2024-11-18 12:06:11.888550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.193 [2024-11-18 12:06:11.888603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.193 qpair failed and we were unable to recover it.
00:37:46.193 [2024-11-18 12:06:11.888740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.193 [2024-11-18 12:06:11.888779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.193 qpair failed and we were unable to recover it.
00:37:46.193 [2024-11-18 12:06:11.889001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.193 [2024-11-18 12:06:11.889059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.193 qpair failed and we were unable to recover it.
00:37:46.193 [2024-11-18 12:06:11.889174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.193 [2024-11-18 12:06:11.889211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.193 qpair failed and we were unable to recover it.
00:37:46.193 [2024-11-18 12:06:11.889342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.193 [2024-11-18 12:06:11.889396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.193 qpair failed and we were unable to recover it.
00:37:46.193 [2024-11-18 12:06:11.889588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.193 [2024-11-18 12:06:11.889624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.193 qpair failed and we were unable to recover it.
00:37:46.193 [2024-11-18 12:06:11.889798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.193 [2024-11-18 12:06:11.889851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.193 qpair failed and we were unable to recover it.
00:37:46.193 [2024-11-18 12:06:11.890044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.193 [2024-11-18 12:06:11.890112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.193 qpair failed and we were unable to recover it.
00:37:46.193 [2024-11-18 12:06:11.890237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.193 [2024-11-18 12:06:11.890275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.193 qpair failed and we were unable to recover it.
00:37:46.193 [2024-11-18 12:06:11.890429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.193 [2024-11-18 12:06:11.890464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.193 qpair failed and we were unable to recover it.
00:37:46.193 [2024-11-18 12:06:11.890624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.193 [2024-11-18 12:06:11.890660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.193 qpair failed and we were unable to recover it.
00:37:46.193 [2024-11-18 12:06:11.890833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.193 [2024-11-18 12:06:11.890871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.193 qpair failed and we were unable to recover it.
00:37:46.193 [2024-11-18 12:06:11.891100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.193 [2024-11-18 12:06:11.891139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.193 qpair failed and we were unable to recover it.
00:37:46.193 [2024-11-18 12:06:11.891266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.193 [2024-11-18 12:06:11.891304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.193 qpair failed and we were unable to recover it.
00:37:46.193 [2024-11-18 12:06:11.891468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.193 [2024-11-18 12:06:11.891529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.193 qpair failed and we were unable to recover it.
00:37:46.193 [2024-11-18 12:06:11.891705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.193 [2024-11-18 12:06:11.891765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.193 qpair failed and we were unable to recover it.
00:37:46.193 [2024-11-18 12:06:11.892042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.193 [2024-11-18 12:06:11.892106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.193 qpair failed and we were unable to recover it.
00:37:46.193 [2024-11-18 12:06:11.892316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.193 [2024-11-18 12:06:11.892353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.193 qpair failed and we were unable to recover it.
00:37:46.193 [2024-11-18 12:06:11.892507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.194 [2024-11-18 12:06:11.892560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.194 qpair failed and we were unable to recover it.
00:37:46.194 [2024-11-18 12:06:11.892692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.194 [2024-11-18 12:06:11.892725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.194 qpair failed and we were unable to recover it.
00:37:46.194 [2024-11-18 12:06:11.892856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.194 [2024-11-18 12:06:11.892907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.194 qpair failed and we were unable to recover it.
00:37:46.194 [2024-11-18 12:06:11.893082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.194 [2024-11-18 12:06:11.893120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.194 qpair failed and we were unable to recover it.
00:37:46.194 [2024-11-18 12:06:11.893329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.194 [2024-11-18 12:06:11.893385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.194 qpair failed and we were unable to recover it.
00:37:46.194 [2024-11-18 12:06:11.893526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.194 [2024-11-18 12:06:11.893562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.194 qpair failed and we were unable to recover it.
00:37:46.194 [2024-11-18 12:06:11.893761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.194 [2024-11-18 12:06:11.893814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.194 qpair failed and we were unable to recover it.
00:37:46.194 [2024-11-18 12:06:11.893970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.194 [2024-11-18 12:06:11.894011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.194 qpair failed and we were unable to recover it.
00:37:46.194 [2024-11-18 12:06:11.894119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.194 [2024-11-18 12:06:11.894156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.194 qpair failed and we were unable to recover it. 00:37:46.194 [2024-11-18 12:06:11.894329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.194 [2024-11-18 12:06:11.894367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.194 qpair failed and we were unable to recover it. 00:37:46.194 [2024-11-18 12:06:11.894560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.194 [2024-11-18 12:06:11.894601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.194 qpair failed and we were unable to recover it. 00:37:46.194 [2024-11-18 12:06:11.894713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.194 [2024-11-18 12:06:11.894747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.194 qpair failed and we were unable to recover it. 00:37:46.194 [2024-11-18 12:06:11.894900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.194 [2024-11-18 12:06:11.894937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.194 qpair failed and we were unable to recover it. 
00:37:46.194 [2024-11-18 12:06:11.895404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.194 [2024-11-18 12:06:11.895446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.194 qpair failed and we were unable to recover it. 00:37:46.194 [2024-11-18 12:06:11.895643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.194 [2024-11-18 12:06:11.895678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.194 qpair failed and we were unable to recover it. 00:37:46.194 [2024-11-18 12:06:11.895882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.194 [2024-11-18 12:06:11.895942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.194 qpair failed and we were unable to recover it. 00:37:46.194 [2024-11-18 12:06:11.896092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.194 [2024-11-18 12:06:11.896131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.194 qpair failed and we were unable to recover it. 00:37:46.194 [2024-11-18 12:06:11.896261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.194 [2024-11-18 12:06:11.896299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.194 qpair failed and we were unable to recover it. 
00:37:46.194 [2024-11-18 12:06:11.896474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.194 [2024-11-18 12:06:11.896539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.194 qpair failed and we were unable to recover it. 00:37:46.194 [2024-11-18 12:06:11.896697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.194 [2024-11-18 12:06:11.896746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.194 qpair failed and we were unable to recover it. 00:37:46.194 [2024-11-18 12:06:11.896936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.194 [2024-11-18 12:06:11.896976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.194 qpair failed and we were unable to recover it. 00:37:46.194 [2024-11-18 12:06:11.897097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.194 [2024-11-18 12:06:11.897135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.194 qpair failed and we were unable to recover it. 00:37:46.194 [2024-11-18 12:06:11.897334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.194 [2024-11-18 12:06:11.897372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.194 qpair failed and we were unable to recover it. 
00:37:46.194 [2024-11-18 12:06:11.897488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.194 [2024-11-18 12:06:11.897555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.194 qpair failed and we were unable to recover it. 00:37:46.194 [2024-11-18 12:06:11.897703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.194 [2024-11-18 12:06:11.897737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.194 qpair failed and we were unable to recover it. 00:37:46.194 [2024-11-18 12:06:11.897910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.194 [2024-11-18 12:06:11.897947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.194 qpair failed and we were unable to recover it. 00:37:46.194 [2024-11-18 12:06:11.898085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.194 [2024-11-18 12:06:11.898138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.194 qpair failed and we were unable to recover it. 00:37:46.194 [2024-11-18 12:06:11.898325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.194 [2024-11-18 12:06:11.898365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.194 qpair failed and we were unable to recover it. 
00:37:46.194 [2024-11-18 12:06:11.898548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.194 [2024-11-18 12:06:11.898597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.194 qpair failed and we were unable to recover it. 00:37:46.194 [2024-11-18 12:06:11.898745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.194 [2024-11-18 12:06:11.898783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.194 qpair failed and we were unable to recover it. 00:37:46.194 [2024-11-18 12:06:11.898891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.194 [2024-11-18 12:06:11.898927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.194 qpair failed and we were unable to recover it. 00:37:46.194 [2024-11-18 12:06:11.899034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.194 [2024-11-18 12:06:11.899070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.194 qpair failed and we were unable to recover it. 00:37:46.194 [2024-11-18 12:06:11.899223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.194 [2024-11-18 12:06:11.899258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.194 qpair failed and we were unable to recover it. 
00:37:46.194 [2024-11-18 12:06:11.899367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.195 [2024-11-18 12:06:11.899401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.195 qpair failed and we were unable to recover it. 00:37:46.195 [2024-11-18 12:06:11.899510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.195 [2024-11-18 12:06:11.899545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.195 qpair failed and we were unable to recover it. 00:37:46.195 [2024-11-18 12:06:11.899715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.195 [2024-11-18 12:06:11.899758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.195 qpair failed and we were unable to recover it. 00:37:46.195 [2024-11-18 12:06:11.899899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.195 [2024-11-18 12:06:11.899952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.195 qpair failed and we were unable to recover it. 00:37:46.195 [2024-11-18 12:06:11.900116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.195 [2024-11-18 12:06:11.900156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.195 qpair failed and we were unable to recover it. 
00:37:46.195 [2024-11-18 12:06:11.900339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.195 [2024-11-18 12:06:11.900373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.195 qpair failed and we were unable to recover it. 00:37:46.195 [2024-11-18 12:06:11.900472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.195 [2024-11-18 12:06:11.900512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.195 qpair failed and we were unable to recover it. 00:37:46.195 [2024-11-18 12:06:11.900687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.195 [2024-11-18 12:06:11.900739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.195 qpair failed and we were unable to recover it. 00:37:46.195 [2024-11-18 12:06:11.900849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.195 [2024-11-18 12:06:11.900884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.195 qpair failed and we were unable to recover it. 00:37:46.195 [2024-11-18 12:06:11.901038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.195 [2024-11-18 12:06:11.901076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.195 qpair failed and we were unable to recover it. 
00:37:46.195 [2024-11-18 12:06:11.901226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.195 [2024-11-18 12:06:11.901259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.195 qpair failed and we were unable to recover it. 00:37:46.195 [2024-11-18 12:06:11.901367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.195 [2024-11-18 12:06:11.901400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.195 qpair failed and we were unable to recover it. 00:37:46.195 [2024-11-18 12:06:11.901542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.195 [2024-11-18 12:06:11.901577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.195 qpair failed and we were unable to recover it. 00:37:46.195 [2024-11-18 12:06:11.901684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.195 [2024-11-18 12:06:11.901718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.195 qpair failed and we were unable to recover it. 00:37:46.195 [2024-11-18 12:06:11.901861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.195 [2024-11-18 12:06:11.901896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.195 qpair failed and we were unable to recover it. 
00:37:46.195 [2024-11-18 12:06:11.902045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.195 [2024-11-18 12:06:11.902081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.195 qpair failed and we were unable to recover it. 00:37:46.195 [2024-11-18 12:06:11.902217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.195 [2024-11-18 12:06:11.902251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.195 qpair failed and we were unable to recover it. 00:37:46.195 [2024-11-18 12:06:11.902392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.195 [2024-11-18 12:06:11.902431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.195 qpair failed and we were unable to recover it. 00:37:46.195 [2024-11-18 12:06:11.902575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.195 [2024-11-18 12:06:11.902610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.195 qpair failed and we were unable to recover it. 00:37:46.195 [2024-11-18 12:06:11.902751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.195 [2024-11-18 12:06:11.902784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.195 qpair failed and we were unable to recover it. 
00:37:46.195 [2024-11-18 12:06:11.902937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.195 [2024-11-18 12:06:11.902984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.195 qpair failed and we were unable to recover it. 00:37:46.195 [2024-11-18 12:06:11.903149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.195 [2024-11-18 12:06:11.903202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.195 qpair failed and we were unable to recover it. 00:37:46.195 [2024-11-18 12:06:11.903365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.195 [2024-11-18 12:06:11.903398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.195 qpair failed and we were unable to recover it. 00:37:46.195 [2024-11-18 12:06:11.903538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.195 [2024-11-18 12:06:11.903572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.195 qpair failed and we were unable to recover it. 00:37:46.195 [2024-11-18 12:06:11.903708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.195 [2024-11-18 12:06:11.903759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.195 qpair failed and we were unable to recover it. 
00:37:46.195 [2024-11-18 12:06:11.903888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.195 [2024-11-18 12:06:11.903925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.195 qpair failed and we were unable to recover it. 00:37:46.195 [2024-11-18 12:06:11.904096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.195 [2024-11-18 12:06:11.904132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.195 qpair failed and we were unable to recover it. 00:37:46.195 [2024-11-18 12:06:11.904314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.195 [2024-11-18 12:06:11.904355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.195 qpair failed and we were unable to recover it. 00:37:46.195 [2024-11-18 12:06:11.904467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.195 [2024-11-18 12:06:11.904527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.195 qpair failed and we were unable to recover it. 00:37:46.195 [2024-11-18 12:06:11.904665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.195 [2024-11-18 12:06:11.904702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.195 qpair failed and we were unable to recover it. 
00:37:46.195 [2024-11-18 12:06:11.904924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.195 [2024-11-18 12:06:11.904961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.195 qpair failed and we were unable to recover it. 00:37:46.195 [2024-11-18 12:06:11.905185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.195 [2024-11-18 12:06:11.905223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.195 qpair failed and we were unable to recover it. 00:37:46.195 [2024-11-18 12:06:11.905384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.195 [2024-11-18 12:06:11.905418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.195 qpair failed and we were unable to recover it. 00:37:46.195 [2024-11-18 12:06:11.905579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.195 [2024-11-18 12:06:11.905614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.195 qpair failed and we were unable to recover it. 00:37:46.195 [2024-11-18 12:06:11.905767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.195 [2024-11-18 12:06:11.905822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.195 qpair failed and we were unable to recover it. 
00:37:46.195 [2024-11-18 12:06:11.905993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.195 [2024-11-18 12:06:11.906054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.195 qpair failed and we were unable to recover it. 00:37:46.195 [2024-11-18 12:06:11.906255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.195 [2024-11-18 12:06:11.906311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.195 qpair failed and we were unable to recover it. 00:37:46.195 [2024-11-18 12:06:11.906442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.195 [2024-11-18 12:06:11.906476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.195 qpair failed and we were unable to recover it. 00:37:46.196 [2024-11-18 12:06:11.906625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.196 [2024-11-18 12:06:11.906659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.196 qpair failed and we were unable to recover it. 00:37:46.196 [2024-11-18 12:06:11.906768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.196 [2024-11-18 12:06:11.906801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.196 qpair failed and we were unable to recover it. 
00:37:46.196 [2024-11-18 12:06:11.906958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.196 [2024-11-18 12:06:11.907009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.196 qpair failed and we were unable to recover it. 00:37:46.196 [2024-11-18 12:06:11.907131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.196 [2024-11-18 12:06:11.907191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.196 qpair failed and we were unable to recover it. 00:37:46.196 [2024-11-18 12:06:11.907339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.196 [2024-11-18 12:06:11.907374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.196 qpair failed and we were unable to recover it. 00:37:46.196 [2024-11-18 12:06:11.907517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.196 [2024-11-18 12:06:11.907562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.196 qpair failed and we were unable to recover it. 00:37:46.196 [2024-11-18 12:06:11.907685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.196 [2024-11-18 12:06:11.907719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.196 qpair failed and we were unable to recover it. 
00:37:46.196 [2024-11-18 12:06:11.907852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.196 [2024-11-18 12:06:11.907887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.196 qpair failed and we were unable to recover it. 00:37:46.196 [2024-11-18 12:06:11.907996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.196 [2024-11-18 12:06:11.908030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.196 qpair failed and we were unable to recover it. 00:37:46.196 [2024-11-18 12:06:11.908171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.196 [2024-11-18 12:06:11.908205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.196 qpair failed and we were unable to recover it. 00:37:46.196 [2024-11-18 12:06:11.908346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.196 [2024-11-18 12:06:11.908381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.196 qpair failed and we were unable to recover it. 00:37:46.196 [2024-11-18 12:06:11.908502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.196 [2024-11-18 12:06:11.908538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.196 qpair failed and we were unable to recover it. 
00:37:46.196 [2024-11-18 12:06:11.908659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.196 [2024-11-18 12:06:11.908693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.196 qpair failed and we were unable to recover it. 00:37:46.196 [2024-11-18 12:06:11.908868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.196 [2024-11-18 12:06:11.908903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.196 qpair failed and we were unable to recover it. 00:37:46.196 [2024-11-18 12:06:11.909040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.196 [2024-11-18 12:06:11.909085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.196 qpair failed and we were unable to recover it. 00:37:46.196 [2024-11-18 12:06:11.909220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.196 [2024-11-18 12:06:11.909254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.196 qpair failed and we were unable to recover it. 00:37:46.196 [2024-11-18 12:06:11.909402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.196 [2024-11-18 12:06:11.909450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.196 qpair failed and we were unable to recover it. 
00:37:46.196 [2024-11-18 12:06:11.909609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.196 [2024-11-18 12:06:11.909644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.196 qpair failed and we were unable to recover it. 00:37:46.196 [2024-11-18 12:06:11.909747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.196 [2024-11-18 12:06:11.909781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.196 qpair failed and we were unable to recover it. 00:37:46.196 [2024-11-18 12:06:11.909945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.196 [2024-11-18 12:06:11.910008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.196 qpair failed and we were unable to recover it. 00:37:46.196 [2024-11-18 12:06:11.910115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.196 [2024-11-18 12:06:11.910149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.196 qpair failed and we were unable to recover it. 00:37:46.196 [2024-11-18 12:06:11.910325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.196 [2024-11-18 12:06:11.910374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.196 qpair failed and we were unable to recover it. 
00:37:46.196 [2024-11-18 12:06:11.910519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.196 [2024-11-18 12:06:11.910560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.196 qpair failed and we were unable to recover it.
00:37:46.196 [2024-11-18 12:06:11.910674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.196 [2024-11-18 12:06:11.910708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.196 qpair failed and we were unable to recover it.
00:37:46.196 [2024-11-18 12:06:11.910832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.196 [2024-11-18 12:06:11.910869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.196 qpair failed and we were unable to recover it.
00:37:46.196 [2024-11-18 12:06:11.911024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.196 [2024-11-18 12:06:11.911061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.196 qpair failed and we were unable to recover it.
00:37:46.196 [2024-11-18 12:06:11.911251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.196 [2024-11-18 12:06:11.911324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.196 qpair failed and we were unable to recover it.
00:37:46.196 [2024-11-18 12:06:11.911482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.196 [2024-11-18 12:06:11.911523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.196 qpair failed and we were unable to recover it.
00:37:46.196 [2024-11-18 12:06:11.911762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.196 [2024-11-18 12:06:11.911811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.196 qpair failed and we were unable to recover it.
00:37:46.196 [2024-11-18 12:06:11.911962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.196 [2024-11-18 12:06:11.911997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.196 qpair failed and we were unable to recover it.
00:37:46.196 [2024-11-18 12:06:11.912216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.196 [2024-11-18 12:06:11.912273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.196 qpair failed and we were unable to recover it.
00:37:46.196 [2024-11-18 12:06:11.912433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.196 [2024-11-18 12:06:11.912466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.196 qpair failed and we were unable to recover it.
00:37:46.196 [2024-11-18 12:06:11.912587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.196 [2024-11-18 12:06:11.912621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.196 qpair failed and we were unable to recover it.
00:37:46.196 [2024-11-18 12:06:11.912807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.196 [2024-11-18 12:06:11.912845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.196 qpair failed and we were unable to recover it.
00:37:46.196 [2024-11-18 12:06:11.913007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.196 [2024-11-18 12:06:11.913044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.196 qpair failed and we were unable to recover it.
00:37:46.196 [2024-11-18 12:06:11.913205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.196 [2024-11-18 12:06:11.913262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.196 qpair failed and we were unable to recover it.
00:37:46.196 [2024-11-18 12:06:11.913422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.196 [2024-11-18 12:06:11.913456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.196 qpair failed and we were unable to recover it.
00:37:46.196 [2024-11-18 12:06:11.913590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.197 [2024-11-18 12:06:11.913639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.197 qpair failed and we were unable to recover it.
00:37:46.197 [2024-11-18 12:06:11.913766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.197 [2024-11-18 12:06:11.913815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.197 qpair failed and we were unable to recover it.
00:37:46.197 [2024-11-18 12:06:11.913967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.197 [2024-11-18 12:06:11.914020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.197 qpair failed and we were unable to recover it.
00:37:46.197 [2024-11-18 12:06:11.914185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.197 [2024-11-18 12:06:11.914218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.197 qpair failed and we were unable to recover it.
00:37:46.197 [2024-11-18 12:06:11.914377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.197 [2024-11-18 12:06:11.914414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.197 qpair failed and we were unable to recover it.
00:37:46.197 [2024-11-18 12:06:11.914583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.197 [2024-11-18 12:06:11.914618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.197 qpair failed and we were unable to recover it.
00:37:46.197 [2024-11-18 12:06:11.914755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.197 [2024-11-18 12:06:11.914808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.197 qpair failed and we were unable to recover it.
00:37:46.197 [2024-11-18 12:06:11.914984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.197 [2024-11-18 12:06:11.915021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.197 qpair failed and we were unable to recover it.
00:37:46.197 [2024-11-18 12:06:11.915143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.197 [2024-11-18 12:06:11.915182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.197 qpair failed and we were unable to recover it.
00:37:46.197 [2024-11-18 12:06:11.915360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.197 [2024-11-18 12:06:11.915397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.197 qpair failed and we were unable to recover it.
00:37:46.197 [2024-11-18 12:06:11.915547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.197 [2024-11-18 12:06:11.915595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.197 qpair failed and we were unable to recover it.
00:37:46.197 [2024-11-18 12:06:11.915730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.197 [2024-11-18 12:06:11.915778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.197 qpair failed and we were unable to recover it.
00:37:46.197 [2024-11-18 12:06:11.915925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.197 [2024-11-18 12:06:11.915961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.197 qpair failed and we were unable to recover it.
00:37:46.197 [2024-11-18 12:06:11.916154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.197 [2024-11-18 12:06:11.916211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.197 qpair failed and we were unable to recover it.
00:37:46.197 [2024-11-18 12:06:11.916359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.197 [2024-11-18 12:06:11.916411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.197 qpair failed and we were unable to recover it.
00:37:46.197 [2024-11-18 12:06:11.916583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.197 [2024-11-18 12:06:11.916618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.197 qpair failed and we were unable to recover it.
00:37:46.197 [2024-11-18 12:06:11.916730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.197 [2024-11-18 12:06:11.916780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.197 qpair failed and we were unable to recover it.
00:37:46.197 [2024-11-18 12:06:11.916998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.197 [2024-11-18 12:06:11.917035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.197 qpair failed and we were unable to recover it.
00:37:46.197 [2024-11-18 12:06:11.917178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.197 [2024-11-18 12:06:11.917215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.197 qpair failed and we were unable to recover it.
00:37:46.197 [2024-11-18 12:06:11.917333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.197 [2024-11-18 12:06:11.917370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.197 qpair failed and we were unable to recover it.
00:37:46.197 [2024-11-18 12:06:11.917544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.197 [2024-11-18 12:06:11.917579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.197 qpair failed and we were unable to recover it.
00:37:46.197 [2024-11-18 12:06:11.917689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.197 [2024-11-18 12:06:11.917723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.197 qpair failed and we were unable to recover it.
00:37:46.197 [2024-11-18 12:06:11.917876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.197 [2024-11-18 12:06:11.917915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.197 qpair failed and we were unable to recover it.
00:37:46.197 [2024-11-18 12:06:11.918109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.197 [2024-11-18 12:06:11.918146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.197 qpair failed and we were unable to recover it.
00:37:46.197 [2024-11-18 12:06:11.918309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.197 [2024-11-18 12:06:11.918363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.197 qpair failed and we were unable to recover it.
00:37:46.197 [2024-11-18 12:06:11.918523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.197 [2024-11-18 12:06:11.918569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.197 qpair failed and we were unable to recover it.
00:37:46.197 [2024-11-18 12:06:11.918728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.197 [2024-11-18 12:06:11.918766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.197 qpair failed and we were unable to recover it.
00:37:46.197 [2024-11-18 12:06:11.918945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.197 [2024-11-18 12:06:11.919003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.197 qpair failed and we were unable to recover it.
00:37:46.197 [2024-11-18 12:06:11.919190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.197 [2024-11-18 12:06:11.919247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.197 qpair failed and we were unable to recover it.
00:37:46.197 [2024-11-18 12:06:11.919391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.197 [2024-11-18 12:06:11.919428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.197 qpair failed and we were unable to recover it.
00:37:46.197 [2024-11-18 12:06:11.919582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.197 [2024-11-18 12:06:11.919616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.197 qpair failed and we were unable to recover it.
00:37:46.197 [2024-11-18 12:06:11.919761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.197 [2024-11-18 12:06:11.919814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.197 qpair failed and we were unable to recover it.
00:37:46.197 [2024-11-18 12:06:11.920016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.197 [2024-11-18 12:06:11.920052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.197 qpair failed and we were unable to recover it.
00:37:46.197 [2024-11-18 12:06:11.920188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.197 [2024-11-18 12:06:11.920237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.197 qpair failed and we were unable to recover it.
00:37:46.197 [2024-11-18 12:06:11.920380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.197 [2024-11-18 12:06:11.920418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.197 qpair failed and we were unable to recover it.
00:37:46.197 [2024-11-18 12:06:11.920563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.197 [2024-11-18 12:06:11.920599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.197 qpair failed and we were unable to recover it.
00:37:46.197 [2024-11-18 12:06:11.920737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.197 [2024-11-18 12:06:11.920788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.197 qpair failed and we were unable to recover it.
00:37:46.197 [2024-11-18 12:06:11.920930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.197 [2024-11-18 12:06:11.920967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.198 qpair failed and we were unable to recover it.
00:37:46.198 [2024-11-18 12:06:11.921131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.198 [2024-11-18 12:06:11.921167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.198 qpair failed and we were unable to recover it.
00:37:46.198 [2024-11-18 12:06:11.921288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.198 [2024-11-18 12:06:11.921321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.198 qpair failed and we were unable to recover it.
00:37:46.198 [2024-11-18 12:06:11.921451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.198 [2024-11-18 12:06:11.921510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.198 qpair failed and we were unable to recover it.
00:37:46.198 [2024-11-18 12:06:11.921658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.198 [2024-11-18 12:06:11.921690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.198 qpair failed and we were unable to recover it.
00:37:46.198 [2024-11-18 12:06:11.921845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.198 [2024-11-18 12:06:11.921882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.198 qpair failed and we were unable to recover it.
00:37:46.198 [2024-11-18 12:06:11.922038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.198 [2024-11-18 12:06:11.922089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.198 qpair failed and we were unable to recover it.
00:37:46.198 [2024-11-18 12:06:11.922222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.198 [2024-11-18 12:06:11.922259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.198 qpair failed and we were unable to recover it.
00:37:46.198 [2024-11-18 12:06:11.922403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.198 [2024-11-18 12:06:11.922440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.198 qpair failed and we were unable to recover it.
00:37:46.198 [2024-11-18 12:06:11.922629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.198 [2024-11-18 12:06:11.922677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.198 qpair failed and we were unable to recover it.
00:37:46.198 [2024-11-18 12:06:11.922883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.198 [2024-11-18 12:06:11.922923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.198 qpair failed and we were unable to recover it.
00:37:46.198 [2024-11-18 12:06:11.923051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.198 [2024-11-18 12:06:11.923089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.198 qpair failed and we were unable to recover it.
00:37:46.198 [2024-11-18 12:06:11.923244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.198 [2024-11-18 12:06:11.923283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.198 qpair failed and we were unable to recover it.
00:37:46.198 [2024-11-18 12:06:11.923429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.198 [2024-11-18 12:06:11.923477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.198 qpair failed and we were unable to recover it.
00:37:46.198 [2024-11-18 12:06:11.923631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.198 [2024-11-18 12:06:11.923666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.198 qpair failed and we were unable to recover it.
00:37:46.198 [2024-11-18 12:06:11.923823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.198 [2024-11-18 12:06:11.923875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.198 qpair failed and we were unable to recover it.
00:37:46.198 [2024-11-18 12:06:11.924054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.198 [2024-11-18 12:06:11.924088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.198 qpair failed and we were unable to recover it.
00:37:46.198 [2024-11-18 12:06:11.924249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.198 [2024-11-18 12:06:11.924283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.198 qpair failed and we were unable to recover it.
00:37:46.198 [2024-11-18 12:06:11.924391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.198 [2024-11-18 12:06:11.924426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.198 qpair failed and we were unable to recover it.
00:37:46.198 [2024-11-18 12:06:11.924642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.198 [2024-11-18 12:06:11.924676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.198 qpair failed and we were unable to recover it.
00:37:46.198 [2024-11-18 12:06:11.924810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.198 [2024-11-18 12:06:11.924844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.198 qpair failed and we were unable to recover it.
00:37:46.198 [2024-11-18 12:06:11.924982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.198 [2024-11-18 12:06:11.925016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.198 qpair failed and we were unable to recover it.
00:37:46.198 [2024-11-18 12:06:11.925218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.198 [2024-11-18 12:06:11.925274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.198 qpair failed and we were unable to recover it.
00:37:46.198 [2024-11-18 12:06:11.925390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.198 [2024-11-18 12:06:11.925446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.198 qpair failed and we were unable to recover it.
00:37:46.198 [2024-11-18 12:06:11.925596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.198 [2024-11-18 12:06:11.925630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.198 qpair failed and we were unable to recover it.
00:37:46.198 [2024-11-18 12:06:11.925750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.198 [2024-11-18 12:06:11.925829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.198 qpair failed and we were unable to recover it.
00:37:46.198 [2024-11-18 12:06:11.925957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.198 [2024-11-18 12:06:11.925996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.198 qpair failed and we were unable to recover it.
00:37:46.198 [2024-11-18 12:06:11.926201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.198 [2024-11-18 12:06:11.926239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.198 qpair failed and we were unable to recover it.
00:37:46.198 [2024-11-18 12:06:11.926380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.198 [2024-11-18 12:06:11.926431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.198 qpair failed and we were unable to recover it.
00:37:46.198 [2024-11-18 12:06:11.926564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.198 [2024-11-18 12:06:11.926613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.198 qpair failed and we were unable to recover it.
00:37:46.198 [2024-11-18 12:06:11.926759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.198 [2024-11-18 12:06:11.926815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.198 qpair failed and we were unable to recover it.
00:37:46.198 [2024-11-18 12:06:11.927031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.198 [2024-11-18 12:06:11.927070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.198 qpair failed and we were unable to recover it.
00:37:46.198 [2024-11-18 12:06:11.927235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.198 [2024-11-18 12:06:11.927333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.198 qpair failed and we were unable to recover it.
00:37:46.198 [2024-11-18 12:06:11.927481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.198 [2024-11-18 12:06:11.927536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.198 qpair failed and we were unable to recover it.
00:37:46.199 [2024-11-18 12:06:11.927710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.199 [2024-11-18 12:06:11.927757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.199 qpair failed and we were unable to recover it.
00:37:46.199 [2024-11-18 12:06:11.927873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.199 [2024-11-18 12:06:11.927909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.199 qpair failed and we were unable to recover it.
00:37:46.199 [2024-11-18 12:06:11.928192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.199 [2024-11-18 12:06:11.928249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.199 qpair failed and we were unable to recover it.
00:37:46.199 [2024-11-18 12:06:11.928409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.199 [2024-11-18 12:06:11.928443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.199 qpair failed and we were unable to recover it.
00:37:46.199 [2024-11-18 12:06:11.928591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.199 [2024-11-18 12:06:11.928627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.199 qpair failed and we were unable to recover it.
00:37:46.199 [2024-11-18 12:06:11.928816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.199 [2024-11-18 12:06:11.928864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.199 qpair failed and we were unable to recover it.
00:37:46.199 [2024-11-18 12:06:11.929019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.199 [2024-11-18 12:06:11.929087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.199 qpair failed and we were unable to recover it.
00:37:46.199 [2024-11-18 12:06:11.929226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.199 [2024-11-18 12:06:11.929283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.199 qpair failed and we were unable to recover it.
00:37:46.199 [2024-11-18 12:06:11.929422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.199 [2024-11-18 12:06:11.929457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.199 qpair failed and we were unable to recover it.
00:37:46.199 [2024-11-18 12:06:11.929590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.199 [2024-11-18 12:06:11.929627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.199 qpair failed and we were unable to recover it.
00:37:46.199 [2024-11-18 12:06:11.929765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.199 [2024-11-18 12:06:11.929799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.199 qpair failed and we were unable to recover it.
00:37:46.199 [2024-11-18 12:06:11.929954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.199 [2024-11-18 12:06:11.929990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.199 qpair failed and we were unable to recover it.
00:37:46.199 [2024-11-18 12:06:11.930248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.199 [2024-11-18 12:06:11.930286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.199 qpair failed and we were unable to recover it.
00:37:46.199 [2024-11-18 12:06:11.930448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.199 [2024-11-18 12:06:11.930486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.199 qpair failed and we were unable to recover it.
00:37:46.199 [2024-11-18 12:06:11.930656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.199 [2024-11-18 12:06:11.930693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.199 qpair failed and we were unable to recover it.
00:37:46.199 [2024-11-18 12:06:11.930832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.199 [2024-11-18 12:06:11.930871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.199 qpair failed and we were unable to recover it.
00:37:46.199 [2024-11-18 12:06:11.930981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.199 [2024-11-18 12:06:11.931018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.199 qpair failed and we were unable to recover it.
00:37:46.199 [2024-11-18 12:06:11.931226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.199 [2024-11-18 12:06:11.931260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.199 qpair failed and we were unable to recover it.
00:37:46.199 [2024-11-18 12:06:11.931412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.199 [2024-11-18 12:06:11.931446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.199 qpair failed and we were unable to recover it.
00:37:46.199 [2024-11-18 12:06:11.931561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.199 [2024-11-18 12:06:11.931595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.199 qpair failed and we were unable to recover it.
00:37:46.199 [2024-11-18 12:06:11.931747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.199 [2024-11-18 12:06:11.931808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.199 qpair failed and we were unable to recover it.
00:37:46.199 [2024-11-18 12:06:11.931965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.199 [2024-11-18 12:06:11.932026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.199 qpair failed and we were unable to recover it.
00:37:46.199 [2024-11-18 12:06:11.932199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.199 [2024-11-18 12:06:11.932257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.199 qpair failed and we were unable to recover it.
00:37:46.199 [2024-11-18 12:06:11.932437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.199 [2024-11-18 12:06:11.932476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.199 qpair failed and we were unable to recover it.
00:37:46.199 [2024-11-18 12:06:11.932648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.199 [2024-11-18 12:06:11.932685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.199 qpair failed and we were unable to recover it.
00:37:46.199 [2024-11-18 12:06:11.932812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.199 [2024-11-18 12:06:11.932866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.199 qpair failed and we were unable to recover it.
00:37:46.199 [2024-11-18 12:06:11.933022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.199 [2024-11-18 12:06:11.933074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.199 qpair failed and we were unable to recover it.
00:37:46.199 [2024-11-18 12:06:11.933173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.199 [2024-11-18 12:06:11.933208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.199 qpair failed and we were unable to recover it.
00:37:46.199 [2024-11-18 12:06:11.933384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.199 [2024-11-18 12:06:11.933432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.199 qpair failed and we were unable to recover it.
00:37:46.199 [2024-11-18 12:06:11.933577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.199 [2024-11-18 12:06:11.933625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.199 qpair failed and we were unable to recover it. 00:37:46.199 [2024-11-18 12:06:11.933781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.199 [2024-11-18 12:06:11.933821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.199 qpair failed and we were unable to recover it. 00:37:46.199 [2024-11-18 12:06:11.933933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.199 [2024-11-18 12:06:11.933977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.199 qpair failed and we were unable to recover it. 00:37:46.199 [2024-11-18 12:06:11.934210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.199 [2024-11-18 12:06:11.934268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.199 qpair failed and we were unable to recover it. 00:37:46.199 [2024-11-18 12:06:11.934417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.199 [2024-11-18 12:06:11.934457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.199 qpair failed and we were unable to recover it. 
00:37:46.199 [2024-11-18 12:06:11.934657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.199 [2024-11-18 12:06:11.934711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.199 qpair failed and we were unable to recover it. 00:37:46.199 [2024-11-18 12:06:11.934834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.199 [2024-11-18 12:06:11.934888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.199 qpair failed and we were unable to recover it. 00:37:46.199 [2024-11-18 12:06:11.935011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.199 [2024-11-18 12:06:11.935065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.199 qpair failed and we were unable to recover it. 00:37:46.199 [2024-11-18 12:06:11.935196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.200 [2024-11-18 12:06:11.935230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.200 qpair failed and we were unable to recover it. 00:37:46.200 [2024-11-18 12:06:11.935353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.200 [2024-11-18 12:06:11.935401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.200 qpair failed and we were unable to recover it. 
00:37:46.200 [2024-11-18 12:06:11.935575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.200 [2024-11-18 12:06:11.935642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.200 qpair failed and we were unable to recover it. 00:37:46.200 [2024-11-18 12:06:11.935758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.200 [2024-11-18 12:06:11.935794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.200 qpair failed and we were unable to recover it. 00:37:46.200 [2024-11-18 12:06:11.935925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.200 [2024-11-18 12:06:11.935958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.200 qpair failed and we were unable to recover it. 00:37:46.200 [2024-11-18 12:06:11.936098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.200 [2024-11-18 12:06:11.936133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.200 qpair failed and we were unable to recover it. 00:37:46.200 [2024-11-18 12:06:11.936274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.200 [2024-11-18 12:06:11.936309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.200 qpair failed and we were unable to recover it. 
00:37:46.200 [2024-11-18 12:06:11.936438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.200 [2024-11-18 12:06:11.936471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.200 qpair failed and we were unable to recover it. 00:37:46.200 [2024-11-18 12:06:11.936596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.200 [2024-11-18 12:06:11.936630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.200 qpair failed and we were unable to recover it. 00:37:46.200 [2024-11-18 12:06:11.936749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.200 [2024-11-18 12:06:11.936815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.200 qpair failed and we were unable to recover it. 00:37:46.200 [2024-11-18 12:06:11.936968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.200 [2024-11-18 12:06:11.937008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.200 qpair failed and we were unable to recover it. 00:37:46.200 [2024-11-18 12:06:11.937143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.200 [2024-11-18 12:06:11.937176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.200 qpair failed and we were unable to recover it. 
00:37:46.200 [2024-11-18 12:06:11.937364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.200 [2024-11-18 12:06:11.937401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.200 qpair failed and we were unable to recover it. 00:37:46.200 [2024-11-18 12:06:11.937534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.200 [2024-11-18 12:06:11.937568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.200 qpair failed and we were unable to recover it. 00:37:46.200 [2024-11-18 12:06:11.937701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.200 [2024-11-18 12:06:11.937740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.200 qpair failed and we were unable to recover it. 00:37:46.200 [2024-11-18 12:06:11.937882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.200 [2024-11-18 12:06:11.937919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.200 qpair failed and we were unable to recover it. 00:37:46.200 [2024-11-18 12:06:11.938108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.200 [2024-11-18 12:06:11.938175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.200 qpair failed and we were unable to recover it. 
00:37:46.200 [2024-11-18 12:06:11.938296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.200 [2024-11-18 12:06:11.938333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.200 qpair failed and we were unable to recover it. 00:37:46.200 [2024-11-18 12:06:11.938484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.200 [2024-11-18 12:06:11.938541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.200 qpair failed and we were unable to recover it. 00:37:46.200 [2024-11-18 12:06:11.938658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.200 [2024-11-18 12:06:11.938715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.200 qpair failed and we were unable to recover it. 00:37:46.200 [2024-11-18 12:06:11.938884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.200 [2024-11-18 12:06:11.938969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.200 qpair failed and we were unable to recover it. 00:37:46.200 [2024-11-18 12:06:11.939160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.200 [2024-11-18 12:06:11.939224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.200 qpair failed and we were unable to recover it. 
00:37:46.200 [2024-11-18 12:06:11.939344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.200 [2024-11-18 12:06:11.939382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.200 qpair failed and we were unable to recover it. 00:37:46.200 [2024-11-18 12:06:11.939539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.200 [2024-11-18 12:06:11.939573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.200 qpair failed and we were unable to recover it. 00:37:46.200 [2024-11-18 12:06:11.939736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.200 [2024-11-18 12:06:11.939775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.200 qpair failed and we were unable to recover it. 00:37:46.200 [2024-11-18 12:06:11.939926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.200 [2024-11-18 12:06:11.939963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.200 qpair failed and we were unable to recover it. 00:37:46.200 [2024-11-18 12:06:11.940114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.200 [2024-11-18 12:06:11.940152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.200 qpair failed and we were unable to recover it. 
00:37:46.200 [2024-11-18 12:06:11.940292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.200 [2024-11-18 12:06:11.940328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.200 qpair failed and we were unable to recover it. 00:37:46.200 [2024-11-18 12:06:11.940535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.200 [2024-11-18 12:06:11.940583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.200 qpair failed and we were unable to recover it. 00:37:46.200 [2024-11-18 12:06:11.940765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.200 [2024-11-18 12:06:11.940833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.200 qpair failed and we were unable to recover it. 00:37:46.200 [2024-11-18 12:06:11.940975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.200 [2024-11-18 12:06:11.941029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.200 qpair failed and we were unable to recover it. 00:37:46.200 [2024-11-18 12:06:11.941250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.200 [2024-11-18 12:06:11.941288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.200 qpair failed and we were unable to recover it. 
00:37:46.200 [2024-11-18 12:06:11.941439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.200 [2024-11-18 12:06:11.941497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.200 qpair failed and we were unable to recover it. 00:37:46.200 [2024-11-18 12:06:11.941629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.200 [2024-11-18 12:06:11.941662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.200 qpair failed and we were unable to recover it. 00:37:46.200 [2024-11-18 12:06:11.941765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.200 [2024-11-18 12:06:11.941821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.200 qpair failed and we were unable to recover it. 00:37:46.200 [2024-11-18 12:06:11.942014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.200 [2024-11-18 12:06:11.942072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.200 qpair failed and we were unable to recover it. 00:37:46.200 [2024-11-18 12:06:11.942215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.200 [2024-11-18 12:06:11.942253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.200 qpair failed and we were unable to recover it. 
00:37:46.200 [2024-11-18 12:06:11.942406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.200 [2024-11-18 12:06:11.942445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.200 qpair failed and we were unable to recover it. 00:37:46.201 [2024-11-18 12:06:11.942634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.201 [2024-11-18 12:06:11.942669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.201 qpair failed and we were unable to recover it. 00:37:46.201 [2024-11-18 12:06:11.942863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.201 [2024-11-18 12:06:11.942900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.201 qpair failed and we were unable to recover it. 00:37:46.201 [2024-11-18 12:06:11.943164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.201 [2024-11-18 12:06:11.943225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.201 qpair failed and we were unable to recover it. 00:37:46.201 [2024-11-18 12:06:11.943376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.201 [2024-11-18 12:06:11.943413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.201 qpair failed and we were unable to recover it. 
00:37:46.201 [2024-11-18 12:06:11.943588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.201 [2024-11-18 12:06:11.943637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.201 qpair failed and we were unable to recover it. 00:37:46.201 [2024-11-18 12:06:11.943765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.201 [2024-11-18 12:06:11.943800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.201 qpair failed and we were unable to recover it. 00:37:46.201 [2024-11-18 12:06:11.943959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.201 [2024-11-18 12:06:11.944001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.201 qpair failed and we were unable to recover it. 00:37:46.201 [2024-11-18 12:06:11.944214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.201 [2024-11-18 12:06:11.944287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.201 qpair failed and we were unable to recover it. 00:37:46.201 [2024-11-18 12:06:11.944420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.201 [2024-11-18 12:06:11.944471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.201 qpair failed and we were unable to recover it. 
00:37:46.201 [2024-11-18 12:06:11.944616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.201 [2024-11-18 12:06:11.944650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.201 qpair failed and we were unable to recover it. 00:37:46.201 [2024-11-18 12:06:11.944859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.201 [2024-11-18 12:06:11.944933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.201 qpair failed and we were unable to recover it. 00:37:46.201 [2024-11-18 12:06:11.945139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.201 [2024-11-18 12:06:11.945240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.201 qpair failed and we were unable to recover it. 00:37:46.201 [2024-11-18 12:06:11.945409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.201 [2024-11-18 12:06:11.945443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.201 qpair failed and we were unable to recover it. 00:37:46.201 [2024-11-18 12:06:11.945620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.201 [2024-11-18 12:06:11.945655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.201 qpair failed and we were unable to recover it. 
00:37:46.201 [2024-11-18 12:06:11.945806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.201 [2024-11-18 12:06:11.945843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.201 qpair failed and we were unable to recover it. 00:37:46.201 [2024-11-18 12:06:11.946019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.201 [2024-11-18 12:06:11.946057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.201 qpair failed and we were unable to recover it. 00:37:46.201 [2024-11-18 12:06:11.946209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.201 [2024-11-18 12:06:11.946249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.201 qpair failed and we were unable to recover it. 00:37:46.201 [2024-11-18 12:06:11.946411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.201 [2024-11-18 12:06:11.946444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.201 qpair failed and we were unable to recover it. 00:37:46.201 [2024-11-18 12:06:11.946599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.201 [2024-11-18 12:06:11.946633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.201 qpair failed and we were unable to recover it. 
00:37:46.201 [2024-11-18 12:06:11.946745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.201 [2024-11-18 12:06:11.946797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.201 qpair failed and we were unable to recover it. 00:37:46.201 [2024-11-18 12:06:11.947042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.201 [2024-11-18 12:06:11.947080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.201 qpair failed and we were unable to recover it. 00:37:46.201 [2024-11-18 12:06:11.947234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.201 [2024-11-18 12:06:11.947272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.201 qpair failed and we were unable to recover it. 00:37:46.201 [2024-11-18 12:06:11.947402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.201 [2024-11-18 12:06:11.947441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.201 qpair failed and we were unable to recover it. 00:37:46.201 [2024-11-18 12:06:11.947647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.201 [2024-11-18 12:06:11.947696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.201 qpair failed and we were unable to recover it. 
00:37:46.201 [2024-11-18 12:06:11.947859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.201 [2024-11-18 12:06:11.947908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.201 qpair failed and we were unable to recover it. 00:37:46.201 [2024-11-18 12:06:11.948105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.201 [2024-11-18 12:06:11.948165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.201 qpair failed and we were unable to recover it. 00:37:46.201 [2024-11-18 12:06:11.948357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.201 [2024-11-18 12:06:11.948415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.201 qpair failed and we were unable to recover it. 00:37:46.201 [2024-11-18 12:06:11.948585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.201 [2024-11-18 12:06:11.948620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.201 qpair failed and we were unable to recover it. 00:37:46.201 [2024-11-18 12:06:11.948776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.201 [2024-11-18 12:06:11.948839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.201 qpair failed and we were unable to recover it. 
00:37:46.201 [2024-11-18 12:06:11.949088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.201 [2024-11-18 12:06:11.949154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.201 qpair failed and we were unable to recover it.
00:37:46.201 [2024-11-18 12:06:11.949438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.201 [2024-11-18 12:06:11.949510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.201 qpair failed and we were unable to recover it.
00:37:46.201 [2024-11-18 12:06:11.949672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.201 [2024-11-18 12:06:11.949706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.201 qpair failed and we were unable to recover it.
00:37:46.201 [2024-11-18 12:06:11.949845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.201 [2024-11-18 12:06:11.949879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.201 qpair failed and we were unable to recover it.
00:37:46.201 [2024-11-18 12:06:11.950122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.201 [2024-11-18 12:06:11.950178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.201 qpair failed and we were unable to recover it.
00:37:46.201 [2024-11-18 12:06:11.950331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.201 [2024-11-18 12:06:11.950368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.201 qpair failed and we were unable to recover it.
00:37:46.201 [2024-11-18 12:06:11.950520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.201 [2024-11-18 12:06:11.950568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.201 qpair failed and we were unable to recover it.
00:37:46.201 [2024-11-18 12:06:11.950689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.201 [2024-11-18 12:06:11.950732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.201 qpair failed and we were unable to recover it.
00:37:46.201 [2024-11-18 12:06:11.950873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.201 [2024-11-18 12:06:11.950926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.202 qpair failed and we were unable to recover it.
00:37:46.202 [2024-11-18 12:06:11.951125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.202 [2024-11-18 12:06:11.951183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.202 qpair failed and we were unable to recover it.
00:37:46.202 [2024-11-18 12:06:11.951311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.202 [2024-11-18 12:06:11.951364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.202 qpair failed and we were unable to recover it.
00:37:46.202 [2024-11-18 12:06:11.951563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.202 [2024-11-18 12:06:11.951600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.202 qpair failed and we were unable to recover it.
00:37:46.202 [2024-11-18 12:06:11.951707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.202 [2024-11-18 12:06:11.951743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.202 qpair failed and we were unable to recover it.
00:37:46.202 [2024-11-18 12:06:11.951912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.202 [2024-11-18 12:06:11.951963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.202 qpair failed and we were unable to recover it.
00:37:46.202 [2024-11-18 12:06:11.952139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.202 [2024-11-18 12:06:11.952176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.202 qpair failed and we were unable to recover it.
00:37:46.202 [2024-11-18 12:06:11.952321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.202 [2024-11-18 12:06:11.952359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.202 qpair failed and we were unable to recover it.
00:37:46.202 [2024-11-18 12:06:11.952498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.202 [2024-11-18 12:06:11.952532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.202 qpair failed and we were unable to recover it.
00:37:46.202 [2024-11-18 12:06:11.952660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.202 [2024-11-18 12:06:11.952693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.202 qpair failed and we were unable to recover it.
00:37:46.202 [2024-11-18 12:06:11.952825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.202 [2024-11-18 12:06:11.952858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.202 qpair failed and we were unable to recover it.
00:37:46.202 [2024-11-18 12:06:11.953016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.202 [2024-11-18 12:06:11.953053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.202 qpair failed and we were unable to recover it.
00:37:46.202 [2024-11-18 12:06:11.953204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.202 [2024-11-18 12:06:11.953243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.202 qpair failed and we were unable to recover it.
00:37:46.202 [2024-11-18 12:06:11.953399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.202 [2024-11-18 12:06:11.953450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.202 qpair failed and we were unable to recover it.
00:37:46.202 [2024-11-18 12:06:11.953600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.202 [2024-11-18 12:06:11.953648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.202 qpair failed and we were unable to recover it.
00:37:46.202 [2024-11-18 12:06:11.953812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.202 [2024-11-18 12:06:11.953851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.202 qpair failed and we were unable to recover it.
00:37:46.202 [2024-11-18 12:06:11.953997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.202 [2024-11-18 12:06:11.954034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.202 qpair failed and we were unable to recover it.
00:37:46.202 [2024-11-18 12:06:11.954178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.202 [2024-11-18 12:06:11.954215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.202 qpair failed and we were unable to recover it.
00:37:46.202 [2024-11-18 12:06:11.954405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.202 [2024-11-18 12:06:11.954442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.202 qpair failed and we were unable to recover it.
00:37:46.202 [2024-11-18 12:06:11.954632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.202 [2024-11-18 12:06:11.954681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.202 qpair failed and we were unable to recover it.
00:37:46.202 [2024-11-18 12:06:11.954848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.202 [2024-11-18 12:06:11.954886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.202 qpair failed and we were unable to recover it.
00:37:46.202 [2024-11-18 12:06:11.955147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.202 [2024-11-18 12:06:11.955204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.202 qpair failed and we were unable to recover it.
00:37:46.202 [2024-11-18 12:06:11.955348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.202 [2024-11-18 12:06:11.955385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.202 qpair failed and we were unable to recover it.
00:37:46.202 [2024-11-18 12:06:11.955543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.202 [2024-11-18 12:06:11.955579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.202 qpair failed and we were unable to recover it.
00:37:46.202 [2024-11-18 12:06:11.955738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.202 [2024-11-18 12:06:11.955776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.202 qpair failed and we were unable to recover it.
00:37:46.202 [2024-11-18 12:06:11.955992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.202 [2024-11-18 12:06:11.956101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.202 qpair failed and we were unable to recover it.
00:37:46.202 [2024-11-18 12:06:11.956291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.202 [2024-11-18 12:06:11.956346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.202 qpair failed and we were unable to recover it.
00:37:46.202 [2024-11-18 12:06:11.956483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.202 [2024-11-18 12:06:11.956526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.202 qpair failed and we were unable to recover it.
00:37:46.202 [2024-11-18 12:06:11.956661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.202 [2024-11-18 12:06:11.956696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.202 qpair failed and we were unable to recover it.
00:37:46.202 [2024-11-18 12:06:11.956829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.202 [2024-11-18 12:06:11.956881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.202 qpair failed and we were unable to recover it.
00:37:46.202 [2024-11-18 12:06:11.957011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.202 [2024-11-18 12:06:11.957045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.202 qpair failed and we were unable to recover it.
00:37:46.202 [2024-11-18 12:06:11.957179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.202 [2024-11-18 12:06:11.957214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.202 qpair failed and we were unable to recover it.
00:37:46.202 [2024-11-18 12:06:11.957382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.202 [2024-11-18 12:06:11.957415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.202 qpair failed and we were unable to recover it.
00:37:46.202 [2024-11-18 12:06:11.957548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.202 [2024-11-18 12:06:11.957584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.202 qpair failed and we were unable to recover it.
00:37:46.202 [2024-11-18 12:06:11.957711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.202 [2024-11-18 12:06:11.957759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.202 qpair failed and we were unable to recover it.
00:37:46.202 [2024-11-18 12:06:11.957891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.202 [2024-11-18 12:06:11.957928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.202 qpair failed and we were unable to recover it.
00:37:46.202 [2024-11-18 12:06:11.958083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.202 [2024-11-18 12:06:11.958121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.202 qpair failed and we were unable to recover it.
00:37:46.202 [2024-11-18 12:06:11.958367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.202 [2024-11-18 12:06:11.958405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.202 qpair failed and we were unable to recover it.
00:37:46.202 [2024-11-18 12:06:11.958568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.203 [2024-11-18 12:06:11.958603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.203 qpair failed and we were unable to recover it.
00:37:46.203 [2024-11-18 12:06:11.958740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.203 [2024-11-18 12:06:11.958801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.203 qpair failed and we were unable to recover it.
00:37:46.203 [2024-11-18 12:06:11.958956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.203 [2024-11-18 12:06:11.959006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.203 qpair failed and we were unable to recover it.
00:37:46.203 [2024-11-18 12:06:11.959159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.203 [2024-11-18 12:06:11.959210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.203 qpair failed and we were unable to recover it.
00:37:46.203 [2024-11-18 12:06:11.959316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.203 [2024-11-18 12:06:11.959350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.203 qpair failed and we were unable to recover it.
00:37:46.203 [2024-11-18 12:06:11.959534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.203 [2024-11-18 12:06:11.959582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.203 qpair failed and we were unable to recover it.
00:37:46.203 [2024-11-18 12:06:11.959719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.203 [2024-11-18 12:06:11.959767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.203 qpair failed and we were unable to recover it.
00:37:46.203 [2024-11-18 12:06:11.959912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.203 [2024-11-18 12:06:11.959949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.203 qpair failed and we were unable to recover it.
00:37:46.203 [2024-11-18 12:06:11.960179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.203 [2024-11-18 12:06:11.960214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.203 qpair failed and we were unable to recover it.
00:37:46.203 [2024-11-18 12:06:11.960353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.203 [2024-11-18 12:06:11.960390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.203 qpair failed and we were unable to recover it.
00:37:46.203 [2024-11-18 12:06:11.960566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.203 [2024-11-18 12:06:11.960615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.203 qpair failed and we were unable to recover it.
00:37:46.203 [2024-11-18 12:06:11.960769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.203 [2024-11-18 12:06:11.960823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.203 qpair failed and we were unable to recover it.
00:37:46.203 [2024-11-18 12:06:11.960982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.203 [2024-11-18 12:06:11.961040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.203 qpair failed and we were unable to recover it.
00:37:46.203 [2024-11-18 12:06:11.961183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.203 [2024-11-18 12:06:11.961249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.203 qpair failed and we were unable to recover it.
00:37:46.203 [2024-11-18 12:06:11.961434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.203 [2024-11-18 12:06:11.961469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.203 qpair failed and we were unable to recover it.
00:37:46.203 [2024-11-18 12:06:11.961601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.203 [2024-11-18 12:06:11.961655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.203 qpair failed and we were unable to recover it.
00:37:46.203 [2024-11-18 12:06:11.961816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.203 [2024-11-18 12:06:11.961850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.203 qpair failed and we were unable to recover it.
00:37:46.203 [2024-11-18 12:06:11.961955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.203 [2024-11-18 12:06:11.961989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.203 qpair failed and we were unable to recover it.
00:37:46.203 [2024-11-18 12:06:11.962162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.203 [2024-11-18 12:06:11.962209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.203 qpair failed and we were unable to recover it.
00:37:46.203 [2024-11-18 12:06:11.962352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.203 [2024-11-18 12:06:11.962388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.203 qpair failed and we were unable to recover it.
00:37:46.203 [2024-11-18 12:06:11.962587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.203 [2024-11-18 12:06:11.962641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.203 qpair failed and we were unable to recover it.
00:37:46.203 [2024-11-18 12:06:11.962827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.203 [2024-11-18 12:06:11.962867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.203 qpair failed and we were unable to recover it.
00:37:46.203 [2024-11-18 12:06:11.963016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.203 [2024-11-18 12:06:11.963072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.203 qpair failed and we were unable to recover it.
00:37:46.203 [2024-11-18 12:06:11.963328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.203 [2024-11-18 12:06:11.963386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.203 qpair failed and we were unable to recover it.
00:37:46.203 [2024-11-18 12:06:11.963526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.203 [2024-11-18 12:06:11.963562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.203 qpair failed and we were unable to recover it.
00:37:46.203 [2024-11-18 12:06:11.963675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.203 [2024-11-18 12:06:11.963710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.203 qpair failed and we were unable to recover it.
00:37:46.203 [2024-11-18 12:06:11.963892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.203 [2024-11-18 12:06:11.963945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.203 qpair failed and we were unable to recover it.
00:37:46.203 [2024-11-18 12:06:11.964226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.203 [2024-11-18 12:06:11.964286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.203 qpair failed and we were unable to recover it.
00:37:46.203 [2024-11-18 12:06:11.964463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.203 [2024-11-18 12:06:11.964509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.203 qpair failed and we were unable to recover it.
00:37:46.203 [2024-11-18 12:06:11.964682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.203 [2024-11-18 12:06:11.964720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.203 qpair failed and we were unable to recover it.
00:37:46.203 [2024-11-18 12:06:11.964948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.203 [2024-11-18 12:06:11.965003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.203 qpair failed and we were unable to recover it.
00:37:46.203 [2024-11-18 12:06:11.965191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.203 [2024-11-18 12:06:11.965252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.203 qpair failed and we were unable to recover it.
00:37:46.203 [2024-11-18 12:06:11.965410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.203 [2024-11-18 12:06:11.965446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.203 qpair failed and we were unable to recover it.
00:37:46.203 [2024-11-18 12:06:11.965615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.204 [2024-11-18 12:06:11.965664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.204 qpair failed and we were unable to recover it.
00:37:46.204 [2024-11-18 12:06:11.965794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.204 [2024-11-18 12:06:11.965842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.204 qpair failed and we were unable to recover it.
00:37:46.204 [2024-11-18 12:06:11.965958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.204 [2024-11-18 12:06:11.966014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.204 qpair failed and we were unable to recover it.
00:37:46.204 [2024-11-18 12:06:11.966159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.204 [2024-11-18 12:06:11.966198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.204 qpair failed and we were unable to recover it.
00:37:46.204 [2024-11-18 12:06:11.966369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.204 [2024-11-18 12:06:11.966407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.204 qpair failed and we were unable to recover it.
00:37:46.204 [2024-11-18 12:06:11.966591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.204 [2024-11-18 12:06:11.966626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.204 qpair failed and we were unable to recover it.
00:37:46.204 [2024-11-18 12:06:11.966760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.204 [2024-11-18 12:06:11.966825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.204 qpair failed and we were unable to recover it.
00:37:46.204 [2024-11-18 12:06:11.967002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.204 [2024-11-18 12:06:11.967040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.204 qpair failed and we were unable to recover it.
00:37:46.204 [2024-11-18 12:06:11.967290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.204 [2024-11-18 12:06:11.967369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.204 qpair failed and we were unable to recover it.
00:37:46.204 [2024-11-18 12:06:11.967533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.204 [2024-11-18 12:06:11.967568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.204 qpair failed and we were unable to recover it.
00:37:46.204 [2024-11-18 12:06:11.967706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.204 [2024-11-18 12:06:11.967741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.204 qpair failed and we were unable to recover it.
00:37:46.204 [2024-11-18 12:06:11.967947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.204 [2024-11-18 12:06:11.968004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.204 qpair failed and we were unable to recover it.
00:37:46.204 [2024-11-18 12:06:11.968201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.204 [2024-11-18 12:06:11.968257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.204 qpair failed and we were unable to recover it.
00:37:46.204 [2024-11-18 12:06:11.968361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.204 [2024-11-18 12:06:11.968395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.204 qpair failed and we were unable to recover it.
00:37:46.204 [2024-11-18 12:06:11.968503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.204 [2024-11-18 12:06:11.968538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.204 qpair failed and we were unable to recover it.
00:37:46.204 [2024-11-18 12:06:11.968671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.204 [2024-11-18 12:06:11.968722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.204 qpair failed and we were unable to recover it.
00:37:46.204 [2024-11-18 12:06:11.968857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.204 [2024-11-18 12:06:11.968892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.204 qpair failed and we were unable to recover it.
00:37:46.204 [2024-11-18 12:06:11.969003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.204 [2024-11-18 12:06:11.969038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.204 qpair failed and we were unable to recover it.
00:37:46.204 [2024-11-18 12:06:11.969142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.204 [2024-11-18 12:06:11.969178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.204 qpair failed and we were unable to recover it.
00:37:46.204 [2024-11-18 12:06:11.969315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.204 [2024-11-18 12:06:11.969350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.204 qpair failed and we were unable to recover it.
00:37:46.204 [2024-11-18 12:06:11.969488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.204 [2024-11-18 12:06:11.969534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.204 qpair failed and we were unable to recover it.
00:37:46.204 [2024-11-18 12:06:11.969659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.204 [2024-11-18 12:06:11.969692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.204 qpair failed and we were unable to recover it.
00:37:46.204 [2024-11-18 12:06:11.969831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.204 [2024-11-18 12:06:11.969865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.204 qpair failed and we were unable to recover it.
00:37:46.204 [2024-11-18 12:06:11.969973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.204 [2024-11-18 12:06:11.970007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.204 qpair failed and we were unable to recover it.
00:37:46.204 [2024-11-18 12:06:11.970193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.204 [2024-11-18 12:06:11.970246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.204 qpair failed and we were unable to recover it.
00:37:46.204 [2024-11-18 12:06:11.970407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.204 [2024-11-18 12:06:11.970441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.204 qpair failed and we were unable to recover it.
00:37:46.204 [2024-11-18 12:06:11.970581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.204 [2024-11-18 12:06:11.970636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.204 qpair failed and we were unable to recover it.
00:37:46.204 [2024-11-18 12:06:11.970746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.204 [2024-11-18 12:06:11.970780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.204 qpair failed and we were unable to recover it.
00:37:46.204 [2024-11-18 12:06:11.970958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.204 [2024-11-18 12:06:11.971011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.204 qpair failed and we were unable to recover it.
00:37:46.204 [2024-11-18 12:06:11.971211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.204 [2024-11-18 12:06:11.971265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.204 qpair failed and we were unable to recover it.
00:37:46.204 [2024-11-18 12:06:11.971370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.204 [2024-11-18 12:06:11.971405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.204 qpair failed and we were unable to recover it.
00:37:46.204 [2024-11-18 12:06:11.971544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.204 [2024-11-18 12:06:11.971579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.204 qpair failed and we were unable to recover it.
00:37:46.204 [2024-11-18 12:06:11.971737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.204 [2024-11-18 12:06:11.971789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.204 qpair failed and we were unable to recover it.
00:37:46.204 [2024-11-18 12:06:11.971958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.204 [2024-11-18 12:06:11.971994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.204 qpair failed and we were unable to recover it.
00:37:46.204 [2024-11-18 12:06:11.972196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.204 [2024-11-18 12:06:11.972259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.204 qpair failed and we were unable to recover it.
00:37:46.204 [2024-11-18 12:06:11.972426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.204 [2024-11-18 12:06:11.972460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.204 qpair failed and we were unable to recover it.
00:37:46.204 [2024-11-18 12:06:11.972705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.204 [2024-11-18 12:06:11.972771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.204 qpair failed and we were unable to recover it. 00:37:46.205 [2024-11-18 12:06:11.972931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.205 [2024-11-18 12:06:11.972985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.205 qpair failed and we were unable to recover it. 00:37:46.205 [2024-11-18 12:06:11.973190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.205 [2024-11-18 12:06:11.973245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.205 qpair failed and we were unable to recover it. 00:37:46.205 [2024-11-18 12:06:11.973405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.205 [2024-11-18 12:06:11.973439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.205 qpair failed and we were unable to recover it. 00:37:46.205 [2024-11-18 12:06:11.973584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.205 [2024-11-18 12:06:11.973623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.205 qpair failed and we were unable to recover it. 
00:37:46.205 [2024-11-18 12:06:11.973787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.205 [2024-11-18 12:06:11.973841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.205 qpair failed and we were unable to recover it. 00:37:46.205 [2024-11-18 12:06:11.974027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.205 [2024-11-18 12:06:11.974064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.205 qpair failed and we were unable to recover it. 00:37:46.205 [2024-11-18 12:06:11.974222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.205 [2024-11-18 12:06:11.974295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.205 qpair failed and we were unable to recover it. 00:37:46.205 [2024-11-18 12:06:11.974423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.205 [2024-11-18 12:06:11.974475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.205 qpair failed and we were unable to recover it. 00:37:46.205 [2024-11-18 12:06:11.974665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.205 [2024-11-18 12:06:11.974713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.205 qpair failed and we were unable to recover it. 
00:37:46.205 [2024-11-18 12:06:11.974894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.205 [2024-11-18 12:06:11.974947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.205 qpair failed and we were unable to recover it. 00:37:46.205 [2024-11-18 12:06:11.975165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.205 [2024-11-18 12:06:11.975221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.205 qpair failed and we were unable to recover it. 00:37:46.205 [2024-11-18 12:06:11.975343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.205 [2024-11-18 12:06:11.975399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.205 qpair failed and we were unable to recover it. 00:37:46.205 [2024-11-18 12:06:11.975536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.205 [2024-11-18 12:06:11.975571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.205 qpair failed and we were unable to recover it. 00:37:46.205 [2024-11-18 12:06:11.975705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.205 [2024-11-18 12:06:11.975738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.205 qpair failed and we were unable to recover it. 
00:37:46.205 [2024-11-18 12:06:11.975954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.205 [2024-11-18 12:06:11.976014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.205 qpair failed and we were unable to recover it. 00:37:46.205 [2024-11-18 12:06:11.976163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.205 [2024-11-18 12:06:11.976214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.205 qpair failed and we were unable to recover it. 00:37:46.205 [2024-11-18 12:06:11.976365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.205 [2024-11-18 12:06:11.976402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.205 qpair failed and we were unable to recover it. 00:37:46.205 [2024-11-18 12:06:11.976564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.205 [2024-11-18 12:06:11.976598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.205 qpair failed and we were unable to recover it. 00:37:46.205 [2024-11-18 12:06:11.976750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.205 [2024-11-18 12:06:11.976784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.205 qpair failed and we were unable to recover it. 
00:37:46.205 [2024-11-18 12:06:11.976905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.205 [2024-11-18 12:06:11.976942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.205 qpair failed and we were unable to recover it. 00:37:46.205 [2024-11-18 12:06:11.977098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.205 [2024-11-18 12:06:11.977137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.205 qpair failed and we were unable to recover it. 00:37:46.205 [2024-11-18 12:06:11.977258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.205 [2024-11-18 12:06:11.977295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.205 qpair failed and we were unable to recover it. 00:37:46.205 [2024-11-18 12:06:11.977485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.205 [2024-11-18 12:06:11.977565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.205 qpair failed and we were unable to recover it. 00:37:46.205 [2024-11-18 12:06:11.977718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.205 [2024-11-18 12:06:11.977767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.205 qpair failed and we were unable to recover it. 
00:37:46.205 [2024-11-18 12:06:11.977883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.205 [2024-11-18 12:06:11.977918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.205 qpair failed and we were unable to recover it. 00:37:46.205 [2024-11-18 12:06:11.978146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.205 [2024-11-18 12:06:11.978223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.205 qpair failed and we were unable to recover it. 00:37:46.205 [2024-11-18 12:06:11.978404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.205 [2024-11-18 12:06:11.978444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.205 qpair failed and we were unable to recover it. 00:37:46.205 [2024-11-18 12:06:11.978618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.205 [2024-11-18 12:06:11.978652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.205 qpair failed and we were unable to recover it. 00:37:46.205 [2024-11-18 12:06:11.978786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.205 [2024-11-18 12:06:11.978821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.205 qpair failed and we were unable to recover it. 
00:37:46.205 [2024-11-18 12:06:11.978937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.205 [2024-11-18 12:06:11.978986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.205 qpair failed and we were unable to recover it. 00:37:46.205 [2024-11-18 12:06:11.979151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.205 [2024-11-18 12:06:11.979204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.205 qpair failed and we were unable to recover it. 00:37:46.205 [2024-11-18 12:06:11.979360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.205 [2024-11-18 12:06:11.979412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.205 qpair failed and we were unable to recover it. 00:37:46.205 [2024-11-18 12:06:11.979550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.205 [2024-11-18 12:06:11.979585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.205 qpair failed and we were unable to recover it. 00:37:46.205 [2024-11-18 12:06:11.979745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.205 [2024-11-18 12:06:11.979799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.205 qpair failed and we were unable to recover it. 
00:37:46.205 [2024-11-18 12:06:11.979910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.205 [2024-11-18 12:06:11.979944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.205 qpair failed and we were unable to recover it. 00:37:46.205 [2024-11-18 12:06:11.980078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.205 [2024-11-18 12:06:11.980131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.205 qpair failed and we were unable to recover it. 00:37:46.205 [2024-11-18 12:06:11.980284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.205 [2024-11-18 12:06:11.980332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.205 qpair failed and we were unable to recover it. 00:37:46.205 [2024-11-18 12:06:11.980478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.205 [2024-11-18 12:06:11.980523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.206 qpair failed and we were unable to recover it. 00:37:46.206 [2024-11-18 12:06:11.980664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.206 [2024-11-18 12:06:11.980718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.206 qpair failed and we were unable to recover it. 
00:37:46.206 [2024-11-18 12:06:11.980903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.206 [2024-11-18 12:06:11.980942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.206 qpair failed and we were unable to recover it. 00:37:46.206 [2024-11-18 12:06:11.981160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.206 [2024-11-18 12:06:11.981225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.206 qpair failed and we were unable to recover it. 00:37:46.206 [2024-11-18 12:06:11.981383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.206 [2024-11-18 12:06:11.981417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.206 qpair failed and we were unable to recover it. 00:37:46.206 [2024-11-18 12:06:11.981549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.206 [2024-11-18 12:06:11.981584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.206 qpair failed and we were unable to recover it. 00:37:46.206 [2024-11-18 12:06:11.981696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.206 [2024-11-18 12:06:11.981730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.206 qpair failed and we were unable to recover it. 
00:37:46.206 [2024-11-18 12:06:11.981857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.206 [2024-11-18 12:06:11.981894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.206 qpair failed and we were unable to recover it. 00:37:46.206 [2024-11-18 12:06:11.982036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.206 [2024-11-18 12:06:11.982073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.206 qpair failed and we were unable to recover it. 00:37:46.206 [2024-11-18 12:06:11.982222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.206 [2024-11-18 12:06:11.982260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.206 qpair failed and we were unable to recover it. 00:37:46.206 [2024-11-18 12:06:11.982411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.206 [2024-11-18 12:06:11.982448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.206 qpair failed and we were unable to recover it. 00:37:46.206 [2024-11-18 12:06:11.982587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.206 [2024-11-18 12:06:11.982621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.206 qpair failed and we were unable to recover it. 
00:37:46.206 [2024-11-18 12:06:11.982751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.206 [2024-11-18 12:06:11.982785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.206 qpair failed and we were unable to recover it. 00:37:46.206 [2024-11-18 12:06:11.982912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.206 [2024-11-18 12:06:11.982949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.206 qpair failed and we were unable to recover it. 00:37:46.206 [2024-11-18 12:06:11.983089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.206 [2024-11-18 12:06:11.983150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.206 qpair failed and we were unable to recover it. 00:37:46.206 [2024-11-18 12:06:11.983305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.206 [2024-11-18 12:06:11.983342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.206 qpair failed and we were unable to recover it. 00:37:46.206 [2024-11-18 12:06:11.983478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.206 [2024-11-18 12:06:11.983541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.206 qpair failed and we were unable to recover it. 
00:37:46.206 [2024-11-18 12:06:11.983648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.206 [2024-11-18 12:06:11.983682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.206 qpair failed and we were unable to recover it. 00:37:46.206 [2024-11-18 12:06:11.983832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.206 [2024-11-18 12:06:11.983881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.206 qpair failed and we were unable to recover it. 00:37:46.206 [2024-11-18 12:06:11.984081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.206 [2024-11-18 12:06:11.984121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.206 qpair failed and we were unable to recover it. 00:37:46.206 [2024-11-18 12:06:11.984303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.206 [2024-11-18 12:06:11.984342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.206 qpair failed and we were unable to recover it. 00:37:46.206 [2024-11-18 12:06:11.984513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.206 [2024-11-18 12:06:11.984547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.206 qpair failed and we were unable to recover it. 
00:37:46.206 [2024-11-18 12:06:11.984669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.206 [2024-11-18 12:06:11.984703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.206 qpair failed and we were unable to recover it. 00:37:46.206 [2024-11-18 12:06:11.984839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.206 [2024-11-18 12:06:11.984891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.206 qpair failed and we were unable to recover it. 00:37:46.206 [2024-11-18 12:06:11.985066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.206 [2024-11-18 12:06:11.985102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.206 qpair failed and we were unable to recover it. 00:37:46.206 [2024-11-18 12:06:11.985251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.206 [2024-11-18 12:06:11.985288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.206 qpair failed and we were unable to recover it. 00:37:46.206 [2024-11-18 12:06:11.985440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.206 [2024-11-18 12:06:11.985474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.206 qpair failed and we were unable to recover it. 
00:37:46.206 [2024-11-18 12:06:11.985635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.206 [2024-11-18 12:06:11.985683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.206 qpair failed and we were unable to recover it. 00:37:46.206 [2024-11-18 12:06:11.985804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.206 [2024-11-18 12:06:11.985840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.206 qpair failed and we were unable to recover it. 00:37:46.206 [2024-11-18 12:06:11.986076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.206 [2024-11-18 12:06:11.986137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.206 qpair failed and we were unable to recover it. 00:37:46.206 [2024-11-18 12:06:11.986272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.206 [2024-11-18 12:06:11.986305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.206 qpair failed and we were unable to recover it. 00:37:46.206 [2024-11-18 12:06:11.986462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.206 [2024-11-18 12:06:11.986511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.206 qpair failed and we were unable to recover it. 
00:37:46.206 [2024-11-18 12:06:11.986659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.206 [2024-11-18 12:06:11.986692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.206 qpair failed and we were unable to recover it. 00:37:46.206 [2024-11-18 12:06:11.986800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.206 [2024-11-18 12:06:11.986853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.206 qpair failed and we were unable to recover it. 00:37:46.206 [2024-11-18 12:06:11.987003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.206 [2024-11-18 12:06:11.987041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.206 qpair failed and we were unable to recover it. 00:37:46.206 [2024-11-18 12:06:11.987150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.206 [2024-11-18 12:06:11.987199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.206 qpair failed and we were unable to recover it. 00:37:46.206 [2024-11-18 12:06:11.987325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.206 [2024-11-18 12:06:11.987362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.206 qpair failed and we were unable to recover it. 
00:37:46.206 [2024-11-18 12:06:11.987467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.206 [2024-11-18 12:06:11.987511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.206 qpair failed and we were unable to recover it.
00:37:46.206 [2024-11-18 12:06:11.987632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.207 [2024-11-18 12:06:11.987666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.207 qpair failed and we were unable to recover it.
00:37:46.207 [2024-11-18 12:06:11.987796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.207 [2024-11-18 12:06:11.987847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.207 qpair failed and we were unable to recover it.
00:37:46.207 [2024-11-18 12:06:11.988000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.207 [2024-11-18 12:06:11.988036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.207 qpair failed and we were unable to recover it.
00:37:46.207 [2024-11-18 12:06:11.988232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.207 [2024-11-18 12:06:11.988298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.207 qpair failed and we were unable to recover it.
00:37:46.207 [2024-11-18 12:06:11.988420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.207 [2024-11-18 12:06:11.988456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.207 qpair failed and we were unable to recover it.
00:37:46.207 [2024-11-18 12:06:11.988589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.207 [2024-11-18 12:06:11.988637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.207 qpair failed and we were unable to recover it.
00:37:46.207 [2024-11-18 12:06:11.988774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.207 [2024-11-18 12:06:11.988809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.207 qpair failed and we were unable to recover it.
00:37:46.207 [2024-11-18 12:06:11.988995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.207 [2024-11-18 12:06:11.989033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.207 qpair failed and we were unable to recover it.
00:37:46.207 [2024-11-18 12:06:11.989239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.207 [2024-11-18 12:06:11.989294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.207 qpair failed and we were unable to recover it.
00:37:46.207 [2024-11-18 12:06:11.989436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.207 [2024-11-18 12:06:11.989473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.207 qpair failed and we were unable to recover it.
00:37:46.207 [2024-11-18 12:06:11.989644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.207 [2024-11-18 12:06:11.989682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.207 qpair failed and we were unable to recover it.
00:37:46.207 [2024-11-18 12:06:11.989878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.207 [2024-11-18 12:06:11.989951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.207 qpair failed and we were unable to recover it.
00:37:46.207 [2024-11-18 12:06:11.990230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.207 [2024-11-18 12:06:11.990297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.207 qpair failed and we were unable to recover it.
00:37:46.207 [2024-11-18 12:06:11.990465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.207 [2024-11-18 12:06:11.990518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.207 qpair failed and we were unable to recover it.
00:37:46.207 [2024-11-18 12:06:11.990701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.207 [2024-11-18 12:06:11.990746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.207 qpair failed and we were unable to recover it.
00:37:46.207 [2024-11-18 12:06:11.990885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.207 [2024-11-18 12:06:11.990936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.207 qpair failed and we were unable to recover it.
00:37:46.207 [2024-11-18 12:06:11.991078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.207 [2024-11-18 12:06:11.991133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.207 qpair failed and we were unable to recover it.
00:37:46.207 [2024-11-18 12:06:11.991342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.207 [2024-11-18 12:06:11.991379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.207 qpair failed and we were unable to recover it.
00:37:46.207 [2024-11-18 12:06:11.991566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.207 [2024-11-18 12:06:11.991600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.207 qpair failed and we were unable to recover it.
00:37:46.207 [2024-11-18 12:06:11.991768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.207 [2024-11-18 12:06:11.991803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.207 qpair failed and we were unable to recover it.
00:37:46.207 [2024-11-18 12:06:11.991956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.207 [2024-11-18 12:06:11.991994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.207 qpair failed and we were unable to recover it.
00:37:46.207 [2024-11-18 12:06:11.992136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.207 [2024-11-18 12:06:11.992173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.207 qpair failed and we were unable to recover it.
00:37:46.207 [2024-11-18 12:06:11.992341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.207 [2024-11-18 12:06:11.992377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.207 qpair failed and we were unable to recover it.
00:37:46.207 [2024-11-18 12:06:11.992534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.207 [2024-11-18 12:06:11.992569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.207 qpair failed and we were unable to recover it.
00:37:46.207 [2024-11-18 12:06:11.992741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.207 [2024-11-18 12:06:11.992789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.207 qpair failed and we were unable to recover it.
00:37:46.207 [2024-11-18 12:06:11.992951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.207 [2024-11-18 12:06:11.992991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.207 qpair failed and we were unable to recover it.
00:37:46.207 [2024-11-18 12:06:11.993125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.207 [2024-11-18 12:06:11.993179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.207 qpair failed and we were unable to recover it.
00:37:46.207 [2024-11-18 12:06:11.993321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.207 [2024-11-18 12:06:11.993358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.207 qpair failed and we were unable to recover it.
00:37:46.207 [2024-11-18 12:06:11.993528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.207 [2024-11-18 12:06:11.993563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.207 qpair failed and we were unable to recover it.
00:37:46.207 [2024-11-18 12:06:11.993724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.207 [2024-11-18 12:06:11.993758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.207 qpair failed and we were unable to recover it.
00:37:46.207 [2024-11-18 12:06:11.993943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.207 [2024-11-18 12:06:11.993980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.207 qpair failed and we were unable to recover it.
00:37:46.207 [2024-11-18 12:06:11.994139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.207 [2024-11-18 12:06:11.994191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.207 qpair failed and we were unable to recover it.
00:37:46.207 [2024-11-18 12:06:11.994395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.207 [2024-11-18 12:06:11.994435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.207 qpair failed and we were unable to recover it.
00:37:46.207 [2024-11-18 12:06:11.994614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.207 [2024-11-18 12:06:11.994650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.207 qpair failed and we were unable to recover it.
00:37:46.207 [2024-11-18 12:06:11.994803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.207 [2024-11-18 12:06:11.994840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.207 qpair failed and we were unable to recover it.
00:37:46.207 [2024-11-18 12:06:11.995047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.207 [2024-11-18 12:06:11.995084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.207 qpair failed and we were unable to recover it.
00:37:46.207 [2024-11-18 12:06:11.995267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.207 [2024-11-18 12:06:11.995325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.207 qpair failed and we were unable to recover it.
00:37:46.207 [2024-11-18 12:06:11.995451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.207 [2024-11-18 12:06:11.995496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.207 qpair failed and we were unable to recover it.
00:37:46.208 [2024-11-18 12:06:11.995671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.208 [2024-11-18 12:06:11.995719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.208 qpair failed and we were unable to recover it.
00:37:46.208 [2024-11-18 12:06:11.995881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.208 [2024-11-18 12:06:11.995934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.208 qpair failed and we were unable to recover it.
00:37:46.208 [2024-11-18 12:06:11.996075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.208 [2024-11-18 12:06:11.996146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.208 qpair failed and we were unable to recover it.
00:37:46.208 [2024-11-18 12:06:11.996353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.208 [2024-11-18 12:06:11.996388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.208 qpair failed and we were unable to recover it.
00:37:46.208 [2024-11-18 12:06:11.996525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.208 [2024-11-18 12:06:11.996560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.208 qpair failed and we were unable to recover it.
00:37:46.208 [2024-11-18 12:06:11.996698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.208 [2024-11-18 12:06:11.996738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.208 qpair failed and we were unable to recover it.
00:37:46.208 [2024-11-18 12:06:11.996842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.208 [2024-11-18 12:06:11.996875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.208 qpair failed and we were unable to recover it.
00:37:46.208 [2024-11-18 12:06:11.997025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.208 [2024-11-18 12:06:11.997074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.208 qpair failed and we were unable to recover it.
00:37:46.208 [2024-11-18 12:06:11.997294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.208 [2024-11-18 12:06:11.997353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.208 qpair failed and we were unable to recover it.
00:37:46.208 [2024-11-18 12:06:11.997514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.208 [2024-11-18 12:06:11.997579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.208 qpair failed and we were unable to recover it.
00:37:46.208 [2024-11-18 12:06:11.997724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.208 [2024-11-18 12:06:11.997775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.208 qpair failed and we were unable to recover it.
00:37:46.208 [2024-11-18 12:06:11.997951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.208 [2024-11-18 12:06:11.997988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.208 qpair failed and we were unable to recover it.
00:37:46.208 [2024-11-18 12:06:11.998200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.208 [2024-11-18 12:06:11.998258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.208 qpair failed and we were unable to recover it.
00:37:46.208 [2024-11-18 12:06:11.998401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.208 [2024-11-18 12:06:11.998439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.208 qpair failed and we were unable to recover it.
00:37:46.208 [2024-11-18 12:06:11.998627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.208 [2024-11-18 12:06:11.998676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.208 qpair failed and we were unable to recover it.
00:37:46.208 [2024-11-18 12:06:11.998867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.208 [2024-11-18 12:06:11.998920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.208 qpair failed and we were unable to recover it.
00:37:46.208 [2024-11-18 12:06:11.999094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.208 [2024-11-18 12:06:11.999159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.208 qpair failed and we were unable to recover it.
00:37:46.208 [2024-11-18 12:06:11.999375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.208 [2024-11-18 12:06:11.999413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.208 qpair failed and we were unable to recover it.
00:37:46.208 [2024-11-18 12:06:11.999590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.208 [2024-11-18 12:06:11.999638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.208 qpair failed and we were unable to recover it.
00:37:46.208 [2024-11-18 12:06:11.999807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.208 [2024-11-18 12:06:11.999847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.208 qpair failed and we were unable to recover it.
00:37:46.208 [2024-11-18 12:06:12.000068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.208 [2024-11-18 12:06:12.000108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.208 qpair failed and we were unable to recover it.
00:37:46.208 [2024-11-18 12:06:12.000361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.208 [2024-11-18 12:06:12.000421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.208 qpair failed and we were unable to recover it.
00:37:46.208 [2024-11-18 12:06:12.000589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.208 [2024-11-18 12:06:12.000624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.208 qpair failed and we were unable to recover it.
00:37:46.208 [2024-11-18 12:06:12.000776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.208 [2024-11-18 12:06:12.000824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.208 qpair failed and we were unable to recover it.
00:37:46.208 [2024-11-18 12:06:12.000981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.208 [2024-11-18 12:06:12.001062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.208 qpair failed and we were unable to recover it.
00:37:46.208 [2024-11-18 12:06:12.001270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.208 [2024-11-18 12:06:12.001324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.208 qpair failed and we were unable to recover it.
00:37:46.208 [2024-11-18 12:06:12.001465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.208 [2024-11-18 12:06:12.001532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.208 qpair failed and we were unable to recover it.
00:37:46.208 [2024-11-18 12:06:12.001671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.208 [2024-11-18 12:06:12.001711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.208 qpair failed and we were unable to recover it.
00:37:46.208 [2024-11-18 12:06:12.001881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.208 [2024-11-18 12:06:12.001919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.208 qpair failed and we were unable to recover it.
00:37:46.208 [2024-11-18 12:06:12.002063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.208 [2024-11-18 12:06:12.002137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.208 qpair failed and we were unable to recover it.
00:37:46.208 [2024-11-18 12:06:12.002290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.208 [2024-11-18 12:06:12.002341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.208 qpair failed and we were unable to recover it.
00:37:46.208 [2024-11-18 12:06:12.002458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.208 [2024-11-18 12:06:12.002505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.208 qpair failed and we were unable to recover it.
00:37:46.208 [2024-11-18 12:06:12.002630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.208 [2024-11-18 12:06:12.002664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.209 qpair failed and we were unable to recover it.
00:37:46.209 [2024-11-18 12:06:12.002796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.209 [2024-11-18 12:06:12.002833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.209 qpair failed and we were unable to recover it.
00:37:46.209 [2024-11-18 12:06:12.002954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.209 [2024-11-18 12:06:12.002991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.209 qpair failed and we were unable to recover it.
00:37:46.209 [2024-11-18 12:06:12.003118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.209 [2024-11-18 12:06:12.003158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.209 qpair failed and we were unable to recover it.
00:37:46.209 [2024-11-18 12:06:12.003286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.209 [2024-11-18 12:06:12.003323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.209 qpair failed and we were unable to recover it.
00:37:46.209 [2024-11-18 12:06:12.003476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.209 [2024-11-18 12:06:12.003535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.209 qpair failed and we were unable to recover it.
00:37:46.209 [2024-11-18 12:06:12.003690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.209 [2024-11-18 12:06:12.003725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.209 qpair failed and we were unable to recover it.
00:37:46.209 [2024-11-18 12:06:12.003904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.209 [2024-11-18 12:06:12.003967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.209 qpair failed and we were unable to recover it.
00:37:46.209 [2024-11-18 12:06:12.004149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.209 [2024-11-18 12:06:12.004201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.209 qpair failed and we were unable to recover it.
00:37:46.209 [2024-11-18 12:06:12.004334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.209 [2024-11-18 12:06:12.004368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.209 qpair failed and we were unable to recover it.
00:37:46.209 [2024-11-18 12:06:12.004477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.209 [2024-11-18 12:06:12.004538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.209 qpair failed and we were unable to recover it.
00:37:46.209 [2024-11-18 12:06:12.004661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.209 [2024-11-18 12:06:12.004699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.209 qpair failed and we were unable to recover it.
00:37:46.209 [2024-11-18 12:06:12.004888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.209 [2024-11-18 12:06:12.004944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.209 qpair failed and we were unable to recover it.
00:37:46.209 [2024-11-18 12:06:12.005085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.209 [2024-11-18 12:06:12.005147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.209 qpair failed and we were unable to recover it.
00:37:46.209 [2024-11-18 12:06:12.005315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.209 [2024-11-18 12:06:12.005349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.209 qpair failed and we were unable to recover it.
00:37:46.209 [2024-11-18 12:06:12.005474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.209 [2024-11-18 12:06:12.005535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.209 qpair failed and we were unable to recover it.
00:37:46.209 [2024-11-18 12:06:12.005708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.209 [2024-11-18 12:06:12.005745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.209 qpair failed and we were unable to recover it.
00:37:46.209 [2024-11-18 12:06:12.005855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.209 [2024-11-18 12:06:12.005890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.209 qpair failed and we were unable to recover it.
00:37:46.209 [2024-11-18 12:06:12.006039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.209 [2024-11-18 12:06:12.006074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.209 qpair failed and we were unable to recover it.
00:37:46.209 [2024-11-18 12:06:12.006204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.209 [2024-11-18 12:06:12.006244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.209 qpair failed and we were unable to recover it.
00:37:46.209 [2024-11-18 12:06:12.006362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.209 [2024-11-18 12:06:12.006397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.209 qpair failed and we were unable to recover it.
00:37:46.209 [2024-11-18 12:06:12.006521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.209 [2024-11-18 12:06:12.006569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.209 qpair failed and we were unable to recover it.
00:37:46.209 [2024-11-18 12:06:12.006682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.209 [2024-11-18 12:06:12.006717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.209 qpair failed and we were unable to recover it.
00:37:46.209 [2024-11-18 12:06:12.006880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.209 [2024-11-18 12:06:12.006914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.209 qpair failed and we were unable to recover it.
00:37:46.209 [2024-11-18 12:06:12.007048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.209 [2024-11-18 12:06:12.007087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.209 qpair failed and we were unable to recover it.
00:37:46.209 [2024-11-18 12:06:12.007201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.209 [2024-11-18 12:06:12.007239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.209 qpair failed and we were unable to recover it.
00:37:46.209 [2024-11-18 12:06:12.007411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.209 [2024-11-18 12:06:12.007449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.209 qpair failed and we were unable to recover it.
00:37:46.209 [2024-11-18 12:06:12.007620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.209 [2024-11-18 12:06:12.007655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.209 qpair failed and we were unable to recover it.
00:37:46.209 [2024-11-18 12:06:12.007799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.209 [2024-11-18 12:06:12.007852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.209 qpair failed and we were unable to recover it.
00:37:46.209 [2024-11-18 12:06:12.008069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.209 [2024-11-18 12:06:12.008127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.209 qpair failed and we were unable to recover it.
00:37:46.209 [2024-11-18 12:06:12.008391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.209 [2024-11-18 12:06:12.008449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.209 qpair failed and we were unable to recover it.
00:37:46.209 [2024-11-18 12:06:12.008620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.209 [2024-11-18 12:06:12.008655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.209 qpair failed and we were unable to recover it.
00:37:46.209 [2024-11-18 12:06:12.008836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.209 [2024-11-18 12:06:12.008873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.209 qpair failed and we were unable to recover it.
00:37:46.209 [2024-11-18 12:06:12.008984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.209 [2024-11-18 12:06:12.009022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.209 qpair failed and we were unable to recover it.
00:37:46.209 [2024-11-18 12:06:12.009215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.209 [2024-11-18 12:06:12.009273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.209 qpair failed and we were unable to recover it.
00:37:46.209 [2024-11-18 12:06:12.009445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.209 [2024-11-18 12:06:12.009512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.209 qpair failed and we were unable to recover it.
00:37:46.209 [2024-11-18 12:06:12.009647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.209 [2024-11-18 12:06:12.009680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.209 qpair failed and we were unable to recover it.
00:37:46.209 [2024-11-18 12:06:12.009833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.209 [2024-11-18 12:06:12.009866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.209 qpair failed and we were unable to recover it.
00:37:46.209 [2024-11-18 12:06:12.009978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.209 [2024-11-18 12:06:12.010030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.209 qpair failed and we were unable to recover it.
00:37:46.209 [2024-11-18 12:06:12.010212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.209 [2024-11-18 12:06:12.010249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.209 qpair failed and we were unable to recover it.
00:37:46.209 [2024-11-18 12:06:12.010413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.210 [2024-11-18 12:06:12.010446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.210 qpair failed and we were unable to recover it.
00:37:46.210 [2024-11-18 12:06:12.010553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.210 [2024-11-18 12:06:12.010587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.210 qpair failed and we were unable to recover it.
00:37:46.210 [2024-11-18 12:06:12.010697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.210 [2024-11-18 12:06:12.010730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.210 qpair failed and we were unable to recover it.
00:37:46.210 [2024-11-18 12:06:12.010880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.210 [2024-11-18 12:06:12.010918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.210 qpair failed and we were unable to recover it.
00:37:46.210 [2024-11-18 12:06:12.011069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.210 [2024-11-18 12:06:12.011106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.210 qpair failed and we were unable to recover it. 00:37:46.210 [2024-11-18 12:06:12.011217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.210 [2024-11-18 12:06:12.011254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.210 qpair failed and we were unable to recover it. 00:37:46.210 [2024-11-18 12:06:12.011433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.210 [2024-11-18 12:06:12.011481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.210 qpair failed and we were unable to recover it. 00:37:46.210 [2024-11-18 12:06:12.011662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.210 [2024-11-18 12:06:12.011699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.210 qpair failed and we were unable to recover it. 00:37:46.210 [2024-11-18 12:06:12.011850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.210 [2024-11-18 12:06:12.011899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.210 qpair failed and we were unable to recover it. 
00:37:46.210 [2024-11-18 12:06:12.012087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.210 [2024-11-18 12:06:12.012125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.210 qpair failed and we were unable to recover it. 00:37:46.210 [2024-11-18 12:06:12.012274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.210 [2024-11-18 12:06:12.012312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.210 qpair failed and we were unable to recover it. 00:37:46.210 [2024-11-18 12:06:12.012436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.210 [2024-11-18 12:06:12.012469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.210 qpair failed and we were unable to recover it. 00:37:46.210 [2024-11-18 12:06:12.012616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.210 [2024-11-18 12:06:12.012650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.210 qpair failed and we were unable to recover it. 00:37:46.210 [2024-11-18 12:06:12.012835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.210 [2024-11-18 12:06:12.012878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.210 qpair failed and we were unable to recover it. 
00:37:46.210 [2024-11-18 12:06:12.013114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.210 [2024-11-18 12:06:12.013170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.210 qpair failed and we were unable to recover it. 00:37:46.210 [2024-11-18 12:06:12.013290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.210 [2024-11-18 12:06:12.013327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.210 qpair failed and we were unable to recover it. 00:37:46.210 [2024-11-18 12:06:12.013452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.210 [2024-11-18 12:06:12.013501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.210 qpair failed and we were unable to recover it. 00:37:46.210 [2024-11-18 12:06:12.013654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.210 [2024-11-18 12:06:12.013688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.210 qpair failed and we were unable to recover it. 00:37:46.210 [2024-11-18 12:06:12.013789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.210 [2024-11-18 12:06:12.013839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.210 qpair failed and we were unable to recover it. 
00:37:46.210 [2024-11-18 12:06:12.013996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.210 [2024-11-18 12:06:12.014047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.210 qpair failed and we were unable to recover it. 00:37:46.210 [2024-11-18 12:06:12.014227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.210 [2024-11-18 12:06:12.014264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.210 qpair failed and we were unable to recover it. 00:37:46.210 [2024-11-18 12:06:12.014435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.210 [2024-11-18 12:06:12.014488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.210 qpair failed and we were unable to recover it. 00:37:46.210 [2024-11-18 12:06:12.014650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.210 [2024-11-18 12:06:12.014686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.210 qpair failed and we were unable to recover it. 00:37:46.210 [2024-11-18 12:06:12.014845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.210 [2024-11-18 12:06:12.014883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.210 qpair failed and we were unable to recover it. 
00:37:46.210 [2024-11-18 12:06:12.015003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.210 [2024-11-18 12:06:12.015042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.210 qpair failed and we were unable to recover it. 00:37:46.210 [2024-11-18 12:06:12.015217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.210 [2024-11-18 12:06:12.015255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.210 qpair failed and we were unable to recover it. 00:37:46.210 [2024-11-18 12:06:12.015402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.210 [2024-11-18 12:06:12.015450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.210 qpair failed and we were unable to recover it. 00:37:46.210 [2024-11-18 12:06:12.015579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.210 [2024-11-18 12:06:12.015616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.210 qpair failed and we were unable to recover it. 00:37:46.210 [2024-11-18 12:06:12.015728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.210 [2024-11-18 12:06:12.015782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.210 qpair failed and we were unable to recover it. 
00:37:46.210 [2024-11-18 12:06:12.015965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.210 [2024-11-18 12:06:12.016003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.210 qpair failed and we were unable to recover it. 00:37:46.210 [2024-11-18 12:06:12.016252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.210 [2024-11-18 12:06:12.016313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.210 qpair failed and we were unable to recover it. 00:37:46.210 [2024-11-18 12:06:12.016509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.210 [2024-11-18 12:06:12.016544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.210 qpair failed and we were unable to recover it. 00:37:46.210 [2024-11-18 12:06:12.016665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.210 [2024-11-18 12:06:12.016712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.210 qpair failed and we were unable to recover it. 00:37:46.210 [2024-11-18 12:06:12.016925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.210 [2024-11-18 12:06:12.016985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.210 qpair failed and we were unable to recover it. 
00:37:46.210 [2024-11-18 12:06:12.017140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.210 [2024-11-18 12:06:12.017179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.210 qpair failed and we were unable to recover it. 00:37:46.210 [2024-11-18 12:06:12.017305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.210 [2024-11-18 12:06:12.017342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.210 qpair failed and we were unable to recover it. 00:37:46.210 [2024-11-18 12:06:12.017475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.210 [2024-11-18 12:06:12.017523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.210 qpair failed and we were unable to recover it. 00:37:46.210 [2024-11-18 12:06:12.017662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.210 [2024-11-18 12:06:12.017697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.210 qpair failed and we were unable to recover it. 00:37:46.210 [2024-11-18 12:06:12.017882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.210 [2024-11-18 12:06:12.017937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.210 qpair failed and we were unable to recover it. 
00:37:46.210 [2024-11-18 12:06:12.018089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.210 [2024-11-18 12:06:12.018143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.210 qpair failed and we were unable to recover it. 00:37:46.210 [2024-11-18 12:06:12.018399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.210 [2024-11-18 12:06:12.018452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.210 qpair failed and we were unable to recover it. 00:37:46.210 [2024-11-18 12:06:12.018673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.211 [2024-11-18 12:06:12.018721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.211 qpair failed and we were unable to recover it. 00:37:46.211 [2024-11-18 12:06:12.018861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.211 [2024-11-18 12:06:12.018898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.211 qpair failed and we were unable to recover it. 00:37:46.211 [2024-11-18 12:06:12.019057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.211 [2024-11-18 12:06:12.019096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.211 qpair failed and we were unable to recover it. 
00:37:46.211 [2024-11-18 12:06:12.019328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.211 [2024-11-18 12:06:12.019367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.211 qpair failed and we were unable to recover it. 00:37:46.211 [2024-11-18 12:06:12.019568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.211 [2024-11-18 12:06:12.019616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.211 qpair failed and we were unable to recover it. 00:37:46.211 [2024-11-18 12:06:12.019759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.211 [2024-11-18 12:06:12.019795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.211 qpair failed and we were unable to recover it. 00:37:46.211 [2024-11-18 12:06:12.019942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.211 [2024-11-18 12:06:12.019980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.211 qpair failed and we were unable to recover it. 00:37:46.211 [2024-11-18 12:06:12.020181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.211 [2024-11-18 12:06:12.020234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.211 qpair failed and we were unable to recover it. 
00:37:46.211 [2024-11-18 12:06:12.020397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.211 [2024-11-18 12:06:12.020431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.211 qpair failed and we were unable to recover it. 00:37:46.211 [2024-11-18 12:06:12.020559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.211 [2024-11-18 12:06:12.020607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.211 qpair failed and we were unable to recover it. 00:37:46.211 [2024-11-18 12:06:12.020726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.211 [2024-11-18 12:06:12.020762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.211 qpair failed and we were unable to recover it. 00:37:46.211 [2024-11-18 12:06:12.020900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.211 [2024-11-18 12:06:12.020936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.211 qpair failed and we were unable to recover it. 00:37:46.211 [2024-11-18 12:06:12.021064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.211 [2024-11-18 12:06:12.021109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.211 qpair failed and we were unable to recover it. 
00:37:46.211 [2024-11-18 12:06:12.021247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.211 [2024-11-18 12:06:12.021300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.211 qpair failed and we were unable to recover it. 00:37:46.211 [2024-11-18 12:06:12.021439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.211 [2024-11-18 12:06:12.021477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.211 qpair failed and we were unable to recover it. 00:37:46.211 [2024-11-18 12:06:12.021625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.211 [2024-11-18 12:06:12.021659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.211 qpair failed and we were unable to recover it. 00:37:46.211 [2024-11-18 12:06:12.021771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.211 [2024-11-18 12:06:12.021805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.211 qpair failed and we were unable to recover it. 00:37:46.211 [2024-11-18 12:06:12.021911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.211 [2024-11-18 12:06:12.021946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.211 qpair failed and we were unable to recover it. 
00:37:46.211 [2024-11-18 12:06:12.022103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.211 [2024-11-18 12:06:12.022142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.211 qpair failed and we were unable to recover it. 00:37:46.211 [2024-11-18 12:06:12.022313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.211 [2024-11-18 12:06:12.022352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.211 qpair failed and we were unable to recover it. 00:37:46.211 [2024-11-18 12:06:12.022499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.211 [2024-11-18 12:06:12.022539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.211 qpair failed and we were unable to recover it. 00:37:46.211 [2024-11-18 12:06:12.022697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.211 [2024-11-18 12:06:12.022731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.211 qpair failed and we were unable to recover it. 00:37:46.211 [2024-11-18 12:06:12.022863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.211 [2024-11-18 12:06:12.022917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.211 qpair failed and we were unable to recover it. 
00:37:46.211 [2024-11-18 12:06:12.023034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.211 [2024-11-18 12:06:12.023073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.211 qpair failed and we were unable to recover it. 00:37:46.211 [2024-11-18 12:06:12.023205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.211 [2024-11-18 12:06:12.023243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.211 qpair failed and we were unable to recover it. 00:37:46.211 [2024-11-18 12:06:12.023388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.211 [2024-11-18 12:06:12.023426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.211 qpair failed and we were unable to recover it. 00:37:46.211 [2024-11-18 12:06:12.023622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.211 [2024-11-18 12:06:12.023670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.211 qpair failed and we were unable to recover it. 00:37:46.211 [2024-11-18 12:06:12.023786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.211 [2024-11-18 12:06:12.023823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.211 qpair failed and we were unable to recover it. 
00:37:46.211 [2024-11-18 12:06:12.023930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.211 [2024-11-18 12:06:12.023964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.211 qpair failed and we were unable to recover it. 00:37:46.211 [2024-11-18 12:06:12.024111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.211 [2024-11-18 12:06:12.024163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.211 qpair failed and we were unable to recover it. 00:37:46.211 [2024-11-18 12:06:12.024284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.211 [2024-11-18 12:06:12.024318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.211 qpair failed and we were unable to recover it. 00:37:46.211 [2024-11-18 12:06:12.024453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.211 [2024-11-18 12:06:12.024488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.211 qpair failed and we were unable to recover it. 00:37:46.211 [2024-11-18 12:06:12.024611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.211 [2024-11-18 12:06:12.024646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.211 qpair failed and we were unable to recover it. 
00:37:46.211 [2024-11-18 12:06:12.024808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.211 [2024-11-18 12:06:12.024842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.211 qpair failed and we were unable to recover it. 00:37:46.211 [2024-11-18 12:06:12.024959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.211 [2024-11-18 12:06:12.024993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.211 qpair failed and we were unable to recover it. 00:37:46.211 [2024-11-18 12:06:12.025107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.211 [2024-11-18 12:06:12.025141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.211 qpair failed and we were unable to recover it. 00:37:46.211 [2024-11-18 12:06:12.025308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.211 [2024-11-18 12:06:12.025342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.211 qpair failed and we were unable to recover it. 00:37:46.211 [2024-11-18 12:06:12.025464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.211 [2024-11-18 12:06:12.025518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.211 qpair failed and we were unable to recover it. 
00:37:46.211 [2024-11-18 12:06:12.025664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.211 [2024-11-18 12:06:12.025700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.211 qpair failed and we were unable to recover it.
00:37:46.211 [... the three-line message group above (connect() failed with errno = 111, i.e. ECONNREFUSED; sock connection error; qpair failed and unrecoverable) repeats continuously from 12:06:12.025899 through 12:06:12.048753, cycling over tqpair handles 0x615000210000, 0x6150001f2f00, 0x61500021ff00, and 0x6150001ffe80, all targeting addr=10.0.0.2, port=4420 ...]
00:37:46.496 [2024-11-18 12:06:12.048906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.496 [2024-11-18 12:06:12.048958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.496 qpair failed and we were unable to recover it. 00:37:46.496 [2024-11-18 12:06:12.049093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.496 [2024-11-18 12:06:12.049146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.496 qpair failed and we were unable to recover it. 00:37:46.496 [2024-11-18 12:06:12.049311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.496 [2024-11-18 12:06:12.049346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.496 qpair failed and we were unable to recover it. 00:37:46.496 [2024-11-18 12:06:12.049505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.496 [2024-11-18 12:06:12.049554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.496 qpair failed and we were unable to recover it. 00:37:46.496 [2024-11-18 12:06:12.049700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.496 [2024-11-18 12:06:12.049735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.496 qpair failed and we were unable to recover it. 
00:37:46.496 [2024-11-18 12:06:12.049917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.496 [2024-11-18 12:06:12.049956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.496 qpair failed and we were unable to recover it. 00:37:46.496 [2024-11-18 12:06:12.050253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.496 [2024-11-18 12:06:12.050323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.496 qpair failed and we were unable to recover it. 00:37:46.496 [2024-11-18 12:06:12.050479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.496 [2024-11-18 12:06:12.050544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.496 qpair failed and we were unable to recover it. 00:37:46.496 [2024-11-18 12:06:12.050686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.496 [2024-11-18 12:06:12.050719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.496 qpair failed and we were unable to recover it. 00:37:46.496 [2024-11-18 12:06:12.050871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.496 [2024-11-18 12:06:12.050909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.496 qpair failed and we were unable to recover it. 
00:37:46.496 [2024-11-18 12:06:12.051082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.496 [2024-11-18 12:06:12.051120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.496 qpair failed and we were unable to recover it. 00:37:46.496 [2024-11-18 12:06:12.051266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.496 [2024-11-18 12:06:12.051303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.496 qpair failed and we were unable to recover it. 00:37:46.496 [2024-11-18 12:06:12.051506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.496 [2024-11-18 12:06:12.051554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.496 qpair failed and we were unable to recover it. 00:37:46.496 [2024-11-18 12:06:12.051738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.496 [2024-11-18 12:06:12.051786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.496 qpair failed and we were unable to recover it. 00:37:46.496 [2024-11-18 12:06:12.051903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.496 [2024-11-18 12:06:12.051940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.496 qpair failed and we were unable to recover it. 
00:37:46.496 [2024-11-18 12:06:12.052074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.496 [2024-11-18 12:06:12.052113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.496 qpair failed and we were unable to recover it. 00:37:46.496 [2024-11-18 12:06:12.052302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.496 [2024-11-18 12:06:12.052340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.496 qpair failed and we were unable to recover it. 00:37:46.496 [2024-11-18 12:06:12.052483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.497 [2024-11-18 12:06:12.052546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.497 qpair failed and we were unable to recover it. 00:37:46.497 [2024-11-18 12:06:12.052683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.497 [2024-11-18 12:06:12.052717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.497 qpair failed and we were unable to recover it. 00:37:46.497 [2024-11-18 12:06:12.052870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.497 [2024-11-18 12:06:12.052915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.497 qpair failed and we were unable to recover it. 
00:37:46.497 [2024-11-18 12:06:12.053058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.497 [2024-11-18 12:06:12.053096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.497 qpair failed and we were unable to recover it. 00:37:46.497 [2024-11-18 12:06:12.053236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.497 [2024-11-18 12:06:12.053275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.497 qpair failed and we were unable to recover it. 00:37:46.497 [2024-11-18 12:06:12.053437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.497 [2024-11-18 12:06:12.053475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.497 qpair failed and we were unable to recover it. 00:37:46.497 [2024-11-18 12:06:12.053640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.497 [2024-11-18 12:06:12.053687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.497 qpair failed and we were unable to recover it. 00:37:46.497 [2024-11-18 12:06:12.053858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.497 [2024-11-18 12:06:12.053911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.497 qpair failed and we were unable to recover it. 
00:37:46.497 [2024-11-18 12:06:12.054072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.497 [2024-11-18 12:06:12.054123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.497 qpair failed and we were unable to recover it. 00:37:46.497 [2024-11-18 12:06:12.054295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.497 [2024-11-18 12:06:12.054348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.497 qpair failed and we were unable to recover it. 00:37:46.497 [2024-11-18 12:06:12.054517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.497 [2024-11-18 12:06:12.054552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.497 qpair failed and we were unable to recover it. 00:37:46.497 [2024-11-18 12:06:12.054680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.497 [2024-11-18 12:06:12.054714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.497 qpair failed and we were unable to recover it. 00:37:46.497 [2024-11-18 12:06:12.054871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.497 [2024-11-18 12:06:12.054909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.497 qpair failed and we were unable to recover it. 
00:37:46.497 [2024-11-18 12:06:12.055094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.497 [2024-11-18 12:06:12.055163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.497 qpair failed and we were unable to recover it. 00:37:46.497 [2024-11-18 12:06:12.055344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.497 [2024-11-18 12:06:12.055382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.497 qpair failed and we were unable to recover it. 00:37:46.497 [2024-11-18 12:06:12.055533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.497 [2024-11-18 12:06:12.055581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.497 qpair failed and we were unable to recover it. 00:37:46.497 [2024-11-18 12:06:12.055744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.497 [2024-11-18 12:06:12.055813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.497 qpair failed and we were unable to recover it. 00:37:46.497 [2024-11-18 12:06:12.056107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.497 [2024-11-18 12:06:12.056183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.497 qpair failed and we were unable to recover it. 
00:37:46.497 [2024-11-18 12:06:12.056337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.497 [2024-11-18 12:06:12.056376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.497 qpair failed and we were unable to recover it. 00:37:46.497 [2024-11-18 12:06:12.056507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.497 [2024-11-18 12:06:12.056542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.497 qpair failed and we were unable to recover it. 00:37:46.497 [2024-11-18 12:06:12.056651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.497 [2024-11-18 12:06:12.056685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.497 qpair failed and we were unable to recover it. 00:37:46.497 [2024-11-18 12:06:12.056845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.497 [2024-11-18 12:06:12.056885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.497 qpair failed and we were unable to recover it. 00:37:46.497 [2024-11-18 12:06:12.057187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.497 [2024-11-18 12:06:12.057244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.497 qpair failed and we were unable to recover it. 
00:37:46.497 [2024-11-18 12:06:12.057392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.497 [2024-11-18 12:06:12.057431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.497 qpair failed and we were unable to recover it. 00:37:46.497 [2024-11-18 12:06:12.057597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.497 [2024-11-18 12:06:12.057633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.497 qpair failed and we were unable to recover it. 00:37:46.497 [2024-11-18 12:06:12.057808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.497 [2024-11-18 12:06:12.057876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.497 qpair failed and we were unable to recover it. 00:37:46.497 [2024-11-18 12:06:12.058134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.497 [2024-11-18 12:06:12.058195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.497 qpair failed and we were unable to recover it. 00:37:46.497 [2024-11-18 12:06:12.058331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.497 [2024-11-18 12:06:12.058366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.497 qpair failed and we were unable to recover it. 
00:37:46.497 [2024-11-18 12:06:12.058528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.497 [2024-11-18 12:06:12.058564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.497 qpair failed and we were unable to recover it. 00:37:46.497 [2024-11-18 12:06:12.058687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.497 [2024-11-18 12:06:12.058736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.497 qpair failed and we were unable to recover it. 00:37:46.497 [2024-11-18 12:06:12.058862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.497 [2024-11-18 12:06:12.058909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.497 qpair failed and we were unable to recover it. 00:37:46.497 [2024-11-18 12:06:12.059027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.497 [2024-11-18 12:06:12.059062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.497 qpair failed and we were unable to recover it. 00:37:46.497 [2024-11-18 12:06:12.059197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.497 [2024-11-18 12:06:12.059232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.497 qpair failed and we were unable to recover it. 
00:37:46.497 [2024-11-18 12:06:12.059379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.497 [2024-11-18 12:06:12.059414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.497 qpair failed and we were unable to recover it. 00:37:46.497 [2024-11-18 12:06:12.059519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.497 [2024-11-18 12:06:12.059553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.497 qpair failed and we were unable to recover it. 00:37:46.497 [2024-11-18 12:06:12.059659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.497 [2024-11-18 12:06:12.059692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.497 qpair failed and we were unable to recover it. 00:37:46.497 [2024-11-18 12:06:12.059833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.497 [2024-11-18 12:06:12.059873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.497 qpair failed and we were unable to recover it. 00:37:46.497 [2024-11-18 12:06:12.059981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.497 [2024-11-18 12:06:12.060015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.497 qpair failed and we were unable to recover it. 
00:37:46.497 [2024-11-18 12:06:12.060202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.498 [2024-11-18 12:06:12.060257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.498 qpair failed and we were unable to recover it. 00:37:46.498 [2024-11-18 12:06:12.060477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.498 [2024-11-18 12:06:12.060522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.498 qpair failed and we were unable to recover it. 00:37:46.498 [2024-11-18 12:06:12.060671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.498 [2024-11-18 12:06:12.060705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.498 qpair failed and we were unable to recover it. 00:37:46.498 [2024-11-18 12:06:12.060867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.498 [2024-11-18 12:06:12.060905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.498 qpair failed and we were unable to recover it. 00:37:46.498 [2024-11-18 12:06:12.061050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.498 [2024-11-18 12:06:12.061109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.498 qpair failed and we were unable to recover it. 
00:37:46.498 [2024-11-18 12:06:12.061350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.498 [2024-11-18 12:06:12.061402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.498 qpair failed and we were unable to recover it. 00:37:46.498 [2024-11-18 12:06:12.061573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.498 [2024-11-18 12:06:12.061609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.498 qpair failed and we were unable to recover it. 00:37:46.498 [2024-11-18 12:06:12.061739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.498 [2024-11-18 12:06:12.061794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.498 qpair failed and we were unable to recover it. 00:37:46.498 [2024-11-18 12:06:12.061986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.498 [2024-11-18 12:06:12.062043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.498 qpair failed and we were unable to recover it. 00:37:46.498 [2024-11-18 12:06:12.062205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.498 [2024-11-18 12:06:12.062283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.498 qpair failed and we were unable to recover it. 
00:37:46.498 [2024-11-18 12:06:12.062443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.498 [2024-11-18 12:06:12.062477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.498 qpair failed and we were unable to recover it. 00:37:46.498 [2024-11-18 12:06:12.062643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.498 [2024-11-18 12:06:12.062682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.498 qpair failed and we were unable to recover it. 00:37:46.498 [2024-11-18 12:06:12.062850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.498 [2024-11-18 12:06:12.062904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.498 qpair failed and we were unable to recover it. 00:37:46.498 [2024-11-18 12:06:12.063128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.498 [2024-11-18 12:06:12.063188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.498 qpair failed and we were unable to recover it. 00:37:46.498 [2024-11-18 12:06:12.063422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.498 [2024-11-18 12:06:12.063461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.498 qpair failed and we were unable to recover it. 
00:37:46.498 [2024-11-18 12:06:12.063607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.498 [2024-11-18 12:06:12.063642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.498 qpair failed and we were unable to recover it. 00:37:46.498 [2024-11-18 12:06:12.063797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.498 [2024-11-18 12:06:12.063834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.498 qpair failed and we were unable to recover it. 00:37:46.498 [2024-11-18 12:06:12.064020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.498 [2024-11-18 12:06:12.064084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.498 qpair failed and we were unable to recover it. 00:37:46.498 [2024-11-18 12:06:12.064305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.498 [2024-11-18 12:06:12.064343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.498 qpair failed and we were unable to recover it. 00:37:46.498 [2024-11-18 12:06:12.064488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.498 [2024-11-18 12:06:12.064552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.498 qpair failed and we were unable to recover it. 
00:37:46.498 [2024-11-18 12:06:12.064684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.498 [2024-11-18 12:06:12.064718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.498 qpair failed and we were unable to recover it. 00:37:46.498 [2024-11-18 12:06:12.064816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.498 [2024-11-18 12:06:12.064866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.498 qpair failed and we were unable to recover it. 00:37:46.498 [2024-11-18 12:06:12.065053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.498 [2024-11-18 12:06:12.065087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.498 qpair failed and we were unable to recover it. 00:37:46.498 [2024-11-18 12:06:12.065246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.498 [2024-11-18 12:06:12.065283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.498 qpair failed and we were unable to recover it. 00:37:46.498 [2024-11-18 12:06:12.065465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.498 [2024-11-18 12:06:12.065509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.498 qpair failed and we were unable to recover it. 
00:37:46.498 [2024-11-18 12:06:12.065670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.498 [2024-11-18 12:06:12.065718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.498 qpair failed and we were unable to recover it. 00:37:46.498 [2024-11-18 12:06:12.065855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.498 [2024-11-18 12:06:12.065897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.498 qpair failed and we were unable to recover it. 00:37:46.498 [2024-11-18 12:06:12.066061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.498 [2024-11-18 12:06:12.066099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.498 qpair failed and we were unable to recover it. 00:37:46.498 [2024-11-18 12:06:12.066215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.498 [2024-11-18 12:06:12.066253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.498 qpair failed and we were unable to recover it. 00:37:46.498 [2024-11-18 12:06:12.066454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.498 [2024-11-18 12:06:12.066502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.498 qpair failed and we were unable to recover it. 
00:37:46.498 [2024-11-18 12:06:12.066649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.498 [2024-11-18 12:06:12.066697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.498 qpair failed and we were unable to recover it. 00:37:46.498 [2024-11-18 12:06:12.066849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.498 [2024-11-18 12:06:12.066888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.498 qpair failed and we were unable to recover it. 00:37:46.498 [2024-11-18 12:06:12.067136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.499 [2024-11-18 12:06:12.067184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.499 qpair failed and we were unable to recover it. 00:37:46.499 [2024-11-18 12:06:12.067344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.499 [2024-11-18 12:06:12.067381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.499 qpair failed and we were unable to recover it. 00:37:46.499 [2024-11-18 12:06:12.067503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.499 [2024-11-18 12:06:12.067549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.499 qpair failed and we were unable to recover it. 
00:37:46.499 [2024-11-18 12:06:12.067709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.499 [2024-11-18 12:06:12.067761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.499 qpair failed and we were unable to recover it. 00:37:46.499 [2024-11-18 12:06:12.067898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.499 [2024-11-18 12:06:12.067933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.499 qpair failed and we were unable to recover it. 00:37:46.499 [2024-11-18 12:06:12.068085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.499 [2024-11-18 12:06:12.068132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.499 qpair failed and we were unable to recover it. 00:37:46.499 [2024-11-18 12:06:12.068278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.499 [2024-11-18 12:06:12.068316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.499 qpair failed and we were unable to recover it. 00:37:46.499 [2024-11-18 12:06:12.068470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.499 [2024-11-18 12:06:12.068528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.499 qpair failed and we were unable to recover it. 
00:37:46.499 [2024-11-18 12:06:12.068645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.499 [2024-11-18 12:06:12.068682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.499 qpair failed and we were unable to recover it. 00:37:46.499 [2024-11-18 12:06:12.068822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.499 [2024-11-18 12:06:12.068859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.499 qpair failed and we were unable to recover it. 00:37:46.499 [2024-11-18 12:06:12.068994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.499 [2024-11-18 12:06:12.069028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.499 qpair failed and we were unable to recover it. 00:37:46.499 [2024-11-18 12:06:12.069139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.499 [2024-11-18 12:06:12.069175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.499 qpair failed and we were unable to recover it. 00:37:46.499 [2024-11-18 12:06:12.069309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.499 [2024-11-18 12:06:12.069348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.499 qpair failed and we were unable to recover it. 
00:37:46.499 [2024-11-18 12:06:12.069483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.499 [2024-11-18 12:06:12.069524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.499 qpair failed and we were unable to recover it. 00:37:46.499 [2024-11-18 12:06:12.069633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.499 [2024-11-18 12:06:12.069668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.499 qpair failed and we were unable to recover it. 00:37:46.499 [2024-11-18 12:06:12.069810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.499 [2024-11-18 12:06:12.069862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.499 qpair failed and we were unable to recover it. 00:37:46.499 [2024-11-18 12:06:12.070049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.499 [2024-11-18 12:06:12.070104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.499 qpair failed and we were unable to recover it. 00:37:46.499 [2024-11-18 12:06:12.070229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.499 [2024-11-18 12:06:12.070264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.499 qpair failed and we were unable to recover it. 
00:37:46.499 [2024-11-18 12:06:12.070431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.499 [2024-11-18 12:06:12.070464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.499 qpair failed and we were unable to recover it. 00:37:46.499 [2024-11-18 12:06:12.070598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.499 [2024-11-18 12:06:12.070645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.499 qpair failed and we were unable to recover it. 00:37:46.499 [2024-11-18 12:06:12.070818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.499 [2024-11-18 12:06:12.070854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.499 qpair failed and we were unable to recover it. 00:37:46.499 [2024-11-18 12:06:12.070991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.499 [2024-11-18 12:06:12.071025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.499 qpair failed and we were unable to recover it. 00:37:46.499 [2024-11-18 12:06:12.071164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.499 [2024-11-18 12:06:12.071198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.499 qpair failed and we were unable to recover it. 
00:37:46.499 [2024-11-18 12:06:12.071305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.499 [2024-11-18 12:06:12.071339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.499 qpair failed and we were unable to recover it. 00:37:46.499 [2024-11-18 12:06:12.071461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.499 [2024-11-18 12:06:12.071525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.499 qpair failed and we were unable to recover it. 00:37:46.499 [2024-11-18 12:06:12.071688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.499 [2024-11-18 12:06:12.071743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.499 qpair failed and we were unable to recover it. 00:37:46.499 [2024-11-18 12:06:12.072042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.499 [2024-11-18 12:06:12.072113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.499 qpair failed and we were unable to recover it. 00:37:46.499 [2024-11-18 12:06:12.072242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.499 [2024-11-18 12:06:12.072282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.499 qpair failed and we were unable to recover it. 
00:37:46.499 [2024-11-18 12:06:12.072436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.499 [2024-11-18 12:06:12.072474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.499 qpair failed and we were unable to recover it. 00:37:46.499 [2024-11-18 12:06:12.072643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.499 [2024-11-18 12:06:12.072678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.499 qpair failed and we were unable to recover it. 00:37:46.499 [2024-11-18 12:06:12.072864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.499 [2024-11-18 12:06:12.072916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.499 qpair failed and we were unable to recover it. 00:37:46.499 [2024-11-18 12:06:12.073040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.499 [2024-11-18 12:06:12.073092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.499 qpair failed and we were unable to recover it. 00:37:46.499 [2024-11-18 12:06:12.073276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.499 [2024-11-18 12:06:12.073332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.499 qpair failed and we were unable to recover it. 
00:37:46.499 [2024-11-18 12:06:12.073468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.499 [2024-11-18 12:06:12.073508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.499 qpair failed and we were unable to recover it. 00:37:46.499 [2024-11-18 12:06:12.073694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.499 [2024-11-18 12:06:12.073747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.499 qpair failed and we were unable to recover it. 00:37:46.499 [2024-11-18 12:06:12.073907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.499 [2024-11-18 12:06:12.073947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.499 qpair failed and we were unable to recover it. 00:37:46.499 [2024-11-18 12:06:12.074092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.499 [2024-11-18 12:06:12.074129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.499 qpair failed and we were unable to recover it. 00:37:46.499 [2024-11-18 12:06:12.074274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.499 [2024-11-18 12:06:12.074312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.499 qpair failed and we were unable to recover it. 
00:37:46.500 [2024-11-18 12:06:12.074429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.500 [2024-11-18 12:06:12.074466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.500 qpair failed and we were unable to recover it. 00:37:46.500 [2024-11-18 12:06:12.074622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.500 [2024-11-18 12:06:12.074691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.500 qpair failed and we were unable to recover it. 00:37:46.500 [2024-11-18 12:06:12.074890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.500 [2024-11-18 12:06:12.074943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.500 qpair failed and we were unable to recover it. 00:37:46.500 [2024-11-18 12:06:12.075097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.500 [2024-11-18 12:06:12.075148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.500 qpair failed and we were unable to recover it. 00:37:46.500 [2024-11-18 12:06:12.075251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.500 [2024-11-18 12:06:12.075286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.500 qpair failed and we were unable to recover it. 
00:37:46.500 [2024-11-18 12:06:12.075432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.500 [2024-11-18 12:06:12.075467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.500 qpair failed and we were unable to recover it. 00:37:46.500 [2024-11-18 12:06:12.075620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.500 [2024-11-18 12:06:12.075681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.500 qpair failed and we were unable to recover it. 00:37:46.500 [2024-11-18 12:06:12.075827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.500 [2024-11-18 12:06:12.075879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.500 qpair failed and we were unable to recover it. 00:37:46.500 [2024-11-18 12:06:12.076026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.500 [2024-11-18 12:06:12.076063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.500 qpair failed and we were unable to recover it. 00:37:46.500 [2024-11-18 12:06:12.076205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.500 [2024-11-18 12:06:12.076242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.500 qpair failed and we were unable to recover it. 
00:37:46.500 [2024-11-18 12:06:12.076372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.500 [2024-11-18 12:06:12.076406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.500 qpair failed and we were unable to recover it. 00:37:46.500 [2024-11-18 12:06:12.076535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.500 [2024-11-18 12:06:12.076569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.500 qpair failed and we were unable to recover it. 00:37:46.500 [2024-11-18 12:06:12.076699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.500 [2024-11-18 12:06:12.076736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.500 qpair failed and we were unable to recover it. 00:37:46.500 [2024-11-18 12:06:12.076879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.500 [2024-11-18 12:06:12.076916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.500 qpair failed and we were unable to recover it. 00:37:46.500 [2024-11-18 12:06:12.077052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.500 [2024-11-18 12:06:12.077096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.500 qpair failed and we were unable to recover it. 
00:37:46.500 [2024-11-18 12:06:12.077268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.500 [2024-11-18 12:06:12.077323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.500 qpair failed and we were unable to recover it. 00:37:46.500 [2024-11-18 12:06:12.077451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.500 [2024-11-18 12:06:12.077506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.500 qpair failed and we were unable to recover it. 00:37:46.500 [2024-11-18 12:06:12.077618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.500 [2024-11-18 12:06:12.077673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.500 qpair failed and we were unable to recover it. 00:37:46.500 [2024-11-18 12:06:12.077824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.500 [2024-11-18 12:06:12.077862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.500 qpair failed and we were unable to recover it. 00:37:46.500 [2024-11-18 12:06:12.078066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.500 [2024-11-18 12:06:12.078104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.500 qpair failed and we were unable to recover it. 
00:37:46.500 [2024-11-18 12:06:12.078216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.500 [2024-11-18 12:06:12.078254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.500 qpair failed and we were unable to recover it. 00:37:46.500 [2024-11-18 12:06:12.078431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.500 [2024-11-18 12:06:12.078470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.500 qpair failed and we were unable to recover it. 00:37:46.500 [2024-11-18 12:06:12.078617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.500 [2024-11-18 12:06:12.078657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.500 qpair failed and we were unable to recover it. 00:37:46.500 [2024-11-18 12:06:12.078865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.500 [2024-11-18 12:06:12.078918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.500 qpair failed and we were unable to recover it. 00:37:46.500 [2024-11-18 12:06:12.079071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.500 [2024-11-18 12:06:12.079114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.500 qpair failed and we were unable to recover it. 
00:37:46.500 [2024-11-18 12:06:12.079293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.500 [2024-11-18 12:06:12.079345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.500 qpair failed and we were unable to recover it. 00:37:46.500 [2024-11-18 12:06:12.079503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.500 [2024-11-18 12:06:12.079538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.500 qpair failed and we were unable to recover it. 00:37:46.500 [2024-11-18 12:06:12.079677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.500 [2024-11-18 12:06:12.079711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.500 qpair failed and we were unable to recover it. 00:37:46.500 [2024-11-18 12:06:12.079935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.500 [2024-11-18 12:06:12.080007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.500 qpair failed and we were unable to recover it. 00:37:46.500 [2024-11-18 12:06:12.080326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.500 [2024-11-18 12:06:12.080386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.500 qpair failed and we were unable to recover it. 
00:37:46.500 [2024-11-18 12:06:12.080509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.500 [2024-11-18 12:06:12.080564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.500 qpair failed and we were unable to recover it. 00:37:46.500 [2024-11-18 12:06:12.080730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.500 [2024-11-18 12:06:12.080764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.500 qpair failed and we were unable to recover it. 00:37:46.500 [2024-11-18 12:06:12.080980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.500 [2024-11-18 12:06:12.081018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.500 qpair failed and we were unable to recover it. 00:37:46.500 [2024-11-18 12:06:12.081233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.500 [2024-11-18 12:06:12.081267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.500 qpair failed and we were unable to recover it. 00:37:46.500 [2024-11-18 12:06:12.081398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.500 [2024-11-18 12:06:12.081448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.500 qpair failed and we were unable to recover it. 
00:37:46.500 [2024-11-18 12:06:12.081589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.500 [2024-11-18 12:06:12.081623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.500 qpair failed and we were unable to recover it. 00:37:46.500 [2024-11-18 12:06:12.081730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.500 [2024-11-18 12:06:12.081763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.500 qpair failed and we were unable to recover it. 00:37:46.500 [2024-11-18 12:06:12.081923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.501 [2024-11-18 12:06:12.081960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.501 qpair failed and we were unable to recover it. 00:37:46.501 [2024-11-18 12:06:12.082158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.501 [2024-11-18 12:06:12.082195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.501 qpair failed and we were unable to recover it. 00:37:46.501 [2024-11-18 12:06:12.082368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.501 [2024-11-18 12:06:12.082405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.501 qpair failed and we were unable to recover it. 
00:37:46.501 [2024-11-18 12:06:12.082548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.501 [2024-11-18 12:06:12.082582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.501 qpair failed and we were unable to recover it. 00:37:46.501 [2024-11-18 12:06:12.082737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.501 [2024-11-18 12:06:12.082786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.501 qpair failed and we were unable to recover it. 00:37:46.501 [2024-11-18 12:06:12.082942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.501 [2024-11-18 12:06:12.082997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.501 qpair failed and we were unable to recover it. 00:37:46.501 [2024-11-18 12:06:12.083150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.501 [2024-11-18 12:06:12.083198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.501 qpair failed and we were unable to recover it. 00:37:46.501 [2024-11-18 12:06:12.083332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.501 [2024-11-18 12:06:12.083367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.501 qpair failed and we were unable to recover it. 
00:37:46.501 [2024-11-18 12:06:12.083517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.501 [2024-11-18 12:06:12.083566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.501 qpair failed and we were unable to recover it. 00:37:46.501 [2024-11-18 12:06:12.083729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.501 [2024-11-18 12:06:12.083765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.501 qpair failed and we were unable to recover it. 00:37:46.501 [2024-11-18 12:06:12.083925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.501 [2024-11-18 12:06:12.083960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.501 qpair failed and we were unable to recover it. 00:37:46.501 [2024-11-18 12:06:12.084073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.501 [2024-11-18 12:06:12.084107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.501 qpair failed and we were unable to recover it. 00:37:46.501 [2024-11-18 12:06:12.084276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.501 [2024-11-18 12:06:12.084314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.501 qpair failed and we were unable to recover it. 
00:37:46.501 [2024-11-18 12:06:12.084465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.501 [2024-11-18 12:06:12.084511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.501 qpair failed and we were unable to recover it. 00:37:46.501 [2024-11-18 12:06:12.084665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.501 [2024-11-18 12:06:12.084700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.501 qpair failed and we were unable to recover it. 00:37:46.501 [2024-11-18 12:06:12.084852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.501 [2024-11-18 12:06:12.084889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.501 qpair failed and we were unable to recover it. 00:37:46.501 [2024-11-18 12:06:12.085010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.501 [2024-11-18 12:06:12.085047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.501 qpair failed and we were unable to recover it. 00:37:46.501 [2024-11-18 12:06:12.085198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.501 [2024-11-18 12:06:12.085243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.501 qpair failed and we were unable to recover it. 
00:37:46.501 [2024-11-18 12:06:12.085433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.501 [2024-11-18 12:06:12.085470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.501 qpair failed and we were unable to recover it.
00:37:46.501 [2024-11-18 12:06:12.085581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.501 [2024-11-18 12:06:12.085616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.501 qpair failed and we were unable to recover it.
00:37:46.501 [2024-11-18 12:06:12.085751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.501 [2024-11-18 12:06:12.085790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.501 qpair failed and we were unable to recover it.
00:37:46.501 [2024-11-18 12:06:12.086052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.501 [2024-11-18 12:06:12.086111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.501 qpair failed and we were unable to recover it.
00:37:46.501 [2024-11-18 12:06:12.086231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.501 [2024-11-18 12:06:12.086269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.501 qpair failed and we were unable to recover it.
00:37:46.501 [2024-11-18 12:06:12.086399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.501 [2024-11-18 12:06:12.086436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.501 qpair failed and we were unable to recover it.
00:37:46.501 [2024-11-18 12:06:12.086602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.501 [2024-11-18 12:06:12.086649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.501 qpair failed and we were unable to recover it.
00:37:46.501 [2024-11-18 12:06:12.086804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.501 [2024-11-18 12:06:12.086857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.501 qpair failed and we were unable to recover it.
00:37:46.501 [2024-11-18 12:06:12.087179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.501 [2024-11-18 12:06:12.087239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.501 qpair failed and we were unable to recover it.
00:37:46.501 [2024-11-18 12:06:12.087376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.501 [2024-11-18 12:06:12.087410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.501 qpair failed and we were unable to recover it.
00:37:46.501 [2024-11-18 12:06:12.087552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.501 [2024-11-18 12:06:12.087586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.501 qpair failed and we were unable to recover it.
00:37:46.501 [2024-11-18 12:06:12.087716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.501 [2024-11-18 12:06:12.087750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.501 qpair failed and we were unable to recover it.
00:37:46.501 [2024-11-18 12:06:12.087866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.501 [2024-11-18 12:06:12.087903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.501 qpair failed and we were unable to recover it.
00:37:46.501 [2024-11-18 12:06:12.088053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.501 [2024-11-18 12:06:12.088090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.501 qpair failed and we were unable to recover it.
00:37:46.501 [2024-11-18 12:06:12.088239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.501 [2024-11-18 12:06:12.088276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.501 qpair failed and we were unable to recover it.
00:37:46.501 [2024-11-18 12:06:12.088430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.501 [2024-11-18 12:06:12.088471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.501 qpair failed and we were unable to recover it.
00:37:46.501 [2024-11-18 12:06:12.088646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.501 [2024-11-18 12:06:12.088685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.501 qpair failed and we were unable to recover it.
00:37:46.501 [2024-11-18 12:06:12.088859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.501 [2024-11-18 12:06:12.088927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.501 qpair failed and we were unable to recover it.
00:37:46.501 [2024-11-18 12:06:12.089158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.501 [2024-11-18 12:06:12.089196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.501 qpair failed and we were unable to recover it.
00:37:46.501 [2024-11-18 12:06:12.089314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.501 [2024-11-18 12:06:12.089351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.501 qpair failed and we were unable to recover it.
00:37:46.501 [2024-11-18 12:06:12.089552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.502 [2024-11-18 12:06:12.089587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.502 qpair failed and we were unable to recover it.
00:37:46.502 [2024-11-18 12:06:12.089722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.502 [2024-11-18 12:06:12.089775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.502 qpair failed and we were unable to recover it.
00:37:46.502 [2024-11-18 12:06:12.089951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.502 [2024-11-18 12:06:12.089988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.502 qpair failed and we were unable to recover it.
00:37:46.502 [2024-11-18 12:06:12.090187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.502 [2024-11-18 12:06:12.090252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.502 qpair failed and we were unable to recover it.
00:37:46.502 [2024-11-18 12:06:12.090487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.502 [2024-11-18 12:06:12.090527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.502 qpair failed and we were unable to recover it.
00:37:46.502 [2024-11-18 12:06:12.090686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.502 [2024-11-18 12:06:12.090720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.502 qpair failed and we were unable to recover it.
00:37:46.502 [2024-11-18 12:06:12.090861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.502 [2024-11-18 12:06:12.090900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.502 qpair failed and we were unable to recover it.
00:37:46.502 [2024-11-18 12:06:12.091206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.502 [2024-11-18 12:06:12.091244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.502 qpair failed and we were unable to recover it.
00:37:46.502 [2024-11-18 12:06:12.091367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.502 [2024-11-18 12:06:12.091405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.502 qpair failed and we were unable to recover it.
00:37:46.502 [2024-11-18 12:06:12.091543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.502 [2024-11-18 12:06:12.091578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.502 qpair failed and we were unable to recover it.
00:37:46.502 [2024-11-18 12:06:12.091709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.502 [2024-11-18 12:06:12.091743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.502 qpair failed and we were unable to recover it.
00:37:46.502 [2024-11-18 12:06:12.091923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.502 [2024-11-18 12:06:12.091961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.502 qpair failed and we were unable to recover it.
00:37:46.502 [2024-11-18 12:06:12.092106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.502 [2024-11-18 12:06:12.092181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.502 qpair failed and we were unable to recover it.
00:37:46.502 [2024-11-18 12:06:12.092386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.502 [2024-11-18 12:06:12.092423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.502 qpair failed and we were unable to recover it.
00:37:46.502 [2024-11-18 12:06:12.092592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.502 [2024-11-18 12:06:12.092627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.502 qpair failed and we were unable to recover it.
00:37:46.502 [2024-11-18 12:06:12.096650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.502 [2024-11-18 12:06:12.096700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.502 qpair failed and we were unable to recover it.
00:37:46.502 [2024-11-18 12:06:12.096827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.502 [2024-11-18 12:06:12.096863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.502 qpair failed and we were unable to recover it.
00:37:46.502 [2024-11-18 12:06:12.097005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.502 [2024-11-18 12:06:12.097059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.502 qpair failed and we were unable to recover it.
00:37:46.502 [2024-11-18 12:06:12.097279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.502 [2024-11-18 12:06:12.097347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.502 qpair failed and we were unable to recover it.
00:37:46.502 [2024-11-18 12:06:12.097507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.502 [2024-11-18 12:06:12.097542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.502 qpair failed and we were unable to recover it.
00:37:46.502 [2024-11-18 12:06:12.097682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.502 [2024-11-18 12:06:12.097716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.502 qpair failed and we were unable to recover it.
00:37:46.502 [2024-11-18 12:06:12.097833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.502 [2024-11-18 12:06:12.097866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.502 qpair failed and we were unable to recover it.
00:37:46.502 [2024-11-18 12:06:12.098022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.502 [2024-11-18 12:06:12.098060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.502 qpair failed and we were unable to recover it.
00:37:46.502 [2024-11-18 12:06:12.098205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.502 [2024-11-18 12:06:12.098242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.502 qpair failed and we were unable to recover it.
00:37:46.502 [2024-11-18 12:06:12.098398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.502 [2024-11-18 12:06:12.098432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.502 qpair failed and we were unable to recover it.
00:37:46.502 [2024-11-18 12:06:12.098579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.502 [2024-11-18 12:06:12.098613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.502 qpair failed and we were unable to recover it.
00:37:46.502 [2024-11-18 12:06:12.098712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.502 [2024-11-18 12:06:12.098764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.502 qpair failed and we were unable to recover it.
00:37:46.502 [2024-11-18 12:06:12.098885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.502 [2024-11-18 12:06:12.098923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.502 qpair failed and we were unable to recover it.
00:37:46.502 [2024-11-18 12:06:12.099071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.502 [2024-11-18 12:06:12.099108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.502 qpair failed and we were unable to recover it.
00:37:46.502 [2024-11-18 12:06:12.099215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.502 [2024-11-18 12:06:12.099252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.502 qpair failed and we were unable to recover it.
00:37:46.502 [2024-11-18 12:06:12.099369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.502 [2024-11-18 12:06:12.099418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.502 qpair failed and we were unable to recover it.
00:37:46.502 [2024-11-18 12:06:12.099604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.502 [2024-11-18 12:06:12.099657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.502 qpair failed and we were unable to recover it.
00:37:46.502 [2024-11-18 12:06:12.099811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.502 [2024-11-18 12:06:12.099864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.502 qpair failed and we were unable to recover it.
00:37:46.502 [2024-11-18 12:06:12.100038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.503 [2024-11-18 12:06:12.100094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.503 qpair failed and we were unable to recover it.
00:37:46.503 [2024-11-18 12:06:12.100278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.503 [2024-11-18 12:06:12.100317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.503 qpair failed and we were unable to recover it.
00:37:46.503 [2024-11-18 12:06:12.100452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.503 [2024-11-18 12:06:12.100487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.503 qpair failed and we were unable to recover it.
00:37:46.503 [2024-11-18 12:06:12.100637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.503 [2024-11-18 12:06:12.100671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.503 qpair failed and we were unable to recover it.
00:37:46.503 [2024-11-18 12:06:12.100797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.503 [2024-11-18 12:06:12.100836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.503 qpair failed and we were unable to recover it.
00:37:46.503 [2024-11-18 12:06:12.101022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.503 [2024-11-18 12:06:12.101083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.503 qpair failed and we were unable to recover it.
00:37:46.503 [2024-11-18 12:06:12.101320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.503 [2024-11-18 12:06:12.101358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.503 qpair failed and we were unable to recover it.
00:37:46.503 [2024-11-18 12:06:12.101505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.503 [2024-11-18 12:06:12.101558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.503 qpair failed and we were unable to recover it.
00:37:46.503 [2024-11-18 12:06:12.101691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.503 [2024-11-18 12:06:12.101724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.503 qpair failed and we were unable to recover it.
00:37:46.503 [2024-11-18 12:06:12.101870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.503 [2024-11-18 12:06:12.101907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.503 qpair failed and we were unable to recover it.
00:37:46.503 [2024-11-18 12:06:12.102082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.503 [2024-11-18 12:06:12.102157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.503 qpair failed and we were unable to recover it.
00:37:46.503 [2024-11-18 12:06:12.102361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.503 [2024-11-18 12:06:12.102401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.503 qpair failed and we were unable to recover it.
00:37:46.503 [2024-11-18 12:06:12.102539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.503 [2024-11-18 12:06:12.102574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.503 qpair failed and we were unable to recover it.
00:37:46.503 [2024-11-18 12:06:12.102704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.503 [2024-11-18 12:06:12.102766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.503 qpair failed and we were unable to recover it.
00:37:46.503 [2024-11-18 12:06:12.102930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.503 [2024-11-18 12:06:12.102985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.503 qpair failed and we were unable to recover it.
00:37:46.503 [2024-11-18 12:06:12.103176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.503 [2024-11-18 12:06:12.103229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.503 qpair failed and we were unable to recover it.
00:37:46.503 [2024-11-18 12:06:12.103380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.503 [2024-11-18 12:06:12.103419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.503 qpair failed and we were unable to recover it.
00:37:46.503 [2024-11-18 12:06:12.103539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.503 [2024-11-18 12:06:12.103573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.503 qpair failed and we were unable to recover it.
00:37:46.503 [2024-11-18 12:06:12.103731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.503 [2024-11-18 12:06:12.103770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.503 qpair failed and we were unable to recover it.
00:37:46.503 [2024-11-18 12:06:12.103941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.503 [2024-11-18 12:06:12.103979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.503 qpair failed and we were unable to recover it.
00:37:46.503 [2024-11-18 12:06:12.104128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.503 [2024-11-18 12:06:12.104208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.503 qpair failed and we were unable to recover it.
00:37:46.503 [2024-11-18 12:06:12.104344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.503 [2024-11-18 12:06:12.104381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.503 qpair failed and we were unable to recover it.
00:37:46.503 [2024-11-18 12:06:12.104539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.503 [2024-11-18 12:06:12.104573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.503 qpair failed and we were unable to recover it.
00:37:46.503 [2024-11-18 12:06:12.104679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.503 [2024-11-18 12:06:12.104712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.503 qpair failed and we were unable to recover it.
00:37:46.503 [2024-11-18 12:06:12.104867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.503 [2024-11-18 12:06:12.104918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.503 qpair failed and we were unable to recover it.
00:37:46.503 [2024-11-18 12:06:12.105104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.503 [2024-11-18 12:06:12.105141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.503 qpair failed and we were unable to recover it.
00:37:46.503 [2024-11-18 12:06:12.105268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.503 [2024-11-18 12:06:12.105301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.503 qpair failed and we were unable to recover it.
00:37:46.503 [2024-11-18 12:06:12.105459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.503 [2024-11-18 12:06:12.105500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.503 qpair failed and we were unable to recover it.
00:37:46.503 [2024-11-18 12:06:12.105639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.503 [2024-11-18 12:06:12.105673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.503 qpair failed and we were unable to recover it.
00:37:46.503 [2024-11-18 12:06:12.105804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.503 [2024-11-18 12:06:12.105856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.503 qpair failed and we were unable to recover it.
00:37:46.503 [2024-11-18 12:06:12.106007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.503 [2024-11-18 12:06:12.106059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.503 qpair failed and we were unable to recover it.
00:37:46.503 [2024-11-18 12:06:12.106311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.503 [2024-11-18 12:06:12.106364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.503 qpair failed and we were unable to recover it.
00:37:46.503 [2024-11-18 12:06:12.106501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.503 [2024-11-18 12:06:12.106556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.503 qpair failed and we were unable to recover it.
00:37:46.503 [2024-11-18 12:06:12.106696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.503 [2024-11-18 12:06:12.106731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.503 qpair failed and we were unable to recover it.
00:37:46.503 [2024-11-18 12:06:12.106851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.503 [2024-11-18 12:06:12.106959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.503 qpair failed and we were unable to recover it.
00:37:46.503 [2024-11-18 12:06:12.107220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.503 [2024-11-18 12:06:12.107279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.503 qpair failed and we were unable to recover it.
00:37:46.503 [2024-11-18 12:06:12.107422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.503 [2024-11-18 12:06:12.107457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.503 qpair failed and we were unable to recover it.
00:37:46.503 [2024-11-18 12:06:12.107588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.503 [2024-11-18 12:06:12.107622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.504 qpair failed and we were unable to recover it.
00:37:46.504 [2024-11-18 12:06:12.107810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.504 [2024-11-18 12:06:12.107848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.504 qpair failed and we were unable to recover it.
00:37:46.504 [2024-11-18 12:06:12.108056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.504 [2024-11-18 12:06:12.108115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.504 qpair failed and we were unable to recover it.
00:37:46.504 [2024-11-18 12:06:12.108349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.504 [2024-11-18 12:06:12.108386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.504 qpair failed and we were unable to recover it.
00:37:46.504 [2024-11-18 12:06:12.108544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.504 [2024-11-18 12:06:12.108578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.504 qpair failed and we were unable to recover it.
00:37:46.504 [2024-11-18 12:06:12.108717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.504 [2024-11-18 12:06:12.108768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.504 qpair failed and we were unable to recover it. 00:37:46.504 [2024-11-18 12:06:12.108975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.504 [2024-11-18 12:06:12.109012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.504 qpair failed and we were unable to recover it. 00:37:46.504 [2024-11-18 12:06:12.109168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.504 [2024-11-18 12:06:12.109225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.504 qpair failed and we were unable to recover it. 00:37:46.504 [2024-11-18 12:06:12.109373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.504 [2024-11-18 12:06:12.109410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.504 qpair failed and we were unable to recover it. 00:37:46.504 [2024-11-18 12:06:12.109566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.504 [2024-11-18 12:06:12.109600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.504 qpair failed and we were unable to recover it. 
00:37:46.504 [2024-11-18 12:06:12.109734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.504 [2024-11-18 12:06:12.109786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.504 qpair failed and we were unable to recover it. 00:37:46.504 [2024-11-18 12:06:12.109903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.504 [2024-11-18 12:06:12.109954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.504 qpair failed and we were unable to recover it. 00:37:46.504 [2024-11-18 12:06:12.110131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.504 [2024-11-18 12:06:12.110186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.504 qpair failed and we were unable to recover it. 00:37:46.504 [2024-11-18 12:06:12.110334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.504 [2024-11-18 12:06:12.110371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.504 qpair failed and we were unable to recover it. 00:37:46.504 [2024-11-18 12:06:12.110582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.504 [2024-11-18 12:06:12.110631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.504 qpair failed and we were unable to recover it. 
00:37:46.504 [2024-11-18 12:06:12.110804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.504 [2024-11-18 12:06:12.110845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.504 qpair failed and we were unable to recover it. 00:37:46.504 [2024-11-18 12:06:12.111019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.504 [2024-11-18 12:06:12.111064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.504 qpair failed and we were unable to recover it. 00:37:46.504 [2024-11-18 12:06:12.111212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.504 [2024-11-18 12:06:12.111258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.504 qpair failed and we were unable to recover it. 00:37:46.504 [2024-11-18 12:06:12.111440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.504 [2024-11-18 12:06:12.111473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.504 qpair failed and we were unable to recover it. 00:37:46.504 [2024-11-18 12:06:12.111597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.504 [2024-11-18 12:06:12.111645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.504 qpair failed and we were unable to recover it. 
00:37:46.504 [2024-11-18 12:06:12.111780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.504 [2024-11-18 12:06:12.111815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.504 qpair failed and we were unable to recover it. 00:37:46.504 [2024-11-18 12:06:12.112064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.504 [2024-11-18 12:06:12.112101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.504 qpair failed and we were unable to recover it. 00:37:46.504 [2024-11-18 12:06:12.112217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.504 [2024-11-18 12:06:12.112255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.504 qpair failed and we were unable to recover it. 00:37:46.504 [2024-11-18 12:06:12.112373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.504 [2024-11-18 12:06:12.112410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.504 qpair failed and we were unable to recover it. 00:37:46.504 [2024-11-18 12:06:12.112595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.504 [2024-11-18 12:06:12.112644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.504 qpair failed and we were unable to recover it. 
00:37:46.504 [2024-11-18 12:06:12.112751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.504 [2024-11-18 12:06:12.112787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.504 qpair failed and we were unable to recover it. 00:37:46.504 [2024-11-18 12:06:12.112942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.504 [2024-11-18 12:06:12.112997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.504 qpair failed and we were unable to recover it. 00:37:46.504 [2024-11-18 12:06:12.113196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.504 [2024-11-18 12:06:12.113250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.504 qpair failed and we were unable to recover it. 00:37:46.504 [2024-11-18 12:06:12.113381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.504 [2024-11-18 12:06:12.113415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.504 qpair failed and we were unable to recover it. 00:37:46.504 [2024-11-18 12:06:12.113576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.504 [2024-11-18 12:06:12.113611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.504 qpair failed and we were unable to recover it. 
00:37:46.504 [2024-11-18 12:06:12.113724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.504 [2024-11-18 12:06:12.113759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.504 qpair failed and we were unable to recover it. 00:37:46.504 [2024-11-18 12:06:12.113931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.504 [2024-11-18 12:06:12.113965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.504 qpair failed and we were unable to recover it. 00:37:46.504 [2024-11-18 12:06:12.114094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.504 [2024-11-18 12:06:12.114128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.504 qpair failed and we were unable to recover it. 00:37:46.504 [2024-11-18 12:06:12.114287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.504 [2024-11-18 12:06:12.114324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.504 qpair failed and we were unable to recover it. 00:37:46.504 [2024-11-18 12:06:12.114486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.504 [2024-11-18 12:06:12.114549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.504 qpair failed and we were unable to recover it. 
00:37:46.504 [2024-11-18 12:06:12.114678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.504 [2024-11-18 12:06:12.114715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.504 qpair failed and we were unable to recover it. 00:37:46.504 [2024-11-18 12:06:12.114955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.504 [2024-11-18 12:06:12.114993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.504 qpair failed and we were unable to recover it. 00:37:46.504 [2024-11-18 12:06:12.115174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.504 [2024-11-18 12:06:12.115236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.505 qpair failed and we were unable to recover it. 00:37:46.505 [2024-11-18 12:06:12.115383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.505 [2024-11-18 12:06:12.115419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.505 qpair failed and we were unable to recover it. 00:37:46.505 [2024-11-18 12:06:12.115588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.505 [2024-11-18 12:06:12.115624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.505 qpair failed and we were unable to recover it. 
00:37:46.505 [2024-11-18 12:06:12.115776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.505 [2024-11-18 12:06:12.115829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.505 qpair failed and we were unable to recover it. 00:37:46.505 [2024-11-18 12:06:12.116007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.505 [2024-11-18 12:06:12.116046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.505 qpair failed and we were unable to recover it. 00:37:46.505 [2024-11-18 12:06:12.116185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.505 [2024-11-18 12:06:12.116223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.505 qpair failed and we were unable to recover it. 00:37:46.505 [2024-11-18 12:06:12.116347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.505 [2024-11-18 12:06:12.116382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.505 qpair failed and we were unable to recover it. 00:37:46.505 [2024-11-18 12:06:12.116545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.505 [2024-11-18 12:06:12.116580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.505 qpair failed and we were unable to recover it. 
00:37:46.505 [2024-11-18 12:06:12.116680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.505 [2024-11-18 12:06:12.116713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.505 qpair failed and we were unable to recover it. 00:37:46.505 [2024-11-18 12:06:12.116844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.505 [2024-11-18 12:06:12.116883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.505 qpair failed and we were unable to recover it. 00:37:46.505 [2024-11-18 12:06:12.116997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.505 [2024-11-18 12:06:12.117034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.505 qpair failed and we were unable to recover it. 00:37:46.505 [2024-11-18 12:06:12.117178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.505 [2024-11-18 12:06:12.117215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.505 qpair failed and we were unable to recover it. 00:37:46.505 [2024-11-18 12:06:12.117358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.505 [2024-11-18 12:06:12.117396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.505 qpair failed and we were unable to recover it. 
00:37:46.505 [2024-11-18 12:06:12.117523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.505 [2024-11-18 12:06:12.117557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.505 qpair failed and we were unable to recover it. 00:37:46.505 [2024-11-18 12:06:12.117717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.505 [2024-11-18 12:06:12.117751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.505 qpair failed and we were unable to recover it. 00:37:46.505 [2024-11-18 12:06:12.117901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.505 [2024-11-18 12:06:12.117939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.505 qpair failed and we were unable to recover it. 00:37:46.505 [2024-11-18 12:06:12.118057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.505 [2024-11-18 12:06:12.118094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.505 qpair failed and we were unable to recover it. 00:37:46.505 [2024-11-18 12:06:12.118246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.505 [2024-11-18 12:06:12.118285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.505 qpair failed and we were unable to recover it. 
00:37:46.505 [2024-11-18 12:06:12.118445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.505 [2024-11-18 12:06:12.118481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.505 qpair failed and we were unable to recover it. 00:37:46.505 [2024-11-18 12:06:12.118667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.505 [2024-11-18 12:06:12.118708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.505 qpair failed and we were unable to recover it. 00:37:46.505 [2024-11-18 12:06:12.118865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.505 [2024-11-18 12:06:12.118917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.505 qpair failed and we were unable to recover it. 00:37:46.505 [2024-11-18 12:06:12.119078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.505 [2024-11-18 12:06:12.119147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.505 qpair failed and we were unable to recover it. 00:37:46.505 [2024-11-18 12:06:12.119246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.505 [2024-11-18 12:06:12.119280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.505 qpair failed and we were unable to recover it. 
00:37:46.505 [2024-11-18 12:06:12.119419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.505 [2024-11-18 12:06:12.119454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.505 qpair failed and we were unable to recover it. 00:37:46.505 [2024-11-18 12:06:12.119608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.505 [2024-11-18 12:06:12.119643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.505 qpair failed and we were unable to recover it. 00:37:46.505 [2024-11-18 12:06:12.119752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.505 [2024-11-18 12:06:12.119787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.505 qpair failed and we were unable to recover it. 00:37:46.505 [2024-11-18 12:06:12.120026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.505 [2024-11-18 12:06:12.120086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.505 qpair failed and we were unable to recover it. 00:37:46.505 [2024-11-18 12:06:12.120272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.505 [2024-11-18 12:06:12.120340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.505 qpair failed and we were unable to recover it. 
00:37:46.505 [2024-11-18 12:06:12.120485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.505 [2024-11-18 12:06:12.120550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.505 qpair failed and we were unable to recover it. 00:37:46.505 [2024-11-18 12:06:12.120656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.505 [2024-11-18 12:06:12.120690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.505 qpair failed and we were unable to recover it. 00:37:46.505 [2024-11-18 12:06:12.120823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.505 [2024-11-18 12:06:12.120875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.505 qpair failed and we were unable to recover it. 00:37:46.505 [2024-11-18 12:06:12.121112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.505 [2024-11-18 12:06:12.121168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.505 qpair failed and we were unable to recover it. 00:37:46.505 [2024-11-18 12:06:12.121310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.505 [2024-11-18 12:06:12.121349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.505 qpair failed and we were unable to recover it. 
00:37:46.505 [2024-11-18 12:06:12.121550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.505 [2024-11-18 12:06:12.121585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.505 qpair failed and we were unable to recover it. 00:37:46.505 [2024-11-18 12:06:12.121688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.505 [2024-11-18 12:06:12.121721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.505 qpair failed and we were unable to recover it. 00:37:46.505 [2024-11-18 12:06:12.121869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.505 [2024-11-18 12:06:12.121920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.505 qpair failed and we were unable to recover it. 00:37:46.505 [2024-11-18 12:06:12.122063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.505 [2024-11-18 12:06:12.122100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.505 qpair failed and we were unable to recover it. 00:37:46.505 [2024-11-18 12:06:12.122243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.505 [2024-11-18 12:06:12.122280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.505 qpair failed and we were unable to recover it. 
00:37:46.505 [2024-11-18 12:06:12.122417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.506 [2024-11-18 12:06:12.122450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.506 qpair failed and we were unable to recover it. 00:37:46.506 [2024-11-18 12:06:12.122591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.506 [2024-11-18 12:06:12.122625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.506 qpair failed and we were unable to recover it. 00:37:46.506 [2024-11-18 12:06:12.122768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.506 [2024-11-18 12:06:12.122817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.506 qpair failed and we were unable to recover it. 00:37:46.506 [2024-11-18 12:06:12.123006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.506 [2024-11-18 12:06:12.123047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.506 qpair failed and we were unable to recover it. 00:37:46.506 [2024-11-18 12:06:12.123192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.506 [2024-11-18 12:06:12.123231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.506 qpair failed and we were unable to recover it. 
00:37:46.506 [2024-11-18 12:06:12.123344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.506 [2024-11-18 12:06:12.123394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.506 qpair failed and we were unable to recover it. 00:37:46.506 [2024-11-18 12:06:12.123542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.506 [2024-11-18 12:06:12.123590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.506 qpair failed and we were unable to recover it. 00:37:46.506 [2024-11-18 12:06:12.123733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.506 [2024-11-18 12:06:12.123769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.506 qpair failed and we were unable to recover it. 00:37:46.506 [2024-11-18 12:06:12.123965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.506 [2024-11-18 12:06:12.124041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.506 qpair failed and we were unable to recover it. 00:37:46.506 [2024-11-18 12:06:12.124189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.506 [2024-11-18 12:06:12.124226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.506 qpair failed and we were unable to recover it. 
00:37:46.506 [2024-11-18 12:06:12.124346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.506 [2024-11-18 12:06:12.124383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.506 qpair failed and we were unable to recover it. 00:37:46.506 [2024-11-18 12:06:12.124540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.506 [2024-11-18 12:06:12.124574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.506 qpair failed and we were unable to recover it. 00:37:46.506 [2024-11-18 12:06:12.124759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.506 [2024-11-18 12:06:12.124798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.506 qpair failed and we were unable to recover it. 00:37:46.506 [2024-11-18 12:06:12.124973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.506 [2024-11-18 12:06:12.125010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.506 qpair failed and we were unable to recover it. 00:37:46.506 [2024-11-18 12:06:12.125167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.506 [2024-11-18 12:06:12.125200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.506 qpair failed and we were unable to recover it. 
00:37:46.506 [2024-11-18 12:06:12.125369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.506 [2024-11-18 12:06:12.125410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.506 qpair failed and we were unable to recover it. 00:37:46.506 [2024-11-18 12:06:12.125600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.506 [2024-11-18 12:06:12.125636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.506 qpair failed and we were unable to recover it. 00:37:46.506 [2024-11-18 12:06:12.125781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.506 [2024-11-18 12:06:12.125818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.506 qpair failed and we were unable to recover it. 00:37:46.506 [2024-11-18 12:06:12.125971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.506 [2024-11-18 12:06:12.126009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.506 qpair failed and we were unable to recover it. 00:37:46.506 [2024-11-18 12:06:12.126156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.506 [2024-11-18 12:06:12.126194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.506 qpair failed and we were unable to recover it. 
00:37:46.506 [2024-11-18 12:06:12.126355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.506 [2024-11-18 12:06:12.126389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.506 qpair failed and we were unable to recover it. 00:37:46.506 [2024-11-18 12:06:12.126525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.506 [2024-11-18 12:06:12.126564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.506 qpair failed and we were unable to recover it. 00:37:46.506 [2024-11-18 12:06:12.126721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.506 [2024-11-18 12:06:12.126784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.506 qpair failed and we were unable to recover it. 00:37:46.506 [2024-11-18 12:06:12.126961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.506 [2024-11-18 12:06:12.127002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.506 qpair failed and we were unable to recover it. 00:37:46.506 [2024-11-18 12:06:12.127178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.506 [2024-11-18 12:06:12.127230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.506 qpair failed and we were unable to recover it. 
00:37:46.506 [2024-11-18 12:06:12.127374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.506 [2024-11-18 12:06:12.127412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.506 qpair failed and we were unable to recover it.
00:37:46.506 [2024-11-18 12:06:12.127548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.506 [2024-11-18 12:06:12.127582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.506 qpair failed and we were unable to recover it.
00:37:46.506 [2024-11-18 12:06:12.127733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.506 [2024-11-18 12:06:12.127767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.506 qpair failed and we were unable to recover it.
00:37:46.506 [2024-11-18 12:06:12.127866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.506 [2024-11-18 12:06:12.127918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.506 qpair failed and we were unable to recover it.
00:37:46.506 [2024-11-18 12:06:12.128060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.506 [2024-11-18 12:06:12.128097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.506 qpair failed and we were unable to recover it.
00:37:46.506 [2024-11-18 12:06:12.128224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.506 [2024-11-18 12:06:12.128274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.506 qpair failed and we were unable to recover it.
00:37:46.506 [2024-11-18 12:06:12.128454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.506 [2024-11-18 12:06:12.128507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.506 qpair failed and we were unable to recover it.
00:37:46.506 [2024-11-18 12:06:12.128676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.506 [2024-11-18 12:06:12.128724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.506 qpair failed and we were unable to recover it.
00:37:46.506 [2024-11-18 12:06:12.128865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.506 [2024-11-18 12:06:12.128901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.506 qpair failed and we were unable to recover it.
00:37:46.506 [2024-11-18 12:06:12.129054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.506 [2024-11-18 12:06:12.129089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.506 qpair failed and we were unable to recover it.
00:37:46.506 [2024-11-18 12:06:12.129242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.506 [2024-11-18 12:06:12.129276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.506 qpair failed and we were unable to recover it.
00:37:46.506 [2024-11-18 12:06:12.129431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.506 [2024-11-18 12:06:12.129468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.506 qpair failed and we were unable to recover it.
00:37:46.506 [2024-11-18 12:06:12.129641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.506 [2024-11-18 12:06:12.129676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.506 qpair failed and we were unable to recover it.
00:37:46.506 [2024-11-18 12:06:12.129877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.507 [2024-11-18 12:06:12.129930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.507 qpair failed and we were unable to recover it.
00:37:46.507 [2024-11-18 12:06:12.130067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.507 [2024-11-18 12:06:12.130121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.507 qpair failed and we were unable to recover it.
00:37:46.507 [2024-11-18 12:06:12.130296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.507 [2024-11-18 12:06:12.130335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.507 qpair failed and we were unable to recover it.
00:37:46.507 [2024-11-18 12:06:12.130509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.507 [2024-11-18 12:06:12.130544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.507 qpair failed and we were unable to recover it.
00:37:46.507 [2024-11-18 12:06:12.130701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.507 [2024-11-18 12:06:12.130735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.507 qpair failed and we were unable to recover it.
00:37:46.507 [2024-11-18 12:06:12.130912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.507 [2024-11-18 12:06:12.130986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.507 qpair failed and we were unable to recover it.
00:37:46.507 [2024-11-18 12:06:12.131135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.507 [2024-11-18 12:06:12.131172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.507 qpair failed and we were unable to recover it.
00:37:46.507 [2024-11-18 12:06:12.131376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.507 [2024-11-18 12:06:12.131413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.507 qpair failed and we were unable to recover it.
00:37:46.507 [2024-11-18 12:06:12.131516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.507 [2024-11-18 12:06:12.131567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.507 qpair failed and we were unable to recover it.
00:37:46.507 [2024-11-18 12:06:12.131675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.507 [2024-11-18 12:06:12.131709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.507 qpair failed and we were unable to recover it.
00:37:46.507 [2024-11-18 12:06:12.131850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.507 [2024-11-18 12:06:12.131885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.507 qpair failed and we were unable to recover it.
00:37:46.507 [2024-11-18 12:06:12.132028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.507 [2024-11-18 12:06:12.132092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.507 qpair failed and we were unable to recover it.
00:37:46.507 [2024-11-18 12:06:12.132240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.507 [2024-11-18 12:06:12.132278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.507 qpair failed and we were unable to recover it.
00:37:46.507 [2024-11-18 12:06:12.132452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.507 [2024-11-18 12:06:12.132511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.507 qpair failed and we were unable to recover it.
00:37:46.507 [2024-11-18 12:06:12.132650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.507 [2024-11-18 12:06:12.132687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.507 qpair failed and we were unable to recover it.
00:37:46.507 [2024-11-18 12:06:12.132832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.507 [2024-11-18 12:06:12.132866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.507 qpair failed and we were unable to recover it.
00:37:46.507 [2024-11-18 12:06:12.133024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.507 [2024-11-18 12:06:12.133077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.507 qpair failed and we were unable to recover it.
00:37:46.507 [2024-11-18 12:06:12.133240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.507 [2024-11-18 12:06:12.133292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.507 qpair failed and we were unable to recover it.
00:37:46.507 [2024-11-18 12:06:12.133447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.507 [2024-11-18 12:06:12.133500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.507 qpair failed and we were unable to recover it.
00:37:46.507 [2024-11-18 12:06:12.133650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.507 [2024-11-18 12:06:12.133686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.507 qpair failed and we were unable to recover it.
00:37:46.507 [2024-11-18 12:06:12.133925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.507 [2024-11-18 12:06:12.133996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.507 qpair failed and we were unable to recover it.
00:37:46.507 [2024-11-18 12:06:12.134302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.507 [2024-11-18 12:06:12.134367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.507 qpair failed and we were unable to recover it.
00:37:46.507 [2024-11-18 12:06:12.134540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.507 [2024-11-18 12:06:12.134575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.507 qpair failed and we were unable to recover it.
00:37:46.507 [2024-11-18 12:06:12.134705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.507 [2024-11-18 12:06:12.134745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.507 qpair failed and we were unable to recover it.
00:37:46.507 [2024-11-18 12:06:12.134875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.507 [2024-11-18 12:06:12.134909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.507 qpair failed and we were unable to recover it.
00:37:46.507 [2024-11-18 12:06:12.135057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.507 [2024-11-18 12:06:12.135091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.507 qpair failed and we were unable to recover it.
00:37:46.507 [2024-11-18 12:06:12.135238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.507 [2024-11-18 12:06:12.135271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.507 qpair failed and we were unable to recover it.
00:37:46.507 [2024-11-18 12:06:12.135451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.507 [2024-11-18 12:06:12.135511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.507 qpair failed and we were unable to recover it.
00:37:46.507 [2024-11-18 12:06:12.135663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.507 [2024-11-18 12:06:12.135699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.507 qpair failed and we were unable to recover it.
00:37:46.507 [2024-11-18 12:06:12.135857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.507 [2024-11-18 12:06:12.135911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.507 qpair failed and we were unable to recover it.
00:37:46.507 [2024-11-18 12:06:12.136094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.507 [2024-11-18 12:06:12.136144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.507 qpair failed and we were unable to recover it.
00:37:46.507 [2024-11-18 12:06:12.136315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.507 [2024-11-18 12:06:12.136378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.507 qpair failed and we were unable to recover it.
00:37:46.507 [2024-11-18 12:06:12.136520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.507 [2024-11-18 12:06:12.136556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.507 qpair failed and we were unable to recover it.
00:37:46.508 [2024-11-18 12:06:12.136692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.508 [2024-11-18 12:06:12.136727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.508 qpair failed and we were unable to recover it.
00:37:46.508 [2024-11-18 12:06:12.136866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.508 [2024-11-18 12:06:12.136917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.508 qpair failed and we were unable to recover it.
00:37:46.508 [2024-11-18 12:06:12.137059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.508 [2024-11-18 12:06:12.137097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.508 qpair failed and we were unable to recover it.
00:37:46.508 [2024-11-18 12:06:12.137247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.508 [2024-11-18 12:06:12.137286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.508 qpair failed and we were unable to recover it.
00:37:46.508 [2024-11-18 12:06:12.137500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.508 [2024-11-18 12:06:12.137548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.508 qpair failed and we were unable to recover it.
00:37:46.508 [2024-11-18 12:06:12.137709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.508 [2024-11-18 12:06:12.137757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.508 qpair failed and we were unable to recover it.
00:37:46.508 [2024-11-18 12:06:12.137894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.508 [2024-11-18 12:06:12.137933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.508 qpair failed and we were unable to recover it.
00:37:46.508 [2024-11-18 12:06:12.138076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.508 [2024-11-18 12:06:12.138113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.508 qpair failed and we were unable to recover it.
00:37:46.508 [2024-11-18 12:06:12.138335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.508 [2024-11-18 12:06:12.138373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.508 qpair failed and we were unable to recover it.
00:37:46.508 [2024-11-18 12:06:12.138517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.508 [2024-11-18 12:06:12.138571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.508 qpair failed and we were unable to recover it.
00:37:46.508 [2024-11-18 12:06:12.138713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.508 [2024-11-18 12:06:12.138747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.508 qpair failed and we were unable to recover it.
00:37:46.508 [2024-11-18 12:06:12.138962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.508 [2024-11-18 12:06:12.138999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.508 qpair failed and we were unable to recover it.
00:37:46.508 [2024-11-18 12:06:12.139256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.508 [2024-11-18 12:06:12.139292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.508 qpair failed and we were unable to recover it.
00:37:46.508 [2024-11-18 12:06:12.139451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.508 [2024-11-18 12:06:12.139488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.508 qpair failed and we were unable to recover it.
00:37:46.508 [2024-11-18 12:06:12.139649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.508 [2024-11-18 12:06:12.139697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.508 qpair failed and we were unable to recover it.
00:37:46.508 [2024-11-18 12:06:12.139873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.508 [2024-11-18 12:06:12.139928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.508 qpair failed and we were unable to recover it.
00:37:46.508 [2024-11-18 12:06:12.140086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.508 [2024-11-18 12:06:12.140149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.508 qpair failed and we were unable to recover it.
00:37:46.508 [2024-11-18 12:06:12.140260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.508 [2024-11-18 12:06:12.140295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.508 qpair failed and we were unable to recover it.
00:37:46.508 [2024-11-18 12:06:12.140436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.508 [2024-11-18 12:06:12.140473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.508 qpair failed and we were unable to recover it.
00:37:46.508 [2024-11-18 12:06:12.140619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.508 [2024-11-18 12:06:12.140653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.508 qpair failed and we were unable to recover it.
00:37:46.508 [2024-11-18 12:06:12.140782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.508 [2024-11-18 12:06:12.140817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.508 qpair failed and we were unable to recover it.
00:37:46.508 [2024-11-18 12:06:12.140947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.508 [2024-11-18 12:06:12.140986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.508 qpair failed and we were unable to recover it.
00:37:46.508 [2024-11-18 12:06:12.141198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.508 [2024-11-18 12:06:12.141235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.508 qpair failed and we were unable to recover it.
00:37:46.508 [2024-11-18 12:06:12.141352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.508 [2024-11-18 12:06:12.141389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.508 qpair failed and we were unable to recover it.
00:37:46.508 [2024-11-18 12:06:12.141517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.508 [2024-11-18 12:06:12.141554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.508 qpair failed and we were unable to recover it.
00:37:46.508 [2024-11-18 12:06:12.141686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.508 [2024-11-18 12:06:12.141720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.508 qpair failed and we were unable to recover it.
00:37:46.508 [2024-11-18 12:06:12.141885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.508 [2024-11-18 12:06:12.141938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.508 qpair failed and we were unable to recover it.
00:37:46.508 [2024-11-18 12:06:12.142088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.508 [2024-11-18 12:06:12.142140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.508 qpair failed and we were unable to recover it.
00:37:46.508 [2024-11-18 12:06:12.142243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.508 [2024-11-18 12:06:12.142276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.508 qpair failed and we were unable to recover it.
00:37:46.508 [2024-11-18 12:06:12.142449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.508 [2024-11-18 12:06:12.142502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.508 qpair failed and we were unable to recover it.
00:37:46.508 [2024-11-18 12:06:12.142620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.508 [2024-11-18 12:06:12.142662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.508 qpair failed and we were unable to recover it.
00:37:46.508 [2024-11-18 12:06:12.142821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.508 [2024-11-18 12:06:12.142859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.508 qpair failed and we were unable to recover it.
00:37:46.508 [2024-11-18 12:06:12.142993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.508 [2024-11-18 12:06:12.143091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.508 qpair failed and we were unable to recover it.
00:37:46.508 [2024-11-18 12:06:12.143299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.508 [2024-11-18 12:06:12.143349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.508 qpair failed and we were unable to recover it.
00:37:46.508 [2024-11-18 12:06:12.143473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.508 [2024-11-18 12:06:12.143542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.508 qpair failed and we were unable to recover it.
00:37:46.508 [2024-11-18 12:06:12.143705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.508 [2024-11-18 12:06:12.143738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.508 qpair failed and we were unable to recover it.
00:37:46.508 [2024-11-18 12:06:12.143939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.508 [2024-11-18 12:06:12.144013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.508 qpair failed and we were unable to recover it.
00:37:46.508 [2024-11-18 12:06:12.144202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.508 [2024-11-18 12:06:12.144261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.509 qpair failed and we were unable to recover it.
00:37:46.509 [2024-11-18 12:06:12.144420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.509 [2024-11-18 12:06:12.144453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.509 qpair failed and we were unable to recover it.
00:37:46.509 [2024-11-18 12:06:12.144621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.509 [2024-11-18 12:06:12.144655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.509 qpair failed and we were unable to recover it.
00:37:46.509 [2024-11-18 12:06:12.144779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.509 [2024-11-18 12:06:12.144848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.509 qpair failed and we were unable to recover it.
00:37:46.509 [2024-11-18 12:06:12.145048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.509 [2024-11-18 12:06:12.145114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.509 qpair failed and we were unable to recover it.
00:37:46.509 [2024-11-18 12:06:12.145304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.509 [2024-11-18 12:06:12.145358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.509 qpair failed and we were unable to recover it.
00:37:46.509 [2024-11-18 12:06:12.145501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.509 [2024-11-18 12:06:12.145537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.509 qpair failed and we were unable to recover it.
00:37:46.509 [2024-11-18 12:06:12.145650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.509 [2024-11-18 12:06:12.145685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.509 qpair failed and we were unable to recover it.
00:37:46.509 [2024-11-18 12:06:12.145893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.509 [2024-11-18 12:06:12.145946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.509 qpair failed and we were unable to recover it.
00:37:46.509 [2024-11-18 12:06:12.146161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.509 [2024-11-18 12:06:12.146224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.509 qpair failed and we were unable to recover it.
00:37:46.509 [2024-11-18 12:06:12.146365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.509 [2024-11-18 12:06:12.146402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.509 qpair failed and we were unable to recover it.
00:37:46.509 [2024-11-18 12:06:12.146550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.509 [2024-11-18 12:06:12.146603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.509 qpair failed and we were unable to recover it.
00:37:46.509 [2024-11-18 12:06:12.146724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.509 [2024-11-18 12:06:12.146761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.509 qpair failed and we were unable to recover it. 00:37:46.509 [2024-11-18 12:06:12.146881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.509 [2024-11-18 12:06:12.146918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.509 qpair failed and we were unable to recover it. 00:37:46.509 [2024-11-18 12:06:12.147064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.509 [2024-11-18 12:06:12.147103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.509 qpair failed and we were unable to recover it. 00:37:46.509 [2024-11-18 12:06:12.147227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.509 [2024-11-18 12:06:12.147263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.509 qpair failed and we were unable to recover it. 00:37:46.509 [2024-11-18 12:06:12.147395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.509 [2024-11-18 12:06:12.147429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.509 qpair failed and we were unable to recover it. 
00:37:46.509 [2024-11-18 12:06:12.147604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.509 [2024-11-18 12:06:12.147638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.509 qpair failed and we were unable to recover it. 00:37:46.509 [2024-11-18 12:06:12.147793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.509 [2024-11-18 12:06:12.147831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.509 qpair failed and we were unable to recover it. 00:37:46.509 [2024-11-18 12:06:12.147967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.509 [2024-11-18 12:06:12.148001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.509 qpair failed and we were unable to recover it. 00:37:46.509 [2024-11-18 12:06:12.148118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.509 [2024-11-18 12:06:12.148152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.509 qpair failed and we were unable to recover it. 00:37:46.509 [2024-11-18 12:06:12.148319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.509 [2024-11-18 12:06:12.148353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.509 qpair failed and we were unable to recover it. 
00:37:46.509 [2024-11-18 12:06:12.148486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.509 [2024-11-18 12:06:12.148526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.509 qpair failed and we were unable to recover it. 00:37:46.509 [2024-11-18 12:06:12.148644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.509 [2024-11-18 12:06:12.148678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.509 qpair failed and we were unable to recover it. 00:37:46.509 [2024-11-18 12:06:12.148790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.509 [2024-11-18 12:06:12.148824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.509 qpair failed and we were unable to recover it. 00:37:46.509 [2024-11-18 12:06:12.149020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.509 [2024-11-18 12:06:12.149074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.509 qpair failed and we were unable to recover it. 00:37:46.509 [2024-11-18 12:06:12.149340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.509 [2024-11-18 12:06:12.149400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.509 qpair failed and we were unable to recover it. 
00:37:46.509 [2024-11-18 12:06:12.149575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.509 [2024-11-18 12:06:12.149611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.509 qpair failed and we were unable to recover it. 00:37:46.509 [2024-11-18 12:06:12.149745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.509 [2024-11-18 12:06:12.149800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.509 qpair failed and we were unable to recover it. 00:37:46.509 [2024-11-18 12:06:12.149981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.509 [2024-11-18 12:06:12.150018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.509 qpair failed and we were unable to recover it. 00:37:46.509 [2024-11-18 12:06:12.150207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.509 [2024-11-18 12:06:12.150273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.509 qpair failed and we were unable to recover it. 00:37:46.509 [2024-11-18 12:06:12.150392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.509 [2024-11-18 12:06:12.150430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.509 qpair failed and we were unable to recover it. 
00:37:46.509 [2024-11-18 12:06:12.150573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.509 [2024-11-18 12:06:12.150607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.509 qpair failed and we were unable to recover it. 00:37:46.509 [2024-11-18 12:06:12.150758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.509 [2024-11-18 12:06:12.150801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.509 qpair failed and we were unable to recover it. 00:37:46.509 [2024-11-18 12:06:12.150975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.509 [2024-11-18 12:06:12.151013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.509 qpair failed and we were unable to recover it. 00:37:46.509 [2024-11-18 12:06:12.151153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.509 [2024-11-18 12:06:12.151190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.509 qpair failed and we were unable to recover it. 00:37:46.509 [2024-11-18 12:06:12.151371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.509 [2024-11-18 12:06:12.151419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.509 qpair failed and we were unable to recover it. 
00:37:46.509 [2024-11-18 12:06:12.151569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.509 [2024-11-18 12:06:12.151617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.509 qpair failed and we were unable to recover it. 00:37:46.509 [2024-11-18 12:06:12.151780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.510 [2024-11-18 12:06:12.151847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.510 qpair failed and we were unable to recover it. 00:37:46.510 [2024-11-18 12:06:12.152062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.510 [2024-11-18 12:06:12.152122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.510 qpair failed and we were unable to recover it. 00:37:46.510 [2024-11-18 12:06:12.152375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.510 [2024-11-18 12:06:12.152408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.510 qpair failed and we were unable to recover it. 00:37:46.510 [2024-11-18 12:06:12.152528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.510 [2024-11-18 12:06:12.152561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.510 qpair failed and we were unable to recover it. 
00:37:46.510 [2024-11-18 12:06:12.152695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.510 [2024-11-18 12:06:12.152730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.510 qpair failed and we were unable to recover it. 00:37:46.510 [2024-11-18 12:06:12.152905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.510 [2024-11-18 12:06:12.152985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.510 qpair failed and we were unable to recover it. 00:37:46.510 [2024-11-18 12:06:12.153212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.510 [2024-11-18 12:06:12.153269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.510 qpair failed and we were unable to recover it. 00:37:46.510 [2024-11-18 12:06:12.153385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.510 [2024-11-18 12:06:12.153422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.510 qpair failed and we were unable to recover it. 00:37:46.510 [2024-11-18 12:06:12.153564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.510 [2024-11-18 12:06:12.153614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.510 qpair failed and we were unable to recover it. 
00:37:46.510 [2024-11-18 12:06:12.153779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.510 [2024-11-18 12:06:12.153828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.510 qpair failed and we were unable to recover it. 00:37:46.510 [2024-11-18 12:06:12.153940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.510 [2024-11-18 12:06:12.153976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.510 qpair failed and we were unable to recover it. 00:37:46.510 [2024-11-18 12:06:12.154150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.510 [2024-11-18 12:06:12.154201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.510 qpair failed and we were unable to recover it. 00:37:46.510 [2024-11-18 12:06:12.154321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.510 [2024-11-18 12:06:12.154354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.510 qpair failed and we were unable to recover it. 00:37:46.510 [2024-11-18 12:06:12.154488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.510 [2024-11-18 12:06:12.154529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.510 qpair failed and we were unable to recover it. 
00:37:46.510 [2024-11-18 12:06:12.154664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.510 [2024-11-18 12:06:12.154698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.510 qpair failed and we were unable to recover it. 00:37:46.510 [2024-11-18 12:06:12.154797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.510 [2024-11-18 12:06:12.154848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.510 qpair failed and we were unable to recover it. 00:37:46.510 [2024-11-18 12:06:12.154968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.510 [2024-11-18 12:06:12.155017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.510 qpair failed and we were unable to recover it. 00:37:46.510 [2024-11-18 12:06:12.155177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.510 [2024-11-18 12:06:12.155214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.510 qpair failed and we were unable to recover it. 00:37:46.510 [2024-11-18 12:06:12.155326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.510 [2024-11-18 12:06:12.155376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.510 qpair failed and we were unable to recover it. 
00:37:46.510 [2024-11-18 12:06:12.155515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.510 [2024-11-18 12:06:12.155549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.510 qpair failed and we were unable to recover it. 00:37:46.510 [2024-11-18 12:06:12.155676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.510 [2024-11-18 12:06:12.155710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.510 qpair failed and we were unable to recover it. 00:37:46.510 [2024-11-18 12:06:12.155856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.510 [2024-11-18 12:06:12.155893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.510 qpair failed and we were unable to recover it. 00:37:46.510 [2024-11-18 12:06:12.156037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.510 [2024-11-18 12:06:12.156075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.510 qpair failed and we were unable to recover it. 00:37:46.510 [2024-11-18 12:06:12.156230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.510 [2024-11-18 12:06:12.156271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.510 qpair failed and we were unable to recover it. 
00:37:46.510 [2024-11-18 12:06:12.156453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.510 [2024-11-18 12:06:12.156506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.510 qpair failed and we were unable to recover it. 00:37:46.510 [2024-11-18 12:06:12.156680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.510 [2024-11-18 12:06:12.156716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.510 qpair failed and we were unable to recover it. 00:37:46.510 [2024-11-18 12:06:12.156824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.510 [2024-11-18 12:06:12.156859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.510 qpair failed and we were unable to recover it. 00:37:46.510 [2024-11-18 12:06:12.156986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.510 [2024-11-18 12:06:12.157040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.510 qpair failed and we were unable to recover it. 00:37:46.510 [2024-11-18 12:06:12.157161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.510 [2024-11-18 12:06:12.157209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.510 qpair failed and we were unable to recover it. 
00:37:46.510 [2024-11-18 12:06:12.157370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.510 [2024-11-18 12:06:12.157404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.510 qpair failed and we were unable to recover it. 00:37:46.510 [2024-11-18 12:06:12.157539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.510 [2024-11-18 12:06:12.157588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.510 qpair failed and we were unable to recover it. 00:37:46.510 [2024-11-18 12:06:12.157734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.510 [2024-11-18 12:06:12.157770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.510 qpair failed and we were unable to recover it. 00:37:46.510 [2024-11-18 12:06:12.157908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.510 [2024-11-18 12:06:12.157942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.510 qpair failed and we were unable to recover it. 00:37:46.510 [2024-11-18 12:06:12.158041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.510 [2024-11-18 12:06:12.158094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.510 qpair failed and we were unable to recover it. 
00:37:46.510 [2024-11-18 12:06:12.158323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.510 [2024-11-18 12:06:12.158361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.510 qpair failed and we were unable to recover it. 00:37:46.510 [2024-11-18 12:06:12.158478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.510 [2024-11-18 12:06:12.158534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.510 qpair failed and we were unable to recover it. 00:37:46.510 [2024-11-18 12:06:12.158685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.510 [2024-11-18 12:06:12.158720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.510 qpair failed and we were unable to recover it. 00:37:46.510 [2024-11-18 12:06:12.158920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.510 [2024-11-18 12:06:12.158975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.510 qpair failed and we were unable to recover it. 00:37:46.510 [2024-11-18 12:06:12.159096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.511 [2024-11-18 12:06:12.159134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.511 qpair failed and we were unable to recover it. 
00:37:46.511 [2024-11-18 12:06:12.159279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.511 [2024-11-18 12:06:12.159314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.511 qpair failed and we were unable to recover it. 00:37:46.511 [2024-11-18 12:06:12.159429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.511 [2024-11-18 12:06:12.159464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.511 qpair failed and we were unable to recover it. 00:37:46.511 [2024-11-18 12:06:12.159614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.511 [2024-11-18 12:06:12.159649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.511 qpair failed and we were unable to recover it. 00:37:46.511 [2024-11-18 12:06:12.159759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.511 [2024-11-18 12:06:12.159794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.511 qpair failed and we were unable to recover it. 00:37:46.511 [2024-11-18 12:06:12.159903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.511 [2024-11-18 12:06:12.159938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.511 qpair failed and we were unable to recover it. 
00:37:46.511 [2024-11-18 12:06:12.160071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.511 [2024-11-18 12:06:12.160105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.511 qpair failed and we were unable to recover it. 00:37:46.511 [2024-11-18 12:06:12.160263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.511 [2024-11-18 12:06:12.160297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.511 qpair failed and we were unable to recover it. 00:37:46.511 [2024-11-18 12:06:12.160439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.511 [2024-11-18 12:06:12.160474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.511 qpair failed and we were unable to recover it. 00:37:46.511 [2024-11-18 12:06:12.160629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.511 [2024-11-18 12:06:12.160683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.511 qpair failed and we were unable to recover it. 00:37:46.511 [2024-11-18 12:06:12.160830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.511 [2024-11-18 12:06:12.160883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.511 qpair failed and we were unable to recover it. 
00:37:46.511 [2024-11-18 12:06:12.161111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.511 [2024-11-18 12:06:12.161170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.511 qpair failed and we were unable to recover it. 00:37:46.511 [2024-11-18 12:06:12.161285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.511 [2024-11-18 12:06:12.161323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.511 qpair failed and we were unable to recover it. 00:37:46.511 [2024-11-18 12:06:12.161473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.511 [2024-11-18 12:06:12.161538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.511 qpair failed and we were unable to recover it. 00:37:46.511 [2024-11-18 12:06:12.161649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.511 [2024-11-18 12:06:12.161683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.511 qpair failed and we were unable to recover it. 00:37:46.511 [2024-11-18 12:06:12.161805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.511 [2024-11-18 12:06:12.161843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.511 qpair failed and we were unable to recover it. 
00:37:46.511 [2024-11-18 12:06:12.162011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.511 [2024-11-18 12:06:12.162064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.511 qpair failed and we were unable to recover it. 00:37:46.511 [2024-11-18 12:06:12.162217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.511 [2024-11-18 12:06:12.162268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.511 qpair failed and we were unable to recover it. 00:37:46.511 [2024-11-18 12:06:12.162375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.511 [2024-11-18 12:06:12.162409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.511 qpair failed and we were unable to recover it. 00:37:46.511 [2024-11-18 12:06:12.162569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.511 [2024-11-18 12:06:12.162622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.511 qpair failed and we were unable to recover it. 00:37:46.511 [2024-11-18 12:06:12.162789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.511 [2024-11-18 12:06:12.162825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.511 qpair failed and we were unable to recover it. 
00:37:46.511 [2024-11-18 12:06:12.162942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.511 [2024-11-18 12:06:12.162977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.511 qpair failed and we were unable to recover it. 00:37:46.511 [2024-11-18 12:06:12.163122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.511 [2024-11-18 12:06:12.163156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.511 qpair failed and we were unable to recover it. 00:37:46.511 [2024-11-18 12:06:12.163314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.511 [2024-11-18 12:06:12.163350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.511 qpair failed and we were unable to recover it. 00:37:46.511 [2024-11-18 12:06:12.163479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.511 [2024-11-18 12:06:12.163535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.511 qpair failed and we were unable to recover it. 00:37:46.511 [2024-11-18 12:06:12.163691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.511 [2024-11-18 12:06:12.163739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.511 qpair failed and we were unable to recover it. 
00:37:46.514 [2024-11-18 12:06:12.186448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.514 [2024-11-18 12:06:12.186487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.514 qpair failed and we were unable to recover it. 00:37:46.514 [2024-11-18 12:06:12.186635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.514 [2024-11-18 12:06:12.186683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.514 qpair failed and we were unable to recover it. 00:37:46.514 [2024-11-18 12:06:12.186842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.514 [2024-11-18 12:06:12.186898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.514 qpair failed and we were unable to recover it. 00:37:46.514 [2024-11-18 12:06:12.187049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.514 [2024-11-18 12:06:12.187101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.514 qpair failed and we were unable to recover it. 00:37:46.514 [2024-11-18 12:06:12.187235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.514 [2024-11-18 12:06:12.187269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.514 qpair failed and we were unable to recover it. 
00:37:46.514 [2024-11-18 12:06:12.187410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.514 [2024-11-18 12:06:12.187445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.514 qpair failed and we were unable to recover it. 00:37:46.514 [2024-11-18 12:06:12.187562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.514 [2024-11-18 12:06:12.187597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.514 qpair failed and we were unable to recover it. 00:37:46.514 [2024-11-18 12:06:12.187735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.514 [2024-11-18 12:06:12.187769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.514 qpair failed and we were unable to recover it. 00:37:46.514 [2024-11-18 12:06:12.187912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.514 [2024-11-18 12:06:12.187949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.514 qpair failed and we were unable to recover it. 00:37:46.514 [2024-11-18 12:06:12.188167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.514 [2024-11-18 12:06:12.188228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.514 qpair failed and we were unable to recover it. 
00:37:46.515 [2024-11-18 12:06:12.188359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.515 [2024-11-18 12:06:12.188393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.515 qpair failed and we were unable to recover it. 00:37:46.515 [2024-11-18 12:06:12.188558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.515 [2024-11-18 12:06:12.188592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.515 qpair failed and we were unable to recover it. 00:37:46.515 [2024-11-18 12:06:12.188699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.515 [2024-11-18 12:06:12.188733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.515 qpair failed and we were unable to recover it. 00:37:46.515 [2024-11-18 12:06:12.188873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.515 [2024-11-18 12:06:12.188925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.515 qpair failed and we were unable to recover it. 00:37:46.515 [2024-11-18 12:06:12.189103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.515 [2024-11-18 12:06:12.189140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.515 qpair failed and we were unable to recover it. 
00:37:46.515 [2024-11-18 12:06:12.189279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.515 [2024-11-18 12:06:12.189316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.515 qpair failed and we were unable to recover it. 00:37:46.515 [2024-11-18 12:06:12.189454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.515 [2024-11-18 12:06:12.189516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.515 qpair failed and we were unable to recover it. 00:37:46.515 [2024-11-18 12:06:12.189718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.515 [2024-11-18 12:06:12.189766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.515 qpair failed and we were unable to recover it. 00:37:46.515 [2024-11-18 12:06:12.189966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.515 [2024-11-18 12:06:12.190005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.515 qpair failed and we were unable to recover it. 00:37:46.515 [2024-11-18 12:06:12.190121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.515 [2024-11-18 12:06:12.190158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.515 qpair failed and we were unable to recover it. 
00:37:46.515 [2024-11-18 12:06:12.190280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.515 [2024-11-18 12:06:12.190319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.515 qpair failed and we were unable to recover it. 00:37:46.515 [2024-11-18 12:06:12.190524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.515 [2024-11-18 12:06:12.190578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.515 qpair failed and we were unable to recover it. 00:37:46.515 [2024-11-18 12:06:12.190697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.515 [2024-11-18 12:06:12.190734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.515 qpair failed and we were unable to recover it. 00:37:46.515 [2024-11-18 12:06:12.190964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.515 [2024-11-18 12:06:12.191027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.515 qpair failed and we were unable to recover it. 00:37:46.515 [2024-11-18 12:06:12.191250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.515 [2024-11-18 12:06:12.191303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.515 qpair failed and we were unable to recover it. 
00:37:46.515 [2024-11-18 12:06:12.191471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.515 [2024-11-18 12:06:12.191513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.515 qpair failed and we were unable to recover it. 00:37:46.515 [2024-11-18 12:06:12.191670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.515 [2024-11-18 12:06:12.191723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.515 qpair failed and we were unable to recover it. 00:37:46.515 [2024-11-18 12:06:12.191986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.515 [2024-11-18 12:06:12.192042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.515 qpair failed and we were unable to recover it. 00:37:46.515 [2024-11-18 12:06:12.192296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.515 [2024-11-18 12:06:12.192349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.515 qpair failed and we were unable to recover it. 00:37:46.515 [2024-11-18 12:06:12.192482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.515 [2024-11-18 12:06:12.192534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.515 qpair failed and we were unable to recover it. 
00:37:46.515 [2024-11-18 12:06:12.192689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.515 [2024-11-18 12:06:12.192728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.515 qpair failed and we were unable to recover it. 00:37:46.515 [2024-11-18 12:06:12.192889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.515 [2024-11-18 12:06:12.192942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.515 qpair failed and we were unable to recover it. 00:37:46.515 [2024-11-18 12:06:12.193087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.515 [2024-11-18 12:06:12.193139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.515 qpair failed and we were unable to recover it. 00:37:46.515 [2024-11-18 12:06:12.193271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.515 [2024-11-18 12:06:12.193305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.515 qpair failed and we were unable to recover it. 00:37:46.515 [2024-11-18 12:06:12.193445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.515 [2024-11-18 12:06:12.193479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.515 qpair failed and we were unable to recover it. 
00:37:46.515 [2024-11-18 12:06:12.193623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.515 [2024-11-18 12:06:12.193657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.515 qpair failed and we were unable to recover it. 00:37:46.515 [2024-11-18 12:06:12.193853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.515 [2024-11-18 12:06:12.193918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.515 qpair failed and we were unable to recover it. 00:37:46.515 [2024-11-18 12:06:12.194079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.515 [2024-11-18 12:06:12.194134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.515 qpair failed and we were unable to recover it. 00:37:46.515 [2024-11-18 12:06:12.194258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.515 [2024-11-18 12:06:12.194296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.515 qpair failed and we were unable to recover it. 00:37:46.515 [2024-11-18 12:06:12.194455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.515 [2024-11-18 12:06:12.194496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.515 qpair failed and we were unable to recover it. 
00:37:46.515 [2024-11-18 12:06:12.194608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.515 [2024-11-18 12:06:12.194642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.515 qpair failed and we were unable to recover it. 00:37:46.515 [2024-11-18 12:06:12.194786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.515 [2024-11-18 12:06:12.194837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.515 qpair failed and we were unable to recover it. 00:37:46.515 [2024-11-18 12:06:12.195017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.515 [2024-11-18 12:06:12.195070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.515 qpair failed and we were unable to recover it. 00:37:46.515 [2024-11-18 12:06:12.195304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.515 [2024-11-18 12:06:12.195375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.515 qpair failed and we were unable to recover it. 00:37:46.515 [2024-11-18 12:06:12.195558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.515 [2024-11-18 12:06:12.195607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.515 qpair failed and we were unable to recover it. 
00:37:46.515 [2024-11-18 12:06:12.195744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.515 [2024-11-18 12:06:12.195779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.515 qpair failed and we were unable to recover it. 00:37:46.515 [2024-11-18 12:06:12.195981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.515 [2024-11-18 12:06:12.196048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.515 qpair failed and we were unable to recover it. 00:37:46.515 [2024-11-18 12:06:12.196258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.515 [2024-11-18 12:06:12.196295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.516 qpair failed and we were unable to recover it. 00:37:46.516 [2024-11-18 12:06:12.196450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.516 [2024-11-18 12:06:12.196484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.516 qpair failed and we were unable to recover it. 00:37:46.516 [2024-11-18 12:06:12.196606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.516 [2024-11-18 12:06:12.196640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.516 qpair failed and we were unable to recover it. 
00:37:46.516 [2024-11-18 12:06:12.196739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.516 [2024-11-18 12:06:12.196772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.516 qpair failed and we were unable to recover it. 00:37:46.516 [2024-11-18 12:06:12.196936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.516 [2024-11-18 12:06:12.196995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.516 qpair failed and we were unable to recover it. 00:37:46.516 [2024-11-18 12:06:12.197167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.516 [2024-11-18 12:06:12.197204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.516 qpair failed and we were unable to recover it. 00:37:46.516 [2024-11-18 12:06:12.197368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.516 [2024-11-18 12:06:12.197402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.516 qpair failed and we were unable to recover it. 00:37:46.516 [2024-11-18 12:06:12.197556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.516 [2024-11-18 12:06:12.197604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.516 qpair failed and we were unable to recover it. 
00:37:46.516 [2024-11-18 12:06:12.197723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.516 [2024-11-18 12:06:12.197759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.516 qpair failed and we were unable to recover it. 00:37:46.516 [2024-11-18 12:06:12.197864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.516 [2024-11-18 12:06:12.197918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.516 qpair failed and we were unable to recover it. 00:37:46.516 [2024-11-18 12:06:12.198094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.516 [2024-11-18 12:06:12.198131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.516 qpair failed and we were unable to recover it. 00:37:46.516 [2024-11-18 12:06:12.198314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.516 [2024-11-18 12:06:12.198380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.516 qpair failed and we were unable to recover it. 00:37:46.516 [2024-11-18 12:06:12.198523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.516 [2024-11-18 12:06:12.198561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.516 qpair failed and we were unable to recover it. 
00:37:46.516 [2024-11-18 12:06:12.198743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.516 [2024-11-18 12:06:12.198781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.516 qpair failed and we were unable to recover it. 00:37:46.516 [2024-11-18 12:06:12.198910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.516 [2024-11-18 12:06:12.198947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.516 qpair failed and we were unable to recover it. 00:37:46.516 [2024-11-18 12:06:12.199105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.516 [2024-11-18 12:06:12.199143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.516 qpair failed and we were unable to recover it. 00:37:46.516 [2024-11-18 12:06:12.199286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.516 [2024-11-18 12:06:12.199323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.516 qpair failed and we were unable to recover it. 00:37:46.516 [2024-11-18 12:06:12.199466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.516 [2024-11-18 12:06:12.199538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.516 qpair failed and we were unable to recover it. 
00:37:46.516 [2024-11-18 12:06:12.199671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.516 [2024-11-18 12:06:12.199705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.516 qpair failed and we were unable to recover it. 00:37:46.516 [2024-11-18 12:06:12.199859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.516 [2024-11-18 12:06:12.199896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.516 qpair failed and we were unable to recover it. 00:37:46.516 [2024-11-18 12:06:12.200034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.516 [2024-11-18 12:06:12.200071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.516 qpair failed and we were unable to recover it. 00:37:46.516 [2024-11-18 12:06:12.200217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.516 [2024-11-18 12:06:12.200254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.516 qpair failed and we were unable to recover it. 00:37:46.516 [2024-11-18 12:06:12.200387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.516 [2024-11-18 12:06:12.200423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.516 qpair failed and we were unable to recover it. 
00:37:46.516 [2024-11-18 12:06:12.200560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.516 [2024-11-18 12:06:12.200609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.516 qpair failed and we were unable to recover it. 00:37:46.516 [2024-11-18 12:06:12.200740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.516 [2024-11-18 12:06:12.200787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.516 qpair failed and we were unable to recover it. 00:37:46.516 [2024-11-18 12:06:12.200929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.516 [2024-11-18 12:06:12.200968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.516 qpair failed and we were unable to recover it. 00:37:46.516 [2024-11-18 12:06:12.201158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.516 [2024-11-18 12:06:12.201195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.516 qpair failed and we were unable to recover it. 00:37:46.516 [2024-11-18 12:06:12.201338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.516 [2024-11-18 12:06:12.201375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.516 qpair failed and we were unable to recover it. 
00:37:46.516 [2024-11-18 12:06:12.201529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.516 [2024-11-18 12:06:12.201578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.516 qpair failed and we were unable to recover it. 00:37:46.516 [2024-11-18 12:06:12.201702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.516 [2024-11-18 12:06:12.201740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.516 qpair failed and we were unable to recover it. 00:37:46.516 [2024-11-18 12:06:12.201856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.516 [2024-11-18 12:06:12.201893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.516 qpair failed and we were unable to recover it. 00:37:46.516 [2024-11-18 12:06:12.202008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.516 [2024-11-18 12:06:12.202044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.516 qpair failed and we were unable to recover it. 00:37:46.516 [2024-11-18 12:06:12.202233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.516 [2024-11-18 12:06:12.202298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.516 qpair failed and we were unable to recover it. 
00:37:46.516 [2024-11-18 12:06:12.202485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.516 [2024-11-18 12:06:12.202541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.517 qpair failed and we were unable to recover it. 00:37:46.517 [2024-11-18 12:06:12.202723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.517 [2024-11-18 12:06:12.202775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.517 qpair failed and we were unable to recover it. 00:37:46.517 [2024-11-18 12:06:12.202935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.517 [2024-11-18 12:06:12.202974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.517 qpair failed and we were unable to recover it. 00:37:46.517 [2024-11-18 12:06:12.203207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.517 [2024-11-18 12:06:12.203267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.517 qpair failed and we were unable to recover it. 00:37:46.517 [2024-11-18 12:06:12.203440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.517 [2024-11-18 12:06:12.203476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.517 qpair failed and we were unable to recover it. 
00:37:46.517 [2024-11-18 12:06:12.203619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.517 [2024-11-18 12:06:12.203653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.517 qpair failed and we were unable to recover it.
00:37:46.517 [2024-11-18 12:06:12.203806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.517 [2024-11-18 12:06:12.203843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.517 qpair failed and we were unable to recover it.
00:37:46.517 [2024-11-18 12:06:12.203959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.517 [2024-11-18 12:06:12.203997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.517 qpair failed and we were unable to recover it.
00:37:46.517 [2024-11-18 12:06:12.204200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.517 [2024-11-18 12:06:12.204264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.517 qpair failed and we were unable to recover it.
00:37:46.517 [2024-11-18 12:06:12.204431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.517 [2024-11-18 12:06:12.204469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.517 qpair failed and we were unable to recover it.
00:37:46.517 [2024-11-18 12:06:12.204632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.517 [2024-11-18 12:06:12.204680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.517 qpair failed and we were unable to recover it.
00:37:46.517 [2024-11-18 12:06:12.204897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.517 [2024-11-18 12:06:12.204951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.517 qpair failed and we were unable to recover it.
00:37:46.517 [2024-11-18 12:06:12.205160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.517 [2024-11-18 12:06:12.205232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.517 qpair failed and we were unable to recover it.
00:37:46.517 [2024-11-18 12:06:12.205439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.517 [2024-11-18 12:06:12.205477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.517 qpair failed and we were unable to recover it.
00:37:46.517 [2024-11-18 12:06:12.205611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.517 [2024-11-18 12:06:12.205645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.517 qpair failed and we were unable to recover it.
00:37:46.517 [2024-11-18 12:06:12.205797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.517 [2024-11-18 12:06:12.205834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.517 qpair failed and we were unable to recover it.
00:37:46.517 [2024-11-18 12:06:12.206007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.517 [2024-11-18 12:06:12.206084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.517 qpair failed and we were unable to recover it.
00:37:46.517 [2024-11-18 12:06:12.206231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.517 [2024-11-18 12:06:12.206269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.517 qpair failed and we were unable to recover it.
00:37:46.517 [2024-11-18 12:06:12.206403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.517 [2024-11-18 12:06:12.206457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.517 qpair failed and we were unable to recover it.
00:37:46.517 [2024-11-18 12:06:12.206675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.517 [2024-11-18 12:06:12.206723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.517 qpair failed and we were unable to recover it.
00:37:46.517 [2024-11-18 12:06:12.206914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.517 [2024-11-18 12:06:12.206968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.517 qpair failed and we were unable to recover it.
00:37:46.517 [2024-11-18 12:06:12.207114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.517 [2024-11-18 12:06:12.207166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.517 qpair failed and we were unable to recover it.
00:37:46.517 [2024-11-18 12:06:12.207418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.517 [2024-11-18 12:06:12.207474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.517 qpair failed and we were unable to recover it.
00:37:46.517 [2024-11-18 12:06:12.207630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.517 [2024-11-18 12:06:12.207684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.517 qpair failed and we were unable to recover it.
00:37:46.517 [2024-11-18 12:06:12.207957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.517 [2024-11-18 12:06:12.208015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.517 qpair failed and we were unable to recover it.
00:37:46.517 [2024-11-18 12:06:12.208215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.517 [2024-11-18 12:06:12.208275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.517 qpair failed and we were unable to recover it.
00:37:46.517 [2024-11-18 12:06:12.208428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.517 [2024-11-18 12:06:12.208463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.517 qpair failed and we were unable to recover it.
00:37:46.517 [2024-11-18 12:06:12.208616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.517 [2024-11-18 12:06:12.208651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.517 qpair failed and we were unable to recover it.
00:37:46.517 [2024-11-18 12:06:12.208827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.517 [2024-11-18 12:06:12.208879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.517 qpair failed and we were unable to recover it.
00:37:46.517 [2024-11-18 12:06:12.209140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.517 [2024-11-18 12:06:12.209205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.517 qpair failed and we were unable to recover it.
00:37:46.517 [2024-11-18 12:06:12.209422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.517 [2024-11-18 12:06:12.209482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.517 qpair failed and we were unable to recover it.
00:37:46.517 [2024-11-18 12:06:12.209632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.517 [2024-11-18 12:06:12.209667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.517 qpair failed and we were unable to recover it.
00:37:46.517 [2024-11-18 12:06:12.209835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.517 [2024-11-18 12:06:12.209900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.517 qpair failed and we were unable to recover it.
00:37:46.517 [2024-11-18 12:06:12.210069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.517 [2024-11-18 12:06:12.210128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.517 qpair failed and we were unable to recover it.
00:37:46.517 [2024-11-18 12:06:12.210353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.517 [2024-11-18 12:06:12.210388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.517 qpair failed and we were unable to recover it.
00:37:46.517 [2024-11-18 12:06:12.210562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.517 [2024-11-18 12:06:12.210597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.517 qpair failed and we were unable to recover it.
00:37:46.517 [2024-11-18 12:06:12.210736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.517 [2024-11-18 12:06:12.210786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.517 qpair failed and we were unable to recover it.
00:37:46.517 [2024-11-18 12:06:12.210915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.517 [2024-11-18 12:06:12.210965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.517 qpair failed and we were unable to recover it.
00:37:46.517 [2024-11-18 12:06:12.211158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.518 [2024-11-18 12:06:12.211194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.518 qpair failed and we were unable to recover it.
00:37:46.518 [2024-11-18 12:06:12.211355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.518 [2024-11-18 12:06:12.211388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.518 qpair failed and we were unable to recover it.
00:37:46.518 [2024-11-18 12:06:12.211550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.518 [2024-11-18 12:06:12.211584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.518 qpair failed and we were unable to recover it.
00:37:46.518 [2024-11-18 12:06:12.211706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.518 [2024-11-18 12:06:12.211754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.518 qpair failed and we were unable to recover it.
00:37:46.518 [2024-11-18 12:06:12.211893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.518 [2024-11-18 12:06:12.211932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.518 qpair failed and we were unable to recover it.
00:37:46.518 [2024-11-18 12:06:12.212102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.518 [2024-11-18 12:06:12.212140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.518 qpair failed and we were unable to recover it.
00:37:46.518 [2024-11-18 12:06:12.212262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.518 [2024-11-18 12:06:12.212299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.518 qpair failed and we were unable to recover it.
00:37:46.518 [2024-11-18 12:06:12.212462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.518 [2024-11-18 12:06:12.212542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.518 qpair failed and we were unable to recover it.
00:37:46.518 [2024-11-18 12:06:12.212661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.518 [2024-11-18 12:06:12.212696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.518 qpair failed and we were unable to recover it.
00:37:46.518 [2024-11-18 12:06:12.212836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.518 [2024-11-18 12:06:12.212908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.518 qpair failed and we were unable to recover it.
00:37:46.518 [2024-11-18 12:06:12.213099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.518 [2024-11-18 12:06:12.213164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.518 qpair failed and we were unable to recover it.
00:37:46.518 [2024-11-18 12:06:12.213369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.518 [2024-11-18 12:06:12.213425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.518 qpair failed and we were unable to recover it.
00:37:46.518 [2024-11-18 12:06:12.213552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.518 [2024-11-18 12:06:12.213604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.518 qpair failed and we were unable to recover it.
00:37:46.518 [2024-11-18 12:06:12.213735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.518 [2024-11-18 12:06:12.213783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.518 qpair failed and we were unable to recover it.
00:37:46.518 [2024-11-18 12:06:12.213958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.518 [2024-11-18 12:06:12.214023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.518 qpair failed and we were unable to recover it.
00:37:46.518 [2024-11-18 12:06:12.214233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.518 [2024-11-18 12:06:12.214292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.518 qpair failed and we were unable to recover it.
00:37:46.518 [2024-11-18 12:06:12.214437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.518 [2024-11-18 12:06:12.214472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.518 qpair failed and we were unable to recover it.
00:37:46.518 [2024-11-18 12:06:12.214644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.518 [2024-11-18 12:06:12.214678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.518 qpair failed and we were unable to recover it.
00:37:46.518 [2024-11-18 12:06:12.214840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.518 [2024-11-18 12:06:12.214893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.518 qpair failed and we were unable to recover it.
00:37:46.518 [2024-11-18 12:06:12.215032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.518 [2024-11-18 12:06:12.215066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.518 qpair failed and we were unable to recover it.
00:37:46.518 [2024-11-18 12:06:12.215237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.518 [2024-11-18 12:06:12.215274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.518 qpair failed and we were unable to recover it.
00:37:46.518 [2024-11-18 12:06:12.215412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.518 [2024-11-18 12:06:12.215447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.518 qpair failed and we were unable to recover it.
00:37:46.518 [2024-11-18 12:06:12.215607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.518 [2024-11-18 12:06:12.215655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.518 qpair failed and we were unable to recover it.
00:37:46.518 [2024-11-18 12:06:12.215800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.518 [2024-11-18 12:06:12.215835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.518 qpair failed and we were unable to recover it.
00:37:46.518 [2024-11-18 12:06:12.215978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.518 [2024-11-18 12:06:12.216013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.518 qpair failed and we were unable to recover it.
00:37:46.518 [2024-11-18 12:06:12.216148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.518 [2024-11-18 12:06:12.216183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.518 qpair failed and we were unable to recover it.
00:37:46.518 [2024-11-18 12:06:12.216345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.518 [2024-11-18 12:06:12.216380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.518 qpair failed and we were unable to recover it.
00:37:46.518 [2024-11-18 12:06:12.216503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.518 [2024-11-18 12:06:12.216552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.518 qpair failed and we were unable to recover it.
00:37:46.518 [2024-11-18 12:06:12.216719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.518 [2024-11-18 12:06:12.216755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.518 qpair failed and we were unable to recover it.
00:37:46.518 [2024-11-18 12:06:12.216940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.518 [2024-11-18 12:06:12.216988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.518 qpair failed and we were unable to recover it.
00:37:46.518 [2024-11-18 12:06:12.217122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.518 [2024-11-18 12:06:12.217175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.518 qpair failed and we were unable to recover it.
00:37:46.518 [2024-11-18 12:06:12.217338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.518 [2024-11-18 12:06:12.217373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.518 qpair failed and we were unable to recover it.
00:37:46.518 [2024-11-18 12:06:12.217483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.518 [2024-11-18 12:06:12.217525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.518 qpair failed and we were unable to recover it.
00:37:46.518 [2024-11-18 12:06:12.217685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.518 [2024-11-18 12:06:12.217719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.518 qpair failed and we were unable to recover it.
00:37:46.518 [2024-11-18 12:06:12.217853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.518 [2024-11-18 12:06:12.217887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.518 qpair failed and we were unable to recover it.
00:37:46.518 [2024-11-18 12:06:12.218135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.518 [2024-11-18 12:06:12.218195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.518 qpair failed and we were unable to recover it.
00:37:46.518 [2024-11-18 12:06:12.218364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.518 [2024-11-18 12:06:12.218398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.518 qpair failed and we were unable to recover it.
00:37:46.518 [2024-11-18 12:06:12.218545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.518 [2024-11-18 12:06:12.218579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.518 qpair failed and we were unable to recover it.
00:37:46.519 [2024-11-18 12:06:12.218736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.519 [2024-11-18 12:06:12.218773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.519 qpair failed and we were unable to recover it.
00:37:46.519 [2024-11-18 12:06:12.218925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.519 [2024-11-18 12:06:12.218963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.519 qpair failed and we were unable to recover it.
00:37:46.519 [2024-11-18 12:06:12.219181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.519 [2024-11-18 12:06:12.219235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.519 qpair failed and we were unable to recover it.
00:37:46.519 [2024-11-18 12:06:12.219366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.519 [2024-11-18 12:06:12.219401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.519 qpair failed and we were unable to recover it.
00:37:46.519 [2024-11-18 12:06:12.219590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.519 [2024-11-18 12:06:12.219640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.519 qpair failed and we were unable to recover it.
00:37:46.519 [2024-11-18 12:06:12.219785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.519 [2024-11-18 12:06:12.219838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.519 qpair failed and we were unable to recover it.
00:37:46.519 [2024-11-18 12:06:12.220041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.519 [2024-11-18 12:06:12.220098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.519 qpair failed and we were unable to recover it.
00:37:46.519 [2024-11-18 12:06:12.220363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.519 [2024-11-18 12:06:12.220419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.519 qpair failed and we were unable to recover it.
00:37:46.519 [2024-11-18 12:06:12.220618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.519 [2024-11-18 12:06:12.220653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.519 qpair failed and we were unable to recover it.
00:37:46.519 [2024-11-18 12:06:12.220796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.519 [2024-11-18 12:06:12.220830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.519 qpair failed and we were unable to recover it.
00:37:46.519 [2024-11-18 12:06:12.220956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.519 [2024-11-18 12:06:12.220997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.519 qpair failed and we were unable to recover it.
00:37:46.519 [2024-11-18 12:06:12.221201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.519 [2024-11-18 12:06:12.221259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.519 qpair failed and we were unable to recover it.
00:37:46.519 [2024-11-18 12:06:12.221368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.519 [2024-11-18 12:06:12.221408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.519 qpair failed and we were unable to recover it.
00:37:46.519 [2024-11-18 12:06:12.221567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.519 [2024-11-18 12:06:12.221620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.519 qpair failed and we were unable to recover it.
00:37:46.519 [2024-11-18 12:06:12.221728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.519 [2024-11-18 12:06:12.221762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.519 qpair failed and we were unable to recover it.
00:37:46.519 [2024-11-18 12:06:12.221950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.519 [2024-11-18 12:06:12.221998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.519 qpair failed and we were unable to recover it.
00:37:46.519 [2024-11-18 12:06:12.222138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.519 [2024-11-18 12:06:12.222173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.519 qpair failed and we were unable to recover it.
00:37:46.519 [2024-11-18 12:06:12.222291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.519 [2024-11-18 12:06:12.222339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.519 qpair failed and we were unable to recover it.
00:37:46.519 [2024-11-18 12:06:12.222485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.519 [2024-11-18 12:06:12.222535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.519 qpair failed and we were unable to recover it.
00:37:46.519 [2024-11-18 12:06:12.222646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.519 [2024-11-18 12:06:12.222679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.519 qpair failed and we were unable to recover it.
00:37:46.519 [2024-11-18 12:06:12.222786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.519 [2024-11-18 12:06:12.222820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.519 qpair failed and we were unable to recover it.
00:37:46.519 [2024-11-18 12:06:12.222958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.519 [2024-11-18 12:06:12.222992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.519 qpair failed and we were unable to recover it.
00:37:46.519 [2024-11-18 12:06:12.223127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.519 [2024-11-18 12:06:12.223161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.519 qpair failed and we were unable to recover it.
00:37:46.519 [2024-11-18 12:06:12.223271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.519 [2024-11-18 12:06:12.223306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.519 qpair failed and we were unable to recover it.
00:37:46.519 [2024-11-18 12:06:12.223483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.519 [2024-11-18 12:06:12.223528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.519 qpair failed and we were unable to recover it. 00:37:46.519 [2024-11-18 12:06:12.223651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.519 [2024-11-18 12:06:12.223699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.519 qpair failed and we were unable to recover it. 00:37:46.519 [2024-11-18 12:06:12.223878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.519 [2024-11-18 12:06:12.223932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.519 qpair failed and we were unable to recover it. 00:37:46.519 [2024-11-18 12:06:12.224040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.519 [2024-11-18 12:06:12.224074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.519 qpair failed and we were unable to recover it. 00:37:46.519 [2024-11-18 12:06:12.224206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.519 [2024-11-18 12:06:12.224240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.519 qpair failed and we were unable to recover it. 
00:37:46.519 [2024-11-18 12:06:12.224346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.519 [2024-11-18 12:06:12.224380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.519 qpair failed and we were unable to recover it. 00:37:46.519 [2024-11-18 12:06:12.224557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.519 [2024-11-18 12:06:12.224610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.519 qpair failed and we were unable to recover it. 00:37:46.519 [2024-11-18 12:06:12.224737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.519 [2024-11-18 12:06:12.224784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.519 qpair failed and we were unable to recover it. 00:37:46.519 [2024-11-18 12:06:12.224904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.519 [2024-11-18 12:06:12.224942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.519 qpair failed and we were unable to recover it. 00:37:46.519 [2024-11-18 12:06:12.225104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.519 [2024-11-18 12:06:12.225138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.519 qpair failed and we were unable to recover it. 
00:37:46.519 [2024-11-18 12:06:12.225250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.519 [2024-11-18 12:06:12.225291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.519 qpair failed and we were unable to recover it. 00:37:46.519 [2024-11-18 12:06:12.225426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.519 [2024-11-18 12:06:12.225461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.519 qpair failed and we were unable to recover it. 00:37:46.519 [2024-11-18 12:06:12.225655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.519 [2024-11-18 12:06:12.225693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.519 qpair failed and we were unable to recover it. 00:37:46.519 [2024-11-18 12:06:12.225951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.519 [2024-11-18 12:06:12.226009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.519 qpair failed and we were unable to recover it. 00:37:46.520 [2024-11-18 12:06:12.226121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.520 [2024-11-18 12:06:12.226155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.520 qpair failed and we were unable to recover it. 
00:37:46.520 [2024-11-18 12:06:12.226321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.520 [2024-11-18 12:06:12.226355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.520 qpair failed and we were unable to recover it. 00:37:46.520 [2024-11-18 12:06:12.226495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.520 [2024-11-18 12:06:12.226530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.520 qpair failed and we were unable to recover it. 00:37:46.520 [2024-11-18 12:06:12.226660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.520 [2024-11-18 12:06:12.226712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.520 qpair failed and we were unable to recover it. 00:37:46.520 [2024-11-18 12:06:12.226881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.520 [2024-11-18 12:06:12.226937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.520 qpair failed and we were unable to recover it. 00:37:46.520 [2024-11-18 12:06:12.227149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.520 [2024-11-18 12:06:12.227184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.520 qpair failed and we were unable to recover it. 
00:37:46.520 [2024-11-18 12:06:12.227322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.520 [2024-11-18 12:06:12.227356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.520 qpair failed and we were unable to recover it. 00:37:46.520 [2024-11-18 12:06:12.227463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.520 [2024-11-18 12:06:12.227502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.520 qpair failed and we were unable to recover it. 00:37:46.520 [2024-11-18 12:06:12.227667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.520 [2024-11-18 12:06:12.227700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.520 qpair failed and we were unable to recover it. 00:37:46.520 [2024-11-18 12:06:12.227841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.520 [2024-11-18 12:06:12.227875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.520 qpair failed and we were unable to recover it. 00:37:46.520 [2024-11-18 12:06:12.228062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.520 [2024-11-18 12:06:12.228110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.520 qpair failed and we were unable to recover it. 
00:37:46.520 [2024-11-18 12:06:12.228270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.520 [2024-11-18 12:06:12.228318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.520 qpair failed and we were unable to recover it. 00:37:46.520 [2024-11-18 12:06:12.228465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.520 [2024-11-18 12:06:12.228508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.520 qpair failed and we were unable to recover it. 00:37:46.520 [2024-11-18 12:06:12.228627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.520 [2024-11-18 12:06:12.228661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.520 qpair failed and we were unable to recover it. 00:37:46.520 [2024-11-18 12:06:12.228827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.520 [2024-11-18 12:06:12.228867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.520 qpair failed and we were unable to recover it. 00:37:46.520 [2024-11-18 12:06:12.229039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.520 [2024-11-18 12:06:12.229074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.520 qpair failed and we were unable to recover it. 
00:37:46.520 [2024-11-18 12:06:12.229296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.520 [2024-11-18 12:06:12.229331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.520 qpair failed and we were unable to recover it. 00:37:46.520 [2024-11-18 12:06:12.229463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.520 [2024-11-18 12:06:12.229509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.520 qpair failed and we were unable to recover it. 00:37:46.520 [2024-11-18 12:06:12.229621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.520 [2024-11-18 12:06:12.229656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.520 qpair failed and we were unable to recover it. 00:37:46.520 [2024-11-18 12:06:12.229818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.520 [2024-11-18 12:06:12.229852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.520 qpair failed and we were unable to recover it. 00:37:46.520 [2024-11-18 12:06:12.229986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.520 [2024-11-18 12:06:12.230019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.520 qpair failed and we were unable to recover it. 
00:37:46.520 [2024-11-18 12:06:12.230174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.520 [2024-11-18 12:06:12.230209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.520 qpair failed and we were unable to recover it. 00:37:46.520 [2024-11-18 12:06:12.230314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.520 [2024-11-18 12:06:12.230350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.520 qpair failed and we were unable to recover it. 00:37:46.520 [2024-11-18 12:06:12.230531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.520 [2024-11-18 12:06:12.230580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.520 qpair failed and we were unable to recover it. 00:37:46.520 [2024-11-18 12:06:12.230712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.520 [2024-11-18 12:06:12.230760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.520 qpair failed and we were unable to recover it. 00:37:46.520 [2024-11-18 12:06:12.230925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.520 [2024-11-18 12:06:12.230965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.520 qpair failed and we were unable to recover it. 
00:37:46.520 [2024-11-18 12:06:12.231104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.520 [2024-11-18 12:06:12.231174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.520 qpair failed and we were unable to recover it. 00:37:46.520 [2024-11-18 12:06:12.231295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.520 [2024-11-18 12:06:12.231332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.520 qpair failed and we were unable to recover it. 00:37:46.520 [2024-11-18 12:06:12.231500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.520 [2024-11-18 12:06:12.231535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.520 qpair failed and we were unable to recover it. 00:37:46.520 [2024-11-18 12:06:12.231638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.520 [2024-11-18 12:06:12.231672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.520 qpair failed and we were unable to recover it. 00:37:46.520 [2024-11-18 12:06:12.231875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.520 [2024-11-18 12:06:12.231927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.520 qpair failed and we were unable to recover it. 
00:37:46.520 [2024-11-18 12:06:12.232066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.520 [2024-11-18 12:06:12.232120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.520 qpair failed and we were unable to recover it. 00:37:46.520 [2024-11-18 12:06:12.232254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.520 [2024-11-18 12:06:12.232288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.520 qpair failed and we were unable to recover it. 00:37:46.520 [2024-11-18 12:06:12.232445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.520 [2024-11-18 12:06:12.232482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.520 qpair failed and we were unable to recover it. 00:37:46.520 [2024-11-18 12:06:12.232619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.520 [2024-11-18 12:06:12.232652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.520 qpair failed and we were unable to recover it. 00:37:46.520 [2024-11-18 12:06:12.232792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.520 [2024-11-18 12:06:12.232844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.520 qpair failed and we were unable to recover it. 
00:37:46.520 [2024-11-18 12:06:12.233022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.520 [2024-11-18 12:06:12.233060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.520 qpair failed and we were unable to recover it. 00:37:46.520 [2024-11-18 12:06:12.233260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.521 [2024-11-18 12:06:12.233297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.521 qpair failed and we were unable to recover it. 00:37:46.521 [2024-11-18 12:06:12.233413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.521 [2024-11-18 12:06:12.233452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.521 qpair failed and we were unable to recover it. 00:37:46.521 [2024-11-18 12:06:12.233616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.521 [2024-11-18 12:06:12.233651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.521 qpair failed and we were unable to recover it. 00:37:46.521 [2024-11-18 12:06:12.233783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.521 [2024-11-18 12:06:12.233817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.521 qpair failed and we were unable to recover it. 
00:37:46.521 [2024-11-18 12:06:12.233926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.521 [2024-11-18 12:06:12.233976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.521 qpair failed and we were unable to recover it. 00:37:46.521 [2024-11-18 12:06:12.234122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.521 [2024-11-18 12:06:12.234160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.521 qpair failed and we were unable to recover it. 00:37:46.521 [2024-11-18 12:06:12.234339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.521 [2024-11-18 12:06:12.234376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.521 qpair failed and we were unable to recover it. 00:37:46.521 [2024-11-18 12:06:12.234552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.521 [2024-11-18 12:06:12.234601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.521 qpair failed and we were unable to recover it. 00:37:46.521 [2024-11-18 12:06:12.234746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.521 [2024-11-18 12:06:12.234799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.521 qpair failed and we were unable to recover it. 
00:37:46.521 [2024-11-18 12:06:12.234998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.521 [2024-11-18 12:06:12.235036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.521 qpair failed and we were unable to recover it. 00:37:46.521 [2024-11-18 12:06:12.235166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.521 [2024-11-18 12:06:12.235199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.521 qpair failed and we were unable to recover it. 00:37:46.521 [2024-11-18 12:06:12.235326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.521 [2024-11-18 12:06:12.235363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.521 qpair failed and we were unable to recover it. 00:37:46.521 [2024-11-18 12:06:12.235522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.521 [2024-11-18 12:06:12.235557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.521 qpair failed and we were unable to recover it. 00:37:46.521 [2024-11-18 12:06:12.235672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.521 [2024-11-18 12:06:12.235720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.521 qpair failed and we were unable to recover it. 
00:37:46.521 [2024-11-18 12:06:12.235907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.521 [2024-11-18 12:06:12.235943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.521 qpair failed and we were unable to recover it. 00:37:46.521 [2024-11-18 12:06:12.236194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.521 [2024-11-18 12:06:12.236232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.521 qpair failed and we were unable to recover it. 00:37:46.521 [2024-11-18 12:06:12.236403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.521 [2024-11-18 12:06:12.236441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.521 qpair failed and we were unable to recover it. 00:37:46.521 [2024-11-18 12:06:12.236606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.521 [2024-11-18 12:06:12.236646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.521 qpair failed and we were unable to recover it. 00:37:46.521 [2024-11-18 12:06:12.236781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.521 [2024-11-18 12:06:12.236815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.521 qpair failed and we were unable to recover it. 
00:37:46.521 [2024-11-18 12:06:12.237001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.521 [2024-11-18 12:06:12.237053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.521 qpair failed and we were unable to recover it. 00:37:46.521 [2024-11-18 12:06:12.237217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.521 [2024-11-18 12:06:12.237254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.521 qpair failed and we were unable to recover it. 00:37:46.521 [2024-11-18 12:06:12.237409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.521 [2024-11-18 12:06:12.237443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.521 qpair failed and we were unable to recover it. 00:37:46.521 [2024-11-18 12:06:12.237591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.521 [2024-11-18 12:06:12.237625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.521 qpair failed and we were unable to recover it. 00:37:46.521 [2024-11-18 12:06:12.237730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.521 [2024-11-18 12:06:12.237764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.521 qpair failed and we were unable to recover it. 
00:37:46.521 [2024-11-18 12:06:12.237957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.521 [2024-11-18 12:06:12.237994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.521 qpair failed and we were unable to recover it. 00:37:46.521 [2024-11-18 12:06:12.238196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.521 [2024-11-18 12:06:12.238251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.521 qpair failed and we were unable to recover it. 00:37:46.521 [2024-11-18 12:06:12.238426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.521 [2024-11-18 12:06:12.238463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.521 qpair failed and we were unable to recover it. 00:37:46.521 [2024-11-18 12:06:12.238599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.521 [2024-11-18 12:06:12.238633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.521 qpair failed and we were unable to recover it. 00:37:46.521 [2024-11-18 12:06:12.238748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.521 [2024-11-18 12:06:12.238781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.521 qpair failed and we were unable to recover it. 
00:37:46.521 [2024-11-18 12:06:12.238969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.521 [2024-11-18 12:06:12.239049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.521 qpair failed and we were unable to recover it. 00:37:46.521 [2024-11-18 12:06:12.239219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.521 [2024-11-18 12:06:12.239256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.521 qpair failed and we were unable to recover it. 00:37:46.521 [2024-11-18 12:06:12.239438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.521 [2024-11-18 12:06:12.239475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.521 qpair failed and we were unable to recover it. 00:37:46.521 [2024-11-18 12:06:12.239662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.521 [2024-11-18 12:06:12.239695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.521 qpair failed and we were unable to recover it. 00:37:46.521 [2024-11-18 12:06:12.239813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.521 [2024-11-18 12:06:12.239874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.522 qpair failed and we were unable to recover it. 
00:37:46.522 [2024-11-18 12:06:12.240029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.522 [2024-11-18 12:06:12.240088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.522 qpair failed and we were unable to recover it.
00:37:46.522 [2024-11-18 12:06:12.240228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.522 [2024-11-18 12:06:12.240290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.522 qpair failed and we were unable to recover it.
00:37:46.522 [2024-11-18 12:06:12.240422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.522 [2024-11-18 12:06:12.240455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.522 qpair failed and we were unable to recover it.
00:37:46.522 [2024-11-18 12:06:12.240599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.522 [2024-11-18 12:06:12.240634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.522 qpair failed and we were unable to recover it.
00:37:46.522 [2024-11-18 12:06:12.240828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.522 [2024-11-18 12:06:12.240877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.522 qpair failed and we were unable to recover it.
00:37:46.522 [2024-11-18 12:06:12.240994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.522 [2024-11-18 12:06:12.241031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.522 qpair failed and we were unable to recover it.
00:37:46.522 [2024-11-18 12:06:12.241167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.522 [2024-11-18 12:06:12.241203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.522 qpair failed and we were unable to recover it.
00:37:46.522 [2024-11-18 12:06:12.241315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.522 [2024-11-18 12:06:12.241350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.522 qpair failed and we were unable to recover it.
00:37:46.522 [2024-11-18 12:06:12.241509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.522 [2024-11-18 12:06:12.241557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.522 qpair failed and we were unable to recover it.
00:37:46.522 [2024-11-18 12:06:12.241702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.522 [2024-11-18 12:06:12.241738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.522 qpair failed and we were unable to recover it.
00:37:46.522 [2024-11-18 12:06:12.242005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.522 [2024-11-18 12:06:12.242064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.522 qpair failed and we were unable to recover it.
00:37:46.522 [2024-11-18 12:06:12.242339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.522 [2024-11-18 12:06:12.242377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.522 qpair failed and we were unable to recover it.
00:37:46.522 [2024-11-18 12:06:12.242527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.522 [2024-11-18 12:06:12.242578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.522 qpair failed and we were unable to recover it.
00:37:46.522 [2024-11-18 12:06:12.242708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.522 [2024-11-18 12:06:12.242742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.522 qpair failed and we were unable to recover it.
00:37:46.522 [2024-11-18 12:06:12.242948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.522 [2024-11-18 12:06:12.243023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.522 qpair failed and we were unable to recover it.
00:37:46.522 [2024-11-18 12:06:12.243256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.522 [2024-11-18 12:06:12.243290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.522 qpair failed and we were unable to recover it.
00:37:46.522 [2024-11-18 12:06:12.243468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.522 [2024-11-18 12:06:12.243509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.522 qpair failed and we were unable to recover it.
00:37:46.522 [2024-11-18 12:06:12.243646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.522 [2024-11-18 12:06:12.243681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.522 qpair failed and we were unable to recover it.
00:37:46.522 [2024-11-18 12:06:12.243834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.522 [2024-11-18 12:06:12.243871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.522 qpair failed and we were unable to recover it.
00:37:46.522 [2024-11-18 12:06:12.244075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.522 [2024-11-18 12:06:12.244112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.522 qpair failed and we were unable to recover it.
00:37:46.522 [2024-11-18 12:06:12.244281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.522 [2024-11-18 12:06:12.244318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.522 qpair failed and we were unable to recover it.
00:37:46.522 [2024-11-18 12:06:12.244468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.522 [2024-11-18 12:06:12.244531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.522 qpair failed and we were unable to recover it.
00:37:46.522 [2024-11-18 12:06:12.244644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.522 [2024-11-18 12:06:12.244677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.522 qpair failed and we were unable to recover it.
00:37:46.522 [2024-11-18 12:06:12.244842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.522 [2024-11-18 12:06:12.244881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.522 qpair failed and we were unable to recover it.
00:37:46.522 [2024-11-18 12:06:12.245048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.522 [2024-11-18 12:06:12.245085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.522 qpair failed and we were unable to recover it.
00:37:46.522 [2024-11-18 12:06:12.245214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.522 [2024-11-18 12:06:12.245267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.522 qpair failed and we were unable to recover it.
00:37:46.522 [2024-11-18 12:06:12.245408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.522 [2024-11-18 12:06:12.245446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.522 qpair failed and we were unable to recover it.
00:37:46.522 [2024-11-18 12:06:12.245598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.522 [2024-11-18 12:06:12.245647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.522 qpair failed and we were unable to recover it.
00:37:46.522 [2024-11-18 12:06:12.245806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.522 [2024-11-18 12:06:12.245845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.522 qpair failed and we were unable to recover it.
00:37:46.522 [2024-11-18 12:06:12.246060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.522 [2024-11-18 12:06:12.246118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.522 qpair failed and we were unable to recover it.
00:37:46.522 [2024-11-18 12:06:12.246239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.522 [2024-11-18 12:06:12.246276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.522 qpair failed and we were unable to recover it.
00:37:46.522 [2024-11-18 12:06:12.246431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.522 [2024-11-18 12:06:12.246472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.522 qpair failed and we were unable to recover it.
00:37:46.522 [2024-11-18 12:06:12.246643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.522 [2024-11-18 12:06:12.246690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.522 qpair failed and we were unable to recover it.
00:37:46.522 [2024-11-18 12:06:12.246801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.522 [2024-11-18 12:06:12.246856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.522 qpair failed and we were unable to recover it.
00:37:46.522 [2024-11-18 12:06:12.247048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.522 [2024-11-18 12:06:12.247107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.522 qpair failed and we were unable to recover it.
00:37:46.522 [2024-11-18 12:06:12.247244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.522 [2024-11-18 12:06:12.247278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.522 qpair failed and we were unable to recover it.
00:37:46.522 [2024-11-18 12:06:12.247416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.522 [2024-11-18 12:06:12.247454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.522 qpair failed and we were unable to recover it.
00:37:46.522 [2024-11-18 12:06:12.247634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.523 [2024-11-18 12:06:12.247682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.523 qpair failed and we were unable to recover it.
00:37:46.523 [2024-11-18 12:06:12.247882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.523 [2024-11-18 12:06:12.247935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.523 qpair failed and we were unable to recover it.
00:37:46.523 [2024-11-18 12:06:12.248123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.523 [2024-11-18 12:06:12.248181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.523 qpair failed and we were unable to recover it.
00:37:46.523 [2024-11-18 12:06:12.248343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.523 [2024-11-18 12:06:12.248378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.523 qpair failed and we were unable to recover it.
00:37:46.523 [2024-11-18 12:06:12.248513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.523 [2024-11-18 12:06:12.248547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.523 qpair failed and we were unable to recover it.
00:37:46.523 [2024-11-18 12:06:12.248723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.523 [2024-11-18 12:06:12.248777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.523 qpair failed and we were unable to recover it.
00:37:46.523 [2024-11-18 12:06:12.248951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.523 [2024-11-18 12:06:12.248989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.523 qpair failed and we were unable to recover it.
00:37:46.523 [2024-11-18 12:06:12.249269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.523 [2024-11-18 12:06:12.249337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.523 qpair failed and we were unable to recover it.
00:37:46.523 [2024-11-18 12:06:12.249506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.523 [2024-11-18 12:06:12.249542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.523 qpair failed and we were unable to recover it.
00:37:46.523 [2024-11-18 12:06:12.249673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.523 [2024-11-18 12:06:12.249708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.523 qpair failed and we were unable to recover it.
00:37:46.523 [2024-11-18 12:06:12.249881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.523 [2024-11-18 12:06:12.249934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.523 qpair failed and we were unable to recover it.
00:37:46.523 [2024-11-18 12:06:12.250123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.523 [2024-11-18 12:06:12.250176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.523 qpair failed and we were unable to recover it.
00:37:46.523 [2024-11-18 12:06:12.250332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.523 [2024-11-18 12:06:12.250383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.523 qpair failed and we were unable to recover it.
00:37:46.523 [2024-11-18 12:06:12.250506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.523 [2024-11-18 12:06:12.250541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.523 qpair failed and we were unable to recover it.
00:37:46.523 [2024-11-18 12:06:12.250686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.523 [2024-11-18 12:06:12.250721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.523 qpair failed and we were unable to recover it.
00:37:46.523 [2024-11-18 12:06:12.250887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.523 [2024-11-18 12:06:12.250921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.523 qpair failed and we were unable to recover it.
00:37:46.523 [2024-11-18 12:06:12.251081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.523 [2024-11-18 12:06:12.251149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.523 qpair failed and we were unable to recover it.
00:37:46.523 [2024-11-18 12:06:12.251292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.523 [2024-11-18 12:06:12.251343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.523 qpair failed and we were unable to recover it.
00:37:46.523 [2024-11-18 12:06:12.251485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.523 [2024-11-18 12:06:12.251528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.523 qpair failed and we were unable to recover it.
00:37:46.523 [2024-11-18 12:06:12.251655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.523 [2024-11-18 12:06:12.251693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.523 qpair failed and we were unable to recover it.
00:37:46.523 [2024-11-18 12:06:12.251870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.523 [2024-11-18 12:06:12.251923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.523 qpair failed and we were unable to recover it.
00:37:46.523 [2024-11-18 12:06:12.252098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.523 [2024-11-18 12:06:12.252134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.523 qpair failed and we were unable to recover it.
00:37:46.523 [2024-11-18 12:06:12.252357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.523 [2024-11-18 12:06:12.252411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.523 qpair failed and we were unable to recover it.
00:37:46.523 [2024-11-18 12:06:12.252541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.523 [2024-11-18 12:06:12.252576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.523 qpair failed and we were unable to recover it.
00:37:46.523 [2024-11-18 12:06:12.252755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.523 [2024-11-18 12:06:12.252809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.523 qpair failed and we were unable to recover it.
00:37:46.523 [2024-11-18 12:06:12.252991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.523 [2024-11-18 12:06:12.253030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.523 qpair failed and we were unable to recover it.
00:37:46.523 [2024-11-18 12:06:12.253232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.523 [2024-11-18 12:06:12.253308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.523 qpair failed and we were unable to recover it.
00:37:46.523 [2024-11-18 12:06:12.253470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.523 [2024-11-18 12:06:12.253513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.523 qpair failed and we were unable to recover it.
00:37:46.523 [2024-11-18 12:06:12.253667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.523 [2024-11-18 12:06:12.253701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.523 qpair failed and we were unable to recover it.
00:37:46.523 [2024-11-18 12:06:12.253808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.523 [2024-11-18 12:06:12.253843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.523 qpair failed and we were unable to recover it.
00:37:46.523 [2024-11-18 12:06:12.254025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.523 [2024-11-18 12:06:12.254062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.523 qpair failed and we were unable to recover it.
00:37:46.523 [2024-11-18 12:06:12.254300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.523 [2024-11-18 12:06:12.254338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.523 qpair failed and we were unable to recover it.
00:37:46.523 [2024-11-18 12:06:12.254483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.523 [2024-11-18 12:06:12.254543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.523 qpair failed and we were unable to recover it.
00:37:46.523 [2024-11-18 12:06:12.254677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.523 [2024-11-18 12:06:12.254713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.523 qpair failed and we were unable to recover it.
00:37:46.523 [2024-11-18 12:06:12.254884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.523 [2024-11-18 12:06:12.254944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.523 qpair failed and we were unable to recover it.
00:37:46.523 [2024-11-18 12:06:12.255098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.523 [2024-11-18 12:06:12.255138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.523 qpair failed and we were unable to recover it.
00:37:46.523 [2024-11-18 12:06:12.255276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.523 [2024-11-18 12:06:12.255314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.523 qpair failed and we were unable to recover it.
00:37:46.523 [2024-11-18 12:06:12.255474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.523 [2024-11-18 12:06:12.255513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.523 qpair failed and we were unable to recover it.
00:37:46.524 [2024-11-18 12:06:12.255613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.524 [2024-11-18 12:06:12.255647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.524 qpair failed and we were unable to recover it.
00:37:46.524 [2024-11-18 12:06:12.255785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.524 [2024-11-18 12:06:12.255821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.524 qpair failed and we were unable to recover it.
00:37:46.524 [2024-11-18 12:06:12.255987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.524 [2024-11-18 12:06:12.256025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.524 qpair failed and we were unable to recover it.
00:37:46.524 [2024-11-18 12:06:12.256218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.524 [2024-11-18 12:06:12.256291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.524 qpair failed and we were unable to recover it.
00:37:46.524 [2024-11-18 12:06:12.256447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.524 [2024-11-18 12:06:12.256485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.524 qpair failed and we were unable to recover it.
00:37:46.524 [2024-11-18 12:06:12.256662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.524 [2024-11-18 12:06:12.256698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.524 qpair failed and we were unable to recover it.
00:37:46.524 [2024-11-18 12:06:12.256820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.524 [2024-11-18 12:06:12.256873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.524 qpair failed and we were unable to recover it.
00:37:46.524 [2024-11-18 12:06:12.257056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.524 [2024-11-18 12:06:12.257110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.524 qpair failed and we were unable to recover it.
00:37:46.524 [2024-11-18 12:06:12.257273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.524 [2024-11-18 12:06:12.257307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.524 qpair failed and we were unable to recover it.
00:37:46.524 [2024-11-18 12:06:12.257470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.524 [2024-11-18 12:06:12.257513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.524 qpair failed and we were unable to recover it.
00:37:46.524 [2024-11-18 12:06:12.257658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.524 [2024-11-18 12:06:12.257693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.524 qpair failed and we were unable to recover it.
00:37:46.524 [2024-11-18 12:06:12.257927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.524 [2024-11-18 12:06:12.257988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.524 qpair failed and we were unable to recover it.
00:37:46.524 [2024-11-18 12:06:12.258187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.524 [2024-11-18 12:06:12.258248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.524 qpair failed and we were unable to recover it.
00:37:46.524 [2024-11-18 12:06:12.258383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.524 [2024-11-18 12:06:12.258417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.524 qpair failed and we were unable to recover it.
00:37:46.524 [2024-11-18 12:06:12.258553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.524 [2024-11-18 12:06:12.258588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.524 qpair failed and we were unable to recover it.
00:37:46.524 [2024-11-18 12:06:12.258739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.524 [2024-11-18 12:06:12.258787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.524 qpair failed and we were unable to recover it.
00:37:46.524 [2024-11-18 12:06:12.258931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.524 [2024-11-18 12:06:12.258966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.524 qpair failed and we were unable to recover it.
00:37:46.524 [2024-11-18 12:06:12.259102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.524 [2024-11-18 12:06:12.259139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.524 qpair failed and we were unable to recover it.
00:37:46.524 [2024-11-18 12:06:12.259370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.524 [2024-11-18 12:06:12.259432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.524 qpair failed and we were unable to recover it.
00:37:46.524 [2024-11-18 12:06:12.259588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.524 [2024-11-18 12:06:12.259622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.524 qpair failed and we were unable to recover it.
00:37:46.524 [2024-11-18 12:06:12.259735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.524 [2024-11-18 12:06:12.259768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.524 qpair failed and we were unable to recover it.
00:37:46.524 [2024-11-18 12:06:12.259901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.524 [2024-11-18 12:06:12.259935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.524 qpair failed and we were unable to recover it.
00:37:46.524 [2024-11-18 12:06:12.260054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.524 [2024-11-18 12:06:12.260089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.524 qpair failed and we were unable to recover it.
00:37:46.524 [2024-11-18 12:06:12.260289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.524 [2024-11-18 12:06:12.260353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.524 qpair failed and we were unable to recover it.
00:37:46.524 [2024-11-18 12:06:12.260520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.524 [2024-11-18 12:06:12.260556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.524 qpair failed and we were unable to recover it.
00:37:46.524 [2024-11-18 12:06:12.260721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.524 [2024-11-18 12:06:12.260791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.524 qpair failed and we were unable to recover it.
00:37:46.524 [2024-11-18 12:06:12.261007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.524 [2024-11-18 12:06:12.261067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.524 qpair failed and we were unable to recover it.
00:37:46.524 [2024-11-18 12:06:12.261198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.524 [2024-11-18 12:06:12.261251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.524 qpair failed and we were unable to recover it.
00:37:46.524 [2024-11-18 12:06:12.261427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.524 [2024-11-18 12:06:12.261471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.524 qpair failed and we were unable to recover it.
00:37:46.524 [2024-11-18 12:06:12.261641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.524 [2024-11-18 12:06:12.261676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.524 qpair failed and we were unable to recover it.
00:37:46.524 [2024-11-18 12:06:12.261816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.524 [2024-11-18 12:06:12.261850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.524 qpair failed and we were unable to recover it.
00:37:46.524 [2024-11-18 12:06:12.262009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.524 [2024-11-18 12:06:12.262086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.524 qpair failed and we were unable to recover it.
00:37:46.524 [2024-11-18 12:06:12.262349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.524 [2024-11-18 12:06:12.262404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.524 qpair failed and we were unable to recover it.
00:37:46.524 [2024-11-18 12:06:12.262564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.524 [2024-11-18 12:06:12.262598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.524 qpair failed and we were unable to recover it.
00:37:46.524 [2024-11-18 12:06:12.262736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.524 [2024-11-18 12:06:12.262770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.524 qpair failed and we were unable to recover it.
00:37:46.524 [2024-11-18 12:06:12.262956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.524 [2024-11-18 12:06:12.262993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.524 qpair failed and we were unable to recover it.
00:37:46.524 [2024-11-18 12:06:12.263202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.524 [2024-11-18 12:06:12.263238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.524 qpair failed and we were unable to recover it.
00:37:46.524 [2024-11-18 12:06:12.263385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.524 [2024-11-18 12:06:12.263423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.525 qpair failed and we were unable to recover it.
00:37:46.525 [2024-11-18 12:06:12.263575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.525 [2024-11-18 12:06:12.263623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.525 qpair failed and we were unable to recover it.
00:37:46.525 [2024-11-18 12:06:12.263784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.525 [2024-11-18 12:06:12.263832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.525 qpair failed and we were unable to recover it.
00:37:46.525 [2024-11-18 12:06:12.263992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.525 [2024-11-18 12:06:12.264046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.525 qpair failed and we were unable to recover it. 00:37:46.525 [2024-11-18 12:06:12.264205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.525 [2024-11-18 12:06:12.264258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.525 qpair failed and we were unable to recover it. 00:37:46.525 [2024-11-18 12:06:12.264366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.525 [2024-11-18 12:06:12.264400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.525 qpair failed and we were unable to recover it. 00:37:46.525 [2024-11-18 12:06:12.264557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.525 [2024-11-18 12:06:12.264606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.525 qpair failed and we were unable to recover it. 00:37:46.525 [2024-11-18 12:06:12.264756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.525 [2024-11-18 12:06:12.264790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.525 qpair failed and we were unable to recover it. 
00:37:46.525 [2024-11-18 12:06:12.264894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.525 [2024-11-18 12:06:12.264946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.525 qpair failed and we were unable to recover it. 00:37:46.525 [2024-11-18 12:06:12.265087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.525 [2024-11-18 12:06:12.265124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.525 qpair failed and we were unable to recover it. 00:37:46.525 [2024-11-18 12:06:12.265266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.525 [2024-11-18 12:06:12.265303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.525 qpair failed and we were unable to recover it. 00:37:46.525 [2024-11-18 12:06:12.265452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.525 [2024-11-18 12:06:12.265501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.525 qpair failed and we were unable to recover it. 00:37:46.525 [2024-11-18 12:06:12.265653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.525 [2024-11-18 12:06:12.265691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.525 qpair failed and we were unable to recover it. 
00:37:46.525 [2024-11-18 12:06:12.265871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.525 [2024-11-18 12:06:12.265908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.525 qpair failed and we were unable to recover it. 00:37:46.525 [2024-11-18 12:06:12.266028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.525 [2024-11-18 12:06:12.266064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.525 qpair failed and we were unable to recover it. 00:37:46.525 [2024-11-18 12:06:12.266252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.525 [2024-11-18 12:06:12.266303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.525 qpair failed and we were unable to recover it. 00:37:46.525 [2024-11-18 12:06:12.266446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.525 [2024-11-18 12:06:12.266485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.525 qpair failed and we were unable to recover it. 00:37:46.525 [2024-11-18 12:06:12.266650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.525 [2024-11-18 12:06:12.266689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.525 qpair failed and we were unable to recover it. 
00:37:46.525 [2024-11-18 12:06:12.266888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.525 [2024-11-18 12:06:12.266943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.525 qpair failed and we were unable to recover it. 00:37:46.525 [2024-11-18 12:06:12.267107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.525 [2024-11-18 12:06:12.267179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.525 qpair failed and we were unable to recover it. 00:37:46.525 [2024-11-18 12:06:12.267325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.525 [2024-11-18 12:06:12.267374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.525 qpair failed and we were unable to recover it. 00:37:46.525 [2024-11-18 12:06:12.267506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.525 [2024-11-18 12:06:12.267540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.525 qpair failed and we were unable to recover it. 00:37:46.525 [2024-11-18 12:06:12.267675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.525 [2024-11-18 12:06:12.267709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.525 qpair failed and we were unable to recover it. 
00:37:46.525 [2024-11-18 12:06:12.267860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.525 [2024-11-18 12:06:12.267897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.525 qpair failed and we were unable to recover it. 00:37:46.525 [2024-11-18 12:06:12.268008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.525 [2024-11-18 12:06:12.268045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.525 qpair failed and we were unable to recover it. 00:37:46.525 [2024-11-18 12:06:12.268234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.525 [2024-11-18 12:06:12.268272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.525 qpair failed and we were unable to recover it. 00:37:46.525 [2024-11-18 12:06:12.268413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.525 [2024-11-18 12:06:12.268449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.525 qpair failed and we were unable to recover it. 00:37:46.525 [2024-11-18 12:06:12.268635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.525 [2024-11-18 12:06:12.268683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.525 qpair failed and we were unable to recover it. 
00:37:46.525 [2024-11-18 12:06:12.268814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.525 [2024-11-18 12:06:12.268862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.525 qpair failed and we were unable to recover it. 00:37:46.525 [2024-11-18 12:06:12.269053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.525 [2024-11-18 12:06:12.269108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.525 qpair failed and we were unable to recover it. 00:37:46.525 [2024-11-18 12:06:12.269293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.525 [2024-11-18 12:06:12.269345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.525 qpair failed and we were unable to recover it. 00:37:46.525 [2024-11-18 12:06:12.269482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.525 [2024-11-18 12:06:12.269534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.525 qpair failed and we were unable to recover it. 00:37:46.525 [2024-11-18 12:06:12.269687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.525 [2024-11-18 12:06:12.269740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.525 qpair failed and we were unable to recover it. 
00:37:46.525 [2024-11-18 12:06:12.269927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.526 [2024-11-18 12:06:12.269980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.526 qpair failed and we were unable to recover it. 00:37:46.526 [2024-11-18 12:06:12.270099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.526 [2024-11-18 12:06:12.270152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.526 qpair failed and we were unable to recover it. 00:37:46.526 [2024-11-18 12:06:12.270325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.526 [2024-11-18 12:06:12.270373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.526 qpair failed and we were unable to recover it. 00:37:46.526 [2024-11-18 12:06:12.270516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.526 [2024-11-18 12:06:12.270564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.526 qpair failed and we were unable to recover it. 00:37:46.526 [2024-11-18 12:06:12.270709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.526 [2024-11-18 12:06:12.270744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.526 qpair failed and we were unable to recover it. 
00:37:46.526 [2024-11-18 12:06:12.270896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.526 [2024-11-18 12:06:12.270948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.526 qpair failed and we were unable to recover it. 00:37:46.526 [2024-11-18 12:06:12.271149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.526 [2024-11-18 12:06:12.271206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.526 qpair failed and we were unable to recover it. 00:37:46.526 [2024-11-18 12:06:12.271353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.526 [2024-11-18 12:06:12.271390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.526 qpair failed and we were unable to recover it. 00:37:46.526 [2024-11-18 12:06:12.271543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.526 [2024-11-18 12:06:12.271578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.526 qpair failed and we were unable to recover it. 00:37:46.526 [2024-11-18 12:06:12.271714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.526 [2024-11-18 12:06:12.271748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.526 qpair failed and we were unable to recover it. 
00:37:46.526 [2024-11-18 12:06:12.271934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.526 [2024-11-18 12:06:12.271971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.526 qpair failed and we were unable to recover it. 00:37:46.526 [2024-11-18 12:06:12.272159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.526 [2024-11-18 12:06:12.272196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.526 qpair failed and we were unable to recover it. 00:37:46.526 [2024-11-18 12:06:12.272373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.526 [2024-11-18 12:06:12.272411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.526 qpair failed and we were unable to recover it. 00:37:46.526 [2024-11-18 12:06:12.272567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.526 [2024-11-18 12:06:12.272601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.526 qpair failed and we were unable to recover it. 00:37:46.526 [2024-11-18 12:06:12.272750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.526 [2024-11-18 12:06:12.272799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.526 qpair failed and we were unable to recover it. 
00:37:46.526 [2024-11-18 12:06:12.273052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.526 [2024-11-18 12:06:12.273125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.526 qpair failed and we were unable to recover it. 00:37:46.526 [2024-11-18 12:06:12.273357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.526 [2024-11-18 12:06:12.273431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.526 qpair failed and we were unable to recover it. 00:37:46.526 [2024-11-18 12:06:12.273563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.526 [2024-11-18 12:06:12.273616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.526 qpair failed and we were unable to recover it. 00:37:46.526 [2024-11-18 12:06:12.273762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.526 [2024-11-18 12:06:12.273796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.526 qpair failed and we were unable to recover it. 00:37:46.526 [2024-11-18 12:06:12.273957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.526 [2024-11-18 12:06:12.274026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.526 qpair failed and we were unable to recover it. 
00:37:46.526 [2024-11-18 12:06:12.274239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.526 [2024-11-18 12:06:12.274310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.526 qpair failed and we were unable to recover it. 00:37:46.526 [2024-11-18 12:06:12.274436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.526 [2024-11-18 12:06:12.274470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.526 qpair failed and we were unable to recover it. 00:37:46.526 [2024-11-18 12:06:12.274631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.526 [2024-11-18 12:06:12.274679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.526 qpair failed and we were unable to recover it. 00:37:46.526 [2024-11-18 12:06:12.274930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.526 [2024-11-18 12:06:12.274965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.526 qpair failed and we were unable to recover it. 00:37:46.526 [2024-11-18 12:06:12.275167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.526 [2024-11-18 12:06:12.275265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.526 qpair failed and we were unable to recover it. 
00:37:46.526 [2024-11-18 12:06:12.275441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.526 [2024-11-18 12:06:12.275479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.526 qpair failed and we were unable to recover it. 00:37:46.526 [2024-11-18 12:06:12.275653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.526 [2024-11-18 12:06:12.275700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.526 qpair failed and we were unable to recover it. 00:37:46.526 [2024-11-18 12:06:12.275817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.526 [2024-11-18 12:06:12.275854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.526 qpair failed and we were unable to recover it. 00:37:46.526 [2024-11-18 12:06:12.275989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.526 [2024-11-18 12:06:12.276025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.526 qpair failed and we were unable to recover it. 00:37:46.526 [2024-11-18 12:06:12.276159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.526 [2024-11-18 12:06:12.276193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.526 qpair failed and we were unable to recover it. 
00:37:46.526 [2024-11-18 12:06:12.276363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.526 [2024-11-18 12:06:12.276430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.526 qpair failed and we were unable to recover it. 00:37:46.526 [2024-11-18 12:06:12.276626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.526 [2024-11-18 12:06:12.276689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.526 qpair failed and we were unable to recover it. 00:37:46.526 [2024-11-18 12:06:12.276842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.526 [2024-11-18 12:06:12.276895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.526 qpair failed and we were unable to recover it. 00:37:46.526 [2024-11-18 12:06:12.277117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.526 [2024-11-18 12:06:12.277172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.526 qpair failed and we were unable to recover it. 00:37:46.526 [2024-11-18 12:06:12.277389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.526 [2024-11-18 12:06:12.277424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.526 qpair failed and we were unable to recover it. 
00:37:46.526 [2024-11-18 12:06:12.277591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.526 [2024-11-18 12:06:12.277626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.526 qpair failed and we were unable to recover it. 00:37:46.526 [2024-11-18 12:06:12.277768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.526 [2024-11-18 12:06:12.277803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.526 qpair failed and we were unable to recover it. 00:37:46.526 [2024-11-18 12:06:12.278069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.526 [2024-11-18 12:06:12.278125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.526 qpair failed and we were unable to recover it. 00:37:46.526 [2024-11-18 12:06:12.278388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.527 [2024-11-18 12:06:12.278451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.527 qpair failed and we were unable to recover it. 00:37:46.527 [2024-11-18 12:06:12.278623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.527 [2024-11-18 12:06:12.278657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.527 qpair failed and we were unable to recover it. 
00:37:46.527 [2024-11-18 12:06:12.278760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.527 [2024-11-18 12:06:12.278794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.527 qpair failed and we were unable to recover it. 00:37:46.527 [2024-11-18 12:06:12.278941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.527 [2024-11-18 12:06:12.278975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.527 qpair failed and we were unable to recover it. 00:37:46.527 [2024-11-18 12:06:12.279109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.527 [2024-11-18 12:06:12.279144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.527 qpair failed and we were unable to recover it. 00:37:46.527 [2024-11-18 12:06:12.279266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.527 [2024-11-18 12:06:12.279304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.527 qpair failed and we were unable to recover it. 00:37:46.527 [2024-11-18 12:06:12.279483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.527 [2024-11-18 12:06:12.279557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.527 qpair failed and we were unable to recover it. 
00:37:46.527 [2024-11-18 12:06:12.279698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.527 [2024-11-18 12:06:12.279733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.527 qpair failed and we were unable to recover it.
00:37:46.527 [2024-11-18 12:06:12.279882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.527 [2024-11-18 12:06:12.279935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.527 qpair failed and we were unable to recover it.
00:37:46.527 [2024-11-18 12:06:12.280077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.527 [2024-11-18 12:06:12.280114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.527 qpair failed and we were unable to recover it.
00:37:46.527 [2024-11-18 12:06:12.280288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.527 [2024-11-18 12:06:12.280363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.527 qpair failed and we were unable to recover it.
00:37:46.527 [2024-11-18 12:06:12.280536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.527 [2024-11-18 12:06:12.280584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.527 qpair failed and we were unable to recover it.
00:37:46.527 [2024-11-18 12:06:12.280715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.527 [2024-11-18 12:06:12.280754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.527 qpair failed and we were unable to recover it.
00:37:46.527 [2024-11-18 12:06:12.280872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.527 [2024-11-18 12:06:12.280910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.527 qpair failed and we were unable to recover it.
00:37:46.527 [2024-11-18 12:06:12.281088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.527 [2024-11-18 12:06:12.281127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.527 qpair failed and we were unable to recover it.
00:37:46.527 [2024-11-18 12:06:12.281254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.527 [2024-11-18 12:06:12.281288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.527 qpair failed and we were unable to recover it.
00:37:46.527 [2024-11-18 12:06:12.281452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.527 [2024-11-18 12:06:12.281498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.527 qpair failed and we were unable to recover it.
00:37:46.527 [2024-11-18 12:06:12.281647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.527 [2024-11-18 12:06:12.281682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.527 qpair failed and we were unable to recover it.
00:37:46.527 [2024-11-18 12:06:12.281818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.527 [2024-11-18 12:06:12.281855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.527 qpair failed and we were unable to recover it.
00:37:46.527 [2024-11-18 12:06:12.282002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.527 [2024-11-18 12:06:12.282039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.527 qpair failed and we were unable to recover it.
00:37:46.527 [2024-11-18 12:06:12.282186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.527 [2024-11-18 12:06:12.282226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.527 qpair failed and we were unable to recover it.
00:37:46.527 [2024-11-18 12:06:12.282384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.527 [2024-11-18 12:06:12.282422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.527 qpair failed and we were unable to recover it.
00:37:46.527 [2024-11-18 12:06:12.282558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.527 [2024-11-18 12:06:12.282606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.527 qpair failed and we were unable to recover it.
00:37:46.527 [2024-11-18 12:06:12.282727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.527 [2024-11-18 12:06:12.282794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.527 qpair failed and we were unable to recover it.
00:37:46.527 [2024-11-18 12:06:12.282928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.527 [2024-11-18 12:06:12.282968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.527 qpair failed and we were unable to recover it.
00:37:46.527 [2024-11-18 12:06:12.283119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.527 [2024-11-18 12:06:12.283157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.527 qpair failed and we were unable to recover it.
00:37:46.527 [2024-11-18 12:06:12.283300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.527 [2024-11-18 12:06:12.283337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.527 qpair failed and we were unable to recover it.
00:37:46.527 [2024-11-18 12:06:12.283456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.527 [2024-11-18 12:06:12.283504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.527 qpair failed and we were unable to recover it.
00:37:46.527 [2024-11-18 12:06:12.283687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.527 [2024-11-18 12:06:12.283722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.527 qpair failed and we were unable to recover it.
00:37:46.527 [2024-11-18 12:06:12.283839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.527 [2024-11-18 12:06:12.283878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.527 qpair failed and we were unable to recover it.
00:37:46.527 [2024-11-18 12:06:12.284029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.527 [2024-11-18 12:06:12.284067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.527 qpair failed and we were unable to recover it.
00:37:46.527 [2024-11-18 12:06:12.284186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.527 [2024-11-18 12:06:12.284224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.527 qpair failed and we were unable to recover it.
00:37:46.527 [2024-11-18 12:06:12.284369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.527 [2024-11-18 12:06:12.284407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.527 qpair failed and we were unable to recover it.
00:37:46.527 [2024-11-18 12:06:12.284563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.527 [2024-11-18 12:06:12.284598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.527 qpair failed and we were unable to recover it.
00:37:46.527 [2024-11-18 12:06:12.284761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.527 [2024-11-18 12:06:12.284817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.527 qpair failed and we were unable to recover it.
00:37:46.527 [2024-11-18 12:06:12.284974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.527 [2024-11-18 12:06:12.285028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.527 qpair failed and we were unable to recover it.
00:37:46.527 [2024-11-18 12:06:12.285184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.527 [2024-11-18 12:06:12.285224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.527 qpair failed and we were unable to recover it.
00:37:46.527 [2024-11-18 12:06:12.285371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.527 [2024-11-18 12:06:12.285409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.528 qpair failed and we were unable to recover it.
00:37:46.528 [2024-11-18 12:06:12.285581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.528 [2024-11-18 12:06:12.285617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.528 qpair failed and we were unable to recover it.
00:37:46.528 [2024-11-18 12:06:12.285796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.528 [2024-11-18 12:06:12.285849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.528 qpair failed and we were unable to recover it.
00:37:46.528 [2024-11-18 12:06:12.286081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.528 [2024-11-18 12:06:12.286145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.528 qpair failed and we were unable to recover it.
00:37:46.528 [2024-11-18 12:06:12.286320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.528 [2024-11-18 12:06:12.286379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.528 qpair failed and we were unable to recover it.
00:37:46.528 [2024-11-18 12:06:12.286538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.528 [2024-11-18 12:06:12.286572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.528 qpair failed and we were unable to recover it.
00:37:46.528 [2024-11-18 12:06:12.286712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.528 [2024-11-18 12:06:12.286746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.528 qpair failed and we were unable to recover it.
00:37:46.528 [2024-11-18 12:06:12.286909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.528 [2024-11-18 12:06:12.286947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.528 qpair failed and we were unable to recover it.
00:37:46.528 [2024-11-18 12:06:12.287145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.528 [2024-11-18 12:06:12.287182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.528 qpair failed and we were unable to recover it.
00:37:46.528 [2024-11-18 12:06:12.287341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.528 [2024-11-18 12:06:12.287394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.528 qpair failed and we were unable to recover it.
00:37:46.528 [2024-11-18 12:06:12.287536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.528 [2024-11-18 12:06:12.287588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.528 qpair failed and we were unable to recover it.
00:37:46.528 [2024-11-18 12:06:12.287702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.528 [2024-11-18 12:06:12.287736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.528 qpair failed and we were unable to recover it.
00:37:46.528 [2024-11-18 12:06:12.287923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.528 [2024-11-18 12:06:12.287959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.528 qpair failed and we were unable to recover it.
00:37:46.528 [2024-11-18 12:06:12.288079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.528 [2024-11-18 12:06:12.288116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.528 qpair failed and we were unable to recover it.
00:37:46.528 [2024-11-18 12:06:12.288314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.528 [2024-11-18 12:06:12.288351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.528 qpair failed and we were unable to recover it.
00:37:46.528 [2024-11-18 12:06:12.288471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.528 [2024-11-18 12:06:12.288519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.528 qpair failed and we were unable to recover it.
00:37:46.528 [2024-11-18 12:06:12.288648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.528 [2024-11-18 12:06:12.288682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.528 qpair failed and we were unable to recover it.
00:37:46.528 [2024-11-18 12:06:12.288819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.528 [2024-11-18 12:06:12.288870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.528 qpair failed and we were unable to recover it.
00:37:46.528 [2024-11-18 12:06:12.289012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.528 [2024-11-18 12:06:12.289049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.528 qpair failed and we were unable to recover it.
00:37:46.528 [2024-11-18 12:06:12.289227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.528 [2024-11-18 12:06:12.289264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.528 qpair failed and we were unable to recover it.
00:37:46.528 [2024-11-18 12:06:12.289406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.528 [2024-11-18 12:06:12.289443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.528 qpair failed and we were unable to recover it.
00:37:46.528 [2024-11-18 12:06:12.289580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.528 [2024-11-18 12:06:12.289614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.528 qpair failed and we were unable to recover it.
00:37:46.528 [2024-11-18 12:06:12.289774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.528 [2024-11-18 12:06:12.289808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.528 qpair failed and we were unable to recover it.
00:37:46.528 [2024-11-18 12:06:12.289966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.528 [2024-11-18 12:06:12.290003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.528 qpair failed and we were unable to recover it.
00:37:46.528 [2024-11-18 12:06:12.290150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.528 [2024-11-18 12:06:12.290188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.528 qpair failed and we were unable to recover it.
00:37:46.528 [2024-11-18 12:06:12.290314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.528 [2024-11-18 12:06:12.290351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.528 qpair failed and we were unable to recover it.
00:37:46.528 [2024-11-18 12:06:12.290537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.528 [2024-11-18 12:06:12.290571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.528 qpair failed and we were unable to recover it.
00:37:46.528 [2024-11-18 12:06:12.290704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.528 [2024-11-18 12:06:12.290738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.528 qpair failed and we were unable to recover it.
00:37:46.528 [2024-11-18 12:06:12.290929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.528 [2024-11-18 12:06:12.290966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.528 qpair failed and we were unable to recover it.
00:37:46.528 [2024-11-18 12:06:12.291092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.528 [2024-11-18 12:06:12.291143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.528 qpair failed and we were unable to recover it.
00:37:46.528 [2024-11-18 12:06:12.291288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.528 [2024-11-18 12:06:12.291326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.528 qpair failed and we were unable to recover it.
00:37:46.528 [2024-11-18 12:06:12.291479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.528 [2024-11-18 12:06:12.291542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.528 qpair failed and we were unable to recover it.
00:37:46.528 [2024-11-18 12:06:12.291679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.528 [2024-11-18 12:06:12.291713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.528 qpair failed and we were unable to recover it.
00:37:46.528 [2024-11-18 12:06:12.291888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.528 [2024-11-18 12:06:12.291925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.528 qpair failed and we were unable to recover it.
00:37:46.528 [2024-11-18 12:06:12.292071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.528 [2024-11-18 12:06:12.292107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.528 qpair failed and we were unable to recover it.
00:37:46.528 [2024-11-18 12:06:12.292233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.528 [2024-11-18 12:06:12.292286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.528 qpair failed and we were unable to recover it.
00:37:46.528 [2024-11-18 12:06:12.292428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.528 [2024-11-18 12:06:12.292466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.528 qpair failed and we were unable to recover it.
00:37:46.528 [2024-11-18 12:06:12.292606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.528 [2024-11-18 12:06:12.292640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.528 qpair failed and we were unable to recover it.
00:37:46.528 [2024-11-18 12:06:12.292797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.529 [2024-11-18 12:06:12.292831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.529 qpair failed and we were unable to recover it.
00:37:46.529 [2024-11-18 12:06:12.292982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.529 [2024-11-18 12:06:12.293019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.529 qpair failed and we were unable to recover it.
00:37:46.529 [2024-11-18 12:06:12.293175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.529 [2024-11-18 12:06:12.293212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.529 qpair failed and we were unable to recover it.
00:37:46.529 [2024-11-18 12:06:12.293346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.529 [2024-11-18 12:06:12.293379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.529 qpair failed and we were unable to recover it.
00:37:46.529 [2024-11-18 12:06:12.293488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.529 [2024-11-18 12:06:12.293536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.529 qpair failed and we were unable to recover it.
00:37:46.529 [2024-11-18 12:06:12.293694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.529 [2024-11-18 12:06:12.293732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.529 qpair failed and we were unable to recover it.
00:37:46.529 [2024-11-18 12:06:12.293958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.529 [2024-11-18 12:06:12.293996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.529 qpair failed and we were unable to recover it.
00:37:46.529 [2024-11-18 12:06:12.294110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.529 [2024-11-18 12:06:12.294147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.529 qpair failed and we were unable to recover it.
00:37:46.529 [2024-11-18 12:06:12.294282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.529 [2024-11-18 12:06:12.294319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.529 qpair failed and we were unable to recover it.
00:37:46.529 [2024-11-18 12:06:12.294441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.529 [2024-11-18 12:06:12.294475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.529 qpair failed and we were unable to recover it.
00:37:46.529 [2024-11-18 12:06:12.294680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.529 [2024-11-18 12:06:12.294728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.529 qpair failed and we were unable to recover it.
00:37:46.529 [2024-11-18 12:06:12.294860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.529 [2024-11-18 12:06:12.294900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.529 qpair failed and we were unable to recover it.
00:37:46.529 [2024-11-18 12:06:12.295041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.529 [2024-11-18 12:06:12.295079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.529 qpair failed and we were unable to recover it.
00:37:46.529 [2024-11-18 12:06:12.295255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.529 [2024-11-18 12:06:12.295292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.529 qpair failed and we were unable to recover it.
00:37:46.529 [2024-11-18 12:06:12.295457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.529 [2024-11-18 12:06:12.295503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.529 qpair failed and we were unable to recover it.
00:37:46.529 [2024-11-18 12:06:12.295678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.529 [2024-11-18 12:06:12.295727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.529 qpair failed and we were unable to recover it.
00:37:46.529 [2024-11-18 12:06:12.295868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.529 [2024-11-18 12:06:12.295907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.529 qpair failed and we were unable to recover it.
00:37:46.529 [2024-11-18 12:06:12.296084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.529 [2024-11-18 12:06:12.296120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.529 qpair failed and we were unable to recover it.
00:37:46.529 [2024-11-18 12:06:12.296232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.529 [2024-11-18 12:06:12.296269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.529 qpair failed and we were unable to recover it.
00:37:46.529 [2024-11-18 12:06:12.296392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.529 [2024-11-18 12:06:12.296425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.529 qpair failed and we were unable to recover it.
00:37:46.529 [2024-11-18 12:06:12.296580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.529 [2024-11-18 12:06:12.296628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.529 qpair failed and we were unable to recover it.
00:37:46.529 [2024-11-18 12:06:12.296797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.529 [2024-11-18 12:06:12.296852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.529 qpair failed and we were unable to recover it.
00:37:46.529 [2024-11-18 12:06:12.297052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.529 [2024-11-18 12:06:12.297116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.529 qpair failed and we were unable to recover it.
00:37:46.529 [2024-11-18 12:06:12.297341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.529 [2024-11-18 12:06:12.297380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.529 qpair failed and we were unable to recover it.
00:37:46.529 [2024-11-18 12:06:12.297554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.529 [2024-11-18 12:06:12.297589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.529 qpair failed and we were unable to recover it.
00:37:46.529 [2024-11-18 12:06:12.297752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.529 [2024-11-18 12:06:12.297790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.529 qpair failed and we were unable to recover it.
00:37:46.529 [2024-11-18 12:06:12.297932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.529 [2024-11-18 12:06:12.297970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.529 qpair failed and we were unable to recover it.
00:37:46.529 [2024-11-18 12:06:12.298119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.529 [2024-11-18 12:06:12.298157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.529 qpair failed and we were unable to recover it.
00:37:46.529 [2024-11-18 12:06:12.298340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.529 [2024-11-18 12:06:12.298397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.529 qpair failed and we were unable to recover it.
00:37:46.529 [2024-11-18 12:06:12.298547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.529 [2024-11-18 12:06:12.298595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.529 qpair failed and we were unable to recover it.
00:37:46.529 [2024-11-18 12:06:12.298751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.529 [2024-11-18 12:06:12.298819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.529 qpair failed and we were unable to recover it.
00:37:46.529 [2024-11-18 12:06:12.299005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.529 [2024-11-18 12:06:12.299044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.529 qpair failed and we were unable to recover it.
00:37:46.529 [2024-11-18 12:06:12.299182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.529 [2024-11-18 12:06:12.299236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.529 qpair failed and we were unable to recover it.
00:37:46.529 [2024-11-18 12:06:12.299384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.529 [2024-11-18 12:06:12.299423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.529 qpair failed and we were unable to recover it.
00:37:46.529 [2024-11-18 12:06:12.299578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.529 [2024-11-18 12:06:12.299614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.529 qpair failed and we were unable to recover it.
00:37:46.529 [2024-11-18 12:06:12.299796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.529 [2024-11-18 12:06:12.299863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.529 qpair failed and we were unable to recover it.
00:37:46.529 [2024-11-18 12:06:12.300012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.529 [2024-11-18 12:06:12.300049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.529 qpair failed and we were unable to recover it.
00:37:46.529 [2024-11-18 12:06:12.300187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.529 [2024-11-18 12:06:12.300223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.529 qpair failed and we were unable to recover it.
00:37:46.530 [2024-11-18 12:06:12.300357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.530 [2024-11-18 12:06:12.300392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.530 qpair failed and we were unable to recover it.
00:37:46.530 [2024-11-18 12:06:12.300563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.530 [2024-11-18 12:06:12.300598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.530 qpair failed and we were unable to recover it.
00:37:46.530 [2024-11-18 12:06:12.300698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.530 [2024-11-18 12:06:12.300732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.530 qpair failed and we were unable to recover it.
00:37:46.530 [2024-11-18 12:06:12.300951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.530 [2024-11-18 12:06:12.300987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.530 qpair failed and we were unable to recover it.
00:37:46.530 [2024-11-18 12:06:12.301153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.530 [2024-11-18 12:06:12.301188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.530 qpair failed and we were unable to recover it.
00:37:46.530 [2024-11-18 12:06:12.301321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.530 [2024-11-18 12:06:12.301354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.530 qpair failed and we were unable to recover it.
00:37:46.530 [2024-11-18 12:06:12.301486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.530 [2024-11-18 12:06:12.301538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.530 qpair failed and we were unable to recover it.
00:37:46.530 [2024-11-18 12:06:12.301696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.530 [2024-11-18 12:06:12.301749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.530 qpair failed and we were unable to recover it.
00:37:46.530 [2024-11-18 12:06:12.301912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.530 [2024-11-18 12:06:12.301952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.530 qpair failed and we were unable to recover it.
00:37:46.530 [2024-11-18 12:06:12.302135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.530 [2024-11-18 12:06:12.302200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.530 qpair failed and we were unable to recover it. 00:37:46.530 [2024-11-18 12:06:12.302346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.530 [2024-11-18 12:06:12.302383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.530 qpair failed and we were unable to recover it. 00:37:46.530 [2024-11-18 12:06:12.302528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.530 [2024-11-18 12:06:12.302579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.530 qpair failed and we were unable to recover it. 00:37:46.530 [2024-11-18 12:06:12.302737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.530 [2024-11-18 12:06:12.302791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.530 qpair failed and we were unable to recover it. 00:37:46.530 [2024-11-18 12:06:12.302951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.530 [2024-11-18 12:06:12.302990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.530 qpair failed and we were unable to recover it. 
00:37:46.530 [2024-11-18 12:06:12.303144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.530 [2024-11-18 12:06:12.303182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.530 qpair failed and we were unable to recover it. 00:37:46.530 [2024-11-18 12:06:12.303320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.530 [2024-11-18 12:06:12.303357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.530 qpair failed and we were unable to recover it. 00:37:46.530 [2024-11-18 12:06:12.303474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.530 [2024-11-18 12:06:12.303534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.530 qpair failed and we were unable to recover it. 00:37:46.530 [2024-11-18 12:06:12.303640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.530 [2024-11-18 12:06:12.303674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.530 qpair failed and we were unable to recover it. 00:37:46.530 [2024-11-18 12:06:12.303840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.530 [2024-11-18 12:06:12.303893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.530 qpair failed and we were unable to recover it. 
00:37:46.530 [2024-11-18 12:06:12.304026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.530 [2024-11-18 12:06:12.304077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.530 qpair failed and we were unable to recover it. 00:37:46.530 [2024-11-18 12:06:12.304195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.530 [2024-11-18 12:06:12.304232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.530 qpair failed and we were unable to recover it. 00:37:46.530 [2024-11-18 12:06:12.304385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.530 [2024-11-18 12:06:12.304435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.530 qpair failed and we were unable to recover it. 00:37:46.530 [2024-11-18 12:06:12.304575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.530 [2024-11-18 12:06:12.304609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.530 qpair failed and we were unable to recover it. 00:37:46.530 [2024-11-18 12:06:12.304708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.530 [2024-11-18 12:06:12.304741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.530 qpair failed and we were unable to recover it. 
00:37:46.530 [2024-11-18 12:06:12.304936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.530 [2024-11-18 12:06:12.305007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.530 qpair failed and we were unable to recover it. 00:37:46.530 [2024-11-18 12:06:12.305271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.530 [2024-11-18 12:06:12.305330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.530 qpair failed and we were unable to recover it. 00:37:46.530 [2024-11-18 12:06:12.305448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.530 [2024-11-18 12:06:12.305485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.530 qpair failed and we were unable to recover it. 00:37:46.530 [2024-11-18 12:06:12.305632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.530 [2024-11-18 12:06:12.305668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.530 qpair failed and we were unable to recover it. 00:37:46.530 [2024-11-18 12:06:12.305781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.530 [2024-11-18 12:06:12.305814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.530 qpair failed and we were unable to recover it. 
00:37:46.530 [2024-11-18 12:06:12.305951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.530 [2024-11-18 12:06:12.305985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.530 qpair failed and we were unable to recover it. 00:37:46.530 [2024-11-18 12:06:12.306152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.530 [2024-11-18 12:06:12.306187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.530 qpair failed and we were unable to recover it. 00:37:46.530 [2024-11-18 12:06:12.306351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.530 [2024-11-18 12:06:12.306389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.530 qpair failed and we were unable to recover it. 00:37:46.530 [2024-11-18 12:06:12.306562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.530 [2024-11-18 12:06:12.306596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.530 qpair failed and we were unable to recover it. 00:37:46.531 [2024-11-18 12:06:12.306733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.531 [2024-11-18 12:06:12.306768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.531 qpair failed and we were unable to recover it. 
00:37:46.531 [2024-11-18 12:06:12.306931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.531 [2024-11-18 12:06:12.306974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.531 qpair failed and we were unable to recover it. 00:37:46.531 [2024-11-18 12:06:12.307121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.531 [2024-11-18 12:06:12.307158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.531 qpair failed and we were unable to recover it. 00:37:46.531 [2024-11-18 12:06:12.307309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.531 [2024-11-18 12:06:12.307348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.531 qpair failed and we were unable to recover it. 00:37:46.531 [2024-11-18 12:06:12.307504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.531 [2024-11-18 12:06:12.307539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.531 qpair failed and we were unable to recover it. 00:37:46.531 [2024-11-18 12:06:12.307671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.531 [2024-11-18 12:06:12.307705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.531 qpair failed and we were unable to recover it. 
00:37:46.531 [2024-11-18 12:06:12.307840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.531 [2024-11-18 12:06:12.307874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.531 qpair failed and we were unable to recover it. 00:37:46.531 [2024-11-18 12:06:12.308039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.531 [2024-11-18 12:06:12.308106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.531 qpair failed and we were unable to recover it. 00:37:46.531 [2024-11-18 12:06:12.308269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.531 [2024-11-18 12:06:12.308323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.531 qpair failed and we were unable to recover it. 00:37:46.531 [2024-11-18 12:06:12.308457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.531 [2024-11-18 12:06:12.308499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.531 qpair failed and we were unable to recover it. 00:37:46.531 [2024-11-18 12:06:12.308664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.531 [2024-11-18 12:06:12.308699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.531 qpair failed and we were unable to recover it. 
00:37:46.531 [2024-11-18 12:06:12.308801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.531 [2024-11-18 12:06:12.308834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.531 qpair failed and we were unable to recover it. 00:37:46.531 [2024-11-18 12:06:12.308993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.531 [2024-11-18 12:06:12.309030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.531 qpair failed and we were unable to recover it. 00:37:46.531 [2024-11-18 12:06:12.309237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.531 [2024-11-18 12:06:12.309301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.531 qpair failed and we were unable to recover it. 00:37:46.531 [2024-11-18 12:06:12.309446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.531 [2024-11-18 12:06:12.309483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.531 qpair failed and we were unable to recover it. 00:37:46.531 [2024-11-18 12:06:12.309661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.531 [2024-11-18 12:06:12.309694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.531 qpair failed and we were unable to recover it. 
00:37:46.531 [2024-11-18 12:06:12.309843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.531 [2024-11-18 12:06:12.309877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.531 qpair failed and we were unable to recover it. 00:37:46.531 [2024-11-18 12:06:12.310082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.531 [2024-11-18 12:06:12.310144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.531 qpair failed and we were unable to recover it. 00:37:46.531 [2024-11-18 12:06:12.310373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.531 [2024-11-18 12:06:12.310411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.531 qpair failed and we were unable to recover it. 00:37:46.531 [2024-11-18 12:06:12.310598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.531 [2024-11-18 12:06:12.310634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.531 qpair failed and we were unable to recover it. 00:37:46.531 [2024-11-18 12:06:12.310743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.531 [2024-11-18 12:06:12.310777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.531 qpair failed and we were unable to recover it. 
00:37:46.531 [2024-11-18 12:06:12.310935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.531 [2024-11-18 12:06:12.310972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.531 qpair failed and we were unable to recover it. 00:37:46.531 [2024-11-18 12:06:12.311236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.531 [2024-11-18 12:06:12.311293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.531 qpair failed and we were unable to recover it. 00:37:46.531 [2024-11-18 12:06:12.311448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.531 [2024-11-18 12:06:12.311486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.531 qpair failed and we were unable to recover it. 00:37:46.531 [2024-11-18 12:06:12.311635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.531 [2024-11-18 12:06:12.311683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.531 qpair failed and we were unable to recover it. 00:37:46.531 [2024-11-18 12:06:12.311866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.531 [2024-11-18 12:06:12.311918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.531 qpair failed and we were unable to recover it. 
00:37:46.531 [2024-11-18 12:06:12.312047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.531 [2024-11-18 12:06:12.312087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.531 qpair failed and we were unable to recover it. 00:37:46.531 [2024-11-18 12:06:12.312231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.531 [2024-11-18 12:06:12.312266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.531 qpair failed and we were unable to recover it. 00:37:46.531 [2024-11-18 12:06:12.312437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.531 [2024-11-18 12:06:12.312475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.531 qpair failed and we were unable to recover it. 00:37:46.531 [2024-11-18 12:06:12.312626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.531 [2024-11-18 12:06:12.312663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.531 qpair failed and we were unable to recover it. 00:37:46.531 [2024-11-18 12:06:12.312768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.531 [2024-11-18 12:06:12.312806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.531 qpair failed and we were unable to recover it. 
00:37:46.531 [2024-11-18 12:06:12.312925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.531 [2024-11-18 12:06:12.312962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.531 qpair failed and we were unable to recover it. 00:37:46.531 [2024-11-18 12:06:12.313068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.531 [2024-11-18 12:06:12.313105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.531 qpair failed and we were unable to recover it. 00:37:46.531 [2024-11-18 12:06:12.313254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.531 [2024-11-18 12:06:12.313293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.531 qpair failed and we were unable to recover it. 00:37:46.531 [2024-11-18 12:06:12.313483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.531 [2024-11-18 12:06:12.313528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.531 qpair failed and we were unable to recover it. 00:37:46.531 [2024-11-18 12:06:12.313666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.531 [2024-11-18 12:06:12.313701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.531 qpair failed and we were unable to recover it. 
00:37:46.531 [2024-11-18 12:06:12.313849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.531 [2024-11-18 12:06:12.313908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.531 qpair failed and we were unable to recover it. 00:37:46.531 [2024-11-18 12:06:12.314063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.532 [2024-11-18 12:06:12.314115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.532 qpair failed and we were unable to recover it. 00:37:46.532 [2024-11-18 12:06:12.314256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.532 [2024-11-18 12:06:12.314320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.532 qpair failed and we were unable to recover it. 00:37:46.532 [2024-11-18 12:06:12.314462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.532 [2024-11-18 12:06:12.314506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.532 qpair failed and we were unable to recover it. 00:37:46.532 [2024-11-18 12:06:12.314613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.532 [2024-11-18 12:06:12.314647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.532 qpair failed and we were unable to recover it. 
00:37:46.532 [2024-11-18 12:06:12.314804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.532 [2024-11-18 12:06:12.314847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.532 qpair failed and we were unable to recover it. 00:37:46.532 [2024-11-18 12:06:12.314990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.532 [2024-11-18 12:06:12.315028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.532 qpair failed and we were unable to recover it. 00:37:46.532 [2024-11-18 12:06:12.315137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.532 [2024-11-18 12:06:12.315175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.532 qpair failed and we were unable to recover it. 00:37:46.532 [2024-11-18 12:06:12.315333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.532 [2024-11-18 12:06:12.315369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.532 qpair failed and we were unable to recover it. 00:37:46.532 [2024-11-18 12:06:12.315479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.532 [2024-11-18 12:06:12.315521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.532 qpair failed and we were unable to recover it. 
00:37:46.532 [2024-11-18 12:06:12.315675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.532 [2024-11-18 12:06:12.315726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.532 qpair failed and we were unable to recover it. 00:37:46.532 [2024-11-18 12:06:12.315908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.532 [2024-11-18 12:06:12.315959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.532 qpair failed and we were unable to recover it. 00:37:46.532 [2024-11-18 12:06:12.316107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.532 [2024-11-18 12:06:12.316158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.532 qpair failed and we were unable to recover it. 00:37:46.532 [2024-11-18 12:06:12.316323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.532 [2024-11-18 12:06:12.316357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.532 qpair failed and we were unable to recover it. 00:37:46.532 [2024-11-18 12:06:12.316465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.532 [2024-11-18 12:06:12.316508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.532 qpair failed and we were unable to recover it. 
00:37:46.532 [2024-11-18 12:06:12.316670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.532 [2024-11-18 12:06:12.316717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.532 qpair failed and we were unable to recover it. 00:37:46.532 [2024-11-18 12:06:12.316843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.532 [2024-11-18 12:06:12.316883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.532 qpair failed and we were unable to recover it. 00:37:46.532 [2024-11-18 12:06:12.317142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.532 [2024-11-18 12:06:12.317199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.532 qpair failed and we were unable to recover it. 00:37:46.532 [2024-11-18 12:06:12.317358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.532 [2024-11-18 12:06:12.317392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.532 qpair failed and we were unable to recover it. 00:37:46.532 [2024-11-18 12:06:12.317549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.532 [2024-11-18 12:06:12.317585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.532 qpair failed and we were unable to recover it. 
00:37:46.532 [2024-11-18 12:06:12.317742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.532 [2024-11-18 12:06:12.317795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.532 qpair failed and we were unable to recover it. 00:37:46.532 [2024-11-18 12:06:12.317982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.532 [2024-11-18 12:06:12.318036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.532 qpair failed and we were unable to recover it. 00:37:46.532 [2024-11-18 12:06:12.318230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.532 [2024-11-18 12:06:12.318302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.532 qpair failed and we were unable to recover it. 00:37:46.532 [2024-11-18 12:06:12.318433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.532 [2024-11-18 12:06:12.318467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.532 qpair failed and we were unable to recover it. 00:37:46.532 [2024-11-18 12:06:12.318630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.532 [2024-11-18 12:06:12.318682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.532 qpair failed and we were unable to recover it. 
00:37:46.532 [2024-11-18 12:06:12.318839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.532 [2024-11-18 12:06:12.318887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.532 qpair failed and we were unable to recover it.
00:37:46.532 [2024-11-18 12:06:12.319023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.532 [2024-11-18 12:06:12.319059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.532 qpair failed and we were unable to recover it.
00:37:46.532 [2024-11-18 12:06:12.319188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.532 [2024-11-18 12:06:12.319223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.532 qpair failed and we were unable to recover it.
00:37:46.532 [2024-11-18 12:06:12.319354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.532 [2024-11-18 12:06:12.319388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.532 qpair failed and we were unable to recover it.
00:37:46.532 [2024-11-18 12:06:12.319549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.532 [2024-11-18 12:06:12.319597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.532 qpair failed and we were unable to recover it.
00:37:46.532 [2024-11-18 12:06:12.319784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.532 [2024-11-18 12:06:12.319832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.532 qpair failed and we were unable to recover it.
00:37:46.532 [2024-11-18 12:06:12.319973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.532 [2024-11-18 12:06:12.320012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.532 qpair failed and we were unable to recover it.
00:37:46.532 [2024-11-18 12:06:12.320176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.532 [2024-11-18 12:06:12.320237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.532 qpair failed and we were unable to recover it.
00:37:46.532 [2024-11-18 12:06:12.320342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.532 [2024-11-18 12:06:12.320377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.532 qpair failed and we were unable to recover it.
00:37:46.532 [2024-11-18 12:06:12.320557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.532 [2024-11-18 12:06:12.320610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.532 qpair failed and we were unable to recover it.
00:37:46.532 [2024-11-18 12:06:12.320731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.532 [2024-11-18 12:06:12.320779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.532 qpair failed and we were unable to recover it.
00:37:46.532 [2024-11-18 12:06:12.320946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.532 [2024-11-18 12:06:12.320981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.532 qpair failed and we were unable to recover it.
00:37:46.532 [2024-11-18 12:06:12.321115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.532 [2024-11-18 12:06:12.321149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.532 qpair failed and we were unable to recover it.
00:37:46.532 [2024-11-18 12:06:12.321287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.532 [2024-11-18 12:06:12.321320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.532 qpair failed and we were unable to recover it.
00:37:46.532 [2024-11-18 12:06:12.321449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.533 [2024-11-18 12:06:12.321483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.533 qpair failed and we were unable to recover it.
00:37:46.533 [2024-11-18 12:06:12.321622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.533 [2024-11-18 12:06:12.321670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.533 qpair failed and we were unable to recover it.
00:37:46.533 [2024-11-18 12:06:12.321863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.533 [2024-11-18 12:06:12.321916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.533 qpair failed and we were unable to recover it.
00:37:46.533 [2024-11-18 12:06:12.322091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.533 [2024-11-18 12:06:12.322144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.533 qpair failed and we were unable to recover it.
00:37:46.533 [2024-11-18 12:06:12.322289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.533 [2024-11-18 12:06:12.322324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.533 qpair failed and we were unable to recover it.
00:37:46.533 [2024-11-18 12:06:12.322484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.533 [2024-11-18 12:06:12.322523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.533 qpair failed and we were unable to recover it.
00:37:46.533 [2024-11-18 12:06:12.322647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.533 [2024-11-18 12:06:12.322691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.533 qpair failed and we were unable to recover it.
00:37:46.533 [2024-11-18 12:06:12.322838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.533 [2024-11-18 12:06:12.322876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.533 qpair failed and we were unable to recover it.
00:37:46.533 [2024-11-18 12:06:12.322986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.533 [2024-11-18 12:06:12.323023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.533 qpair failed and we were unable to recover it.
00:37:46.533 [2024-11-18 12:06:12.323199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.533 [2024-11-18 12:06:12.323253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.533 qpair failed and we were unable to recover it.
00:37:46.533 [2024-11-18 12:06:12.323396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.533 [2024-11-18 12:06:12.323433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.533 qpair failed and we were unable to recover it.
00:37:46.533 [2024-11-18 12:06:12.323633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.533 [2024-11-18 12:06:12.323681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.533 qpair failed and we were unable to recover it.
00:37:46.533 [2024-11-18 12:06:12.323810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.533 [2024-11-18 12:06:12.323848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.533 qpair failed and we were unable to recover it.
00:37:46.533 [2024-11-18 12:06:12.323977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.533 [2024-11-18 12:06:12.324053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.533 qpair failed and we were unable to recover it.
00:37:46.533 [2024-11-18 12:06:12.324199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.533 [2024-11-18 12:06:12.324237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.533 qpair failed and we were unable to recover it.
00:37:46.533 [2024-11-18 12:06:12.324357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.533 [2024-11-18 12:06:12.324394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.533 qpair failed and we were unable to recover it.
00:37:46.533 [2024-11-18 12:06:12.324583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.533 [2024-11-18 12:06:12.324617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.533 qpair failed and we were unable to recover it.
00:37:46.533 [2024-11-18 12:06:12.324750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.533 [2024-11-18 12:06:12.324801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.533 qpair failed and we were unable to recover it.
00:37:46.533 [2024-11-18 12:06:12.324980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.533 [2024-11-18 12:06:12.325017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.533 qpair failed and we were unable to recover it.
00:37:46.533 [2024-11-18 12:06:12.325221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.533 [2024-11-18 12:06:12.325258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.533 qpair failed and we were unable to recover it.
00:37:46.533 [2024-11-18 12:06:12.325372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.533 [2024-11-18 12:06:12.325409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.533 qpair failed and we were unable to recover it.
00:37:46.533 [2024-11-18 12:06:12.325567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.533 [2024-11-18 12:06:12.325615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.533 qpair failed and we were unable to recover it.
00:37:46.533 [2024-11-18 12:06:12.325741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.533 [2024-11-18 12:06:12.325776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.533 qpair failed and we were unable to recover it.
00:37:46.533 [2024-11-18 12:06:12.325999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.533 [2024-11-18 12:06:12.326063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.533 qpair failed and we were unable to recover it.
00:37:46.533 [2024-11-18 12:06:12.326232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.533 [2024-11-18 12:06:12.326309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.533 qpair failed and we were unable to recover it.
00:37:46.533 [2024-11-18 12:06:12.326471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.533 [2024-11-18 12:06:12.326524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.533 qpair failed and we were unable to recover it.
00:37:46.533 [2024-11-18 12:06:12.326703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.533 [2024-11-18 12:06:12.326751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.533 qpair failed and we were unable to recover it.
00:37:46.533 [2024-11-18 12:06:12.326974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.533 [2024-11-18 12:06:12.327036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.533 qpair failed and we were unable to recover it.
00:37:46.533 [2024-11-18 12:06:12.327165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.533 [2024-11-18 12:06:12.327229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.533 qpair failed and we were unable to recover it.
00:37:46.533 [2024-11-18 12:06:12.327382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.533 [2024-11-18 12:06:12.327419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.533 qpair failed and we were unable to recover it.
00:37:46.533 [2024-11-18 12:06:12.327629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.533 [2024-11-18 12:06:12.327663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.533 qpair failed and we were unable to recover it.
00:37:46.533 [2024-11-18 12:06:12.327768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.533 [2024-11-18 12:06:12.327802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.533 qpair failed and we were unable to recover it.
00:37:46.533 [2024-11-18 12:06:12.327912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.533 [2024-11-18 12:06:12.327946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.533 qpair failed and we were unable to recover it.
00:37:46.533 [2024-11-18 12:06:12.328217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.533 [2024-11-18 12:06:12.328266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.533 qpair failed and we were unable to recover it.
00:37:46.533 [2024-11-18 12:06:12.328412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.533 [2024-11-18 12:06:12.328465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.533 qpair failed and we were unable to recover it.
00:37:46.533 [2024-11-18 12:06:12.328627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.533 [2024-11-18 12:06:12.328661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.533 qpair failed and we were unable to recover it.
00:37:46.533 [2024-11-18 12:06:12.328786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.533 [2024-11-18 12:06:12.328853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.533 qpair failed and we were unable to recover it.
00:37:46.533 [2024-11-18 12:06:12.329071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.533 [2024-11-18 12:06:12.329110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.533 qpair failed and we were unable to recover it.
00:37:46.534 [2024-11-18 12:06:12.329270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.534 [2024-11-18 12:06:12.329308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.534 qpair failed and we were unable to recover it.
00:37:46.534 [2024-11-18 12:06:12.329431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.534 [2024-11-18 12:06:12.329470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.534 qpair failed and we were unable to recover it.
00:37:46.534 [2024-11-18 12:06:12.329624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.534 [2024-11-18 12:06:12.329672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.534 qpair failed and we were unable to recover it.
00:37:46.534 [2024-11-18 12:06:12.329852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.534 [2024-11-18 12:06:12.329905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.534 qpair failed and we were unable to recover it.
00:37:46.534 [2024-11-18 12:06:12.330118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.534 [2024-11-18 12:06:12.330178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.534 qpair failed and we were unable to recover it.
00:37:46.534 [2024-11-18 12:06:12.330358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.534 [2024-11-18 12:06:12.330397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.534 qpair failed and we were unable to recover it.
00:37:46.534 [2024-11-18 12:06:12.330529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.534 [2024-11-18 12:06:12.330565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.534 qpair failed and we were unable to recover it.
00:37:46.534 [2024-11-18 12:06:12.330782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.534 [2024-11-18 12:06:12.330817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.534 qpair failed and we were unable to recover it.
00:37:46.534 [2024-11-18 12:06:12.330945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.534 [2024-11-18 12:06:12.330989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.534 qpair failed and we were unable to recover it.
00:37:46.534 [2024-11-18 12:06:12.331249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.534 [2024-11-18 12:06:12.331307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.534 qpair failed and we were unable to recover it.
00:37:46.534 [2024-11-18 12:06:12.331483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.534 [2024-11-18 12:06:12.331547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.534 qpair failed and we were unable to recover it.
00:37:46.534 [2024-11-18 12:06:12.331655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.534 [2024-11-18 12:06:12.331689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.534 qpair failed and we were unable to recover it.
00:37:46.534 [2024-11-18 12:06:12.331848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.534 [2024-11-18 12:06:12.331905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.534 qpair failed and we were unable to recover it.
00:37:46.534 [2024-11-18 12:06:12.332088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.534 [2024-11-18 12:06:12.332140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.534 qpair failed and we were unable to recover it.
00:37:46.534 [2024-11-18 12:06:12.332387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.534 [2024-11-18 12:06:12.332435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.534 qpair failed and we were unable to recover it.
00:37:46.534 [2024-11-18 12:06:12.332611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.534 [2024-11-18 12:06:12.332647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.534 qpair failed and we were unable to recover it.
00:37:46.534 [2024-11-18 12:06:12.332800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.534 [2024-11-18 12:06:12.332838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.534 qpair failed and we were unable to recover it.
00:37:46.534 [2024-11-18 12:06:12.332959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.534 [2024-11-18 12:06:12.333010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.534 qpair failed and we were unable to recover it.
00:37:46.534 [2024-11-18 12:06:12.333152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.534 [2024-11-18 12:06:12.333189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.534 qpair failed and we were unable to recover it.
00:37:46.534 [2024-11-18 12:06:12.333353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.534 [2024-11-18 12:06:12.333406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.534 qpair failed and we were unable to recover it.
00:37:46.534 [2024-11-18 12:06:12.333591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.534 [2024-11-18 12:06:12.333639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.534 qpair failed and we were unable to recover it.
00:37:46.534 [2024-11-18 12:06:12.333756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.534 [2024-11-18 12:06:12.333792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.534 qpair failed and we were unable to recover it.
00:37:46.534 [2024-11-18 12:06:12.333909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.534 [2024-11-18 12:06:12.333944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.534 qpair failed and we were unable to recover it.
00:37:46.534 [2024-11-18 12:06:12.334105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.534 [2024-11-18 12:06:12.334140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.534 qpair failed and we were unable to recover it.
00:37:46.534 [2024-11-18 12:06:12.334326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.534 [2024-11-18 12:06:12.334364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.534 qpair failed and we were unable to recover it.
00:37:46.534 [2024-11-18 12:06:12.334486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.534 [2024-11-18 12:06:12.334548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.534 qpair failed and we were unable to recover it.
00:37:46.534 [2024-11-18 12:06:12.334682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.534 [2024-11-18 12:06:12.334730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.534 qpair failed and we were unable to recover it.
00:37:46.534 [2024-11-18 12:06:12.334898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.534 [2024-11-18 12:06:12.334956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.534 qpair failed and we were unable to recover it.
00:37:46.534 [2024-11-18 12:06:12.335083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.534 [2024-11-18 12:06:12.335123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.534 qpair failed and we were unable to recover it.
00:37:46.534 [2024-11-18 12:06:12.335263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.534 [2024-11-18 12:06:12.335301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.534 qpair failed and we were unable to recover it.
00:37:46.534 [2024-11-18 12:06:12.335474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.534 [2024-11-18 12:06:12.335515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.534 qpair failed and we were unable to recover it.
00:37:46.534 [2024-11-18 12:06:12.335613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.534 [2024-11-18 12:06:12.335647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.534 qpair failed and we were unable to recover it.
00:37:46.534 [2024-11-18 12:06:12.335806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.534 [2024-11-18 12:06:12.335855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.534 qpair failed and we were unable to recover it.
00:37:46.534 [2024-11-18 12:06:12.335975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.534 [2024-11-18 12:06:12.336029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.534 qpair failed and we were unable to recover it.
00:37:46.534 [2024-11-18 12:06:12.336227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.534 [2024-11-18 12:06:12.336308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.534 qpair failed and we were unable to recover it.
00:37:46.534 [2024-11-18 12:06:12.336484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.534 [2024-11-18 12:06:12.336545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.534 qpair failed and we were unable to recover it.
00:37:46.534 [2024-11-18 12:06:12.336725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.534 [2024-11-18 12:06:12.336772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.534 qpair failed and we were unable to recover it.
00:37:46.534 [2024-11-18 12:06:12.336982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.535 [2024-11-18 12:06:12.337040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.535 qpair failed and we were unable to recover it.
00:37:46.535 [2024-11-18 12:06:12.337241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.535 [2024-11-18 12:06:12.337299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.535 qpair failed and we were unable to recover it.
00:37:46.535 [2024-11-18 12:06:12.337434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.535 [2024-11-18 12:06:12.337468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.535 qpair failed and we were unable to recover it.
00:37:46.535 [2024-11-18 12:06:12.337621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.535 [2024-11-18 12:06:12.337673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.535 qpair failed and we were unable to recover it.
00:37:46.535 [2024-11-18 12:06:12.337864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.535 [2024-11-18 12:06:12.337899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.535 qpair failed and we were unable to recover it.
00:37:46.535 [2024-11-18 12:06:12.338004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.535 [2024-11-18 12:06:12.338038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.535 qpair failed and we were unable to recover it.
00:37:46.535 [2024-11-18 12:06:12.338201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.535 [2024-11-18 12:06:12.338234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.535 qpair failed and we were unable to recover it. 00:37:46.535 [2024-11-18 12:06:12.338335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.535 [2024-11-18 12:06:12.338369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.535 qpair failed and we were unable to recover it. 00:37:46.535 [2024-11-18 12:06:12.338532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.535 [2024-11-18 12:06:12.338568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.535 qpair failed and we were unable to recover it. 00:37:46.535 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3141657 Killed "${NVMF_APP[@]}" "$@" 00:37:46.535 [2024-11-18 12:06:12.338717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.535 [2024-11-18 12:06:12.338764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.535 qpair failed and we were unable to recover it. 00:37:46.535 [2024-11-18 12:06:12.338910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.535 [2024-11-18 12:06:12.338946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.535 qpair failed and we were unable to recover it. 
00:37:46.535 12:06:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:37:46.535 [2024-11-18 12:06:12.339092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.535 [2024-11-18 12:06:12.339126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.535 qpair failed and we were unable to recover it. 00:37:46.535 12:06:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:37:46.535 [2024-11-18 12:06:12.339224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.535 [2024-11-18 12:06:12.339258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.535 qpair failed and we were unable to recover it. 00:37:46.535 12:06:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:46.535 [2024-11-18 12:06:12.339385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.535 12:06:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:46.535 [2024-11-18 12:06:12.339419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.535 qpair failed and we were unable to recover it. 
00:37:46.535 12:06:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:46.535 [2024-11-18 12:06:12.339555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.535 [2024-11-18 12:06:12.339589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.535 qpair failed and we were unable to recover it. 00:37:46.535 [2024-11-18 12:06:12.339754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.535 [2024-11-18 12:06:12.339807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.535 qpair failed and we were unable to recover it. 00:37:46.535 [2024-11-18 12:06:12.339965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.535 [2024-11-18 12:06:12.340018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.535 qpair failed and we were unable to recover it. 00:37:46.535 [2024-11-18 12:06:12.340175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.535 [2024-11-18 12:06:12.340214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.535 qpair failed and we were unable to recover it. 00:37:46.535 [2024-11-18 12:06:12.340366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.535 [2024-11-18 12:06:12.340403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.535 qpair failed and we were unable to recover it. 
00:37:46.535 [2024-11-18 12:06:12.340558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.535 [2024-11-18 12:06:12.340593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.535 qpair failed and we were unable to recover it. 00:37:46.535 [2024-11-18 12:06:12.340693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.535 [2024-11-18 12:06:12.340728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.535 qpair failed and we were unable to recover it. 00:37:46.535 [2024-11-18 12:06:12.340857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.535 [2024-11-18 12:06:12.340895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.535 qpair failed and we were unable to recover it. 00:37:46.535 [2024-11-18 12:06:12.341027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.535 [2024-11-18 12:06:12.341080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.535 qpair failed and we were unable to recover it. 00:37:46.535 [2024-11-18 12:06:12.341253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.535 [2024-11-18 12:06:12.341309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.535 qpair failed and we were unable to recover it. 
00:37:46.535 [2024-11-18 12:06:12.341430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.535 [2024-11-18 12:06:12.341467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.535 qpair failed and we were unable to recover it. 00:37:46.535 [2024-11-18 12:06:12.341637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.535 [2024-11-18 12:06:12.341670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.535 qpair failed and we were unable to recover it. 00:37:46.535 [2024-11-18 12:06:12.341819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.535 [2024-11-18 12:06:12.341877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.535 qpair failed and we were unable to recover it. 00:37:46.535 [2024-11-18 12:06:12.342015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.535 [2024-11-18 12:06:12.342071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.535 qpair failed and we were unable to recover it. 00:37:46.535 [2024-11-18 12:06:12.342219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.535 [2024-11-18 12:06:12.342256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.535 qpair failed and we were unable to recover it. 
00:37:46.535 [2024-11-18 12:06:12.342406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.535 [2024-11-18 12:06:12.342467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.535 qpair failed and we were unable to recover it. 00:37:46.535 [2024-11-18 12:06:12.342604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.535 [2024-11-18 12:06:12.342651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.535 qpair failed and we were unable to recover it. 00:37:46.535 [2024-11-18 12:06:12.342845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.536 [2024-11-18 12:06:12.342884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.536 qpair failed and we were unable to recover it. 00:37:46.536 [2024-11-18 12:06:12.343063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.536 [2024-11-18 12:06:12.343102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.536 qpair failed and we were unable to recover it. 00:37:46.536 [2024-11-18 12:06:12.343223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.536 [2024-11-18 12:06:12.343261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.536 qpair failed and we were unable to recover it. 
00:37:46.536 [2024-11-18 12:06:12.343386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.536 12:06:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3142836 00:37:46.536 [2024-11-18 12:06:12.343422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.536 qpair failed and we were unable to recover it. 00:37:46.536 12:06:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3142836 00:37:46.536 [2024-11-18 12:06:12.343553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.536 [2024-11-18 12:06:12.343589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.536 qpair failed and we were unable to recover it. 00:37:46.536 12:06:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:37:46.536 12:06:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3142836 ']' 00:37:46.536 [2024-11-18 12:06:12.343718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.536 [2024-11-18 12:06:12.343753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.536 qpair failed and we were unable to recover it. 
00:37:46.536 12:06:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:46.536 [2024-11-18 12:06:12.343936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.536 [2024-11-18 12:06:12.343996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.536 qpair failed and we were unable to recover it. 00:37:46.536 12:06:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:46.536 12:06:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:46.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:46.536 [2024-11-18 12:06:12.344264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.536 [2024-11-18 12:06:12.344344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.536 qpair failed and we were unable to recover it. 00:37:46.536 12:06:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:46.536 [2024-11-18 12:06:12.344458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.536 [2024-11-18 12:06:12.344503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.536 qpair failed and we were unable to recover it. 
00:37:46.536 12:06:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:46.536 [2024-11-18 12:06:12.344649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.536 [2024-11-18 12:06:12.344696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.536 qpair failed and we were unable to recover it. 00:37:46.536 [2024-11-18 12:06:12.344916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.536 [2024-11-18 12:06:12.344976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.536 qpair failed and we were unable to recover it. 00:37:46.536 [2024-11-18 12:06:12.345170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.536 [2024-11-18 12:06:12.345234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.536 qpair failed and we were unable to recover it. 00:37:46.536 [2024-11-18 12:06:12.345353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.536 [2024-11-18 12:06:12.345391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.536 qpair failed and we were unable to recover it. 00:37:46.536 [2024-11-18 12:06:12.345545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.536 [2024-11-18 12:06:12.345578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.536 qpair failed and we were unable to recover it. 
00:37:46.536 [2024-11-18 12:06:12.345718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.536 [2024-11-18 12:06:12.345753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.536 qpair failed and we were unable to recover it. 00:37:46.536 [2024-11-18 12:06:12.345951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.536 [2024-11-18 12:06:12.346014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.536 qpair failed and we were unable to recover it. 00:37:46.536 [2024-11-18 12:06:12.346250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.536 [2024-11-18 12:06:12.346309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.536 qpair failed and we were unable to recover it. 00:37:46.536 [2024-11-18 12:06:12.346486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.536 [2024-11-18 12:06:12.346549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.536 qpair failed and we were unable to recover it. 00:37:46.536 [2024-11-18 12:06:12.346677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.536 [2024-11-18 12:06:12.346711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.536 qpair failed and we were unable to recover it. 
00:37:46.536 [2024-11-18 12:06:12.346844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.536 [2024-11-18 12:06:12.346888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.536 qpair failed and we were unable to recover it. 00:37:46.536 [2024-11-18 12:06:12.347009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.536 [2024-11-18 12:06:12.347046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.536 qpair failed and we were unable to recover it. 00:37:46.536 [2024-11-18 12:06:12.347216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.536 [2024-11-18 12:06:12.347274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.536 qpair failed and we were unable to recover it. 00:37:46.536 [2024-11-18 12:06:12.347450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.536 [2024-11-18 12:06:12.347505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.536 qpair failed and we were unable to recover it. 00:37:46.536 [2024-11-18 12:06:12.347619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.536 [2024-11-18 12:06:12.347655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.536 qpair failed and we were unable to recover it. 
00:37:46.536 [2024-11-18 12:06:12.347813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.536 [2024-11-18 12:06:12.347866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.536 qpair failed and we were unable to recover it. 00:37:46.536 [2024-11-18 12:06:12.348056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.536 [2024-11-18 12:06:12.348109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.536 qpair failed and we were unable to recover it. 00:37:46.536 [2024-11-18 12:06:12.348253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.536 [2024-11-18 12:06:12.348316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.536 qpair failed and we were unable to recover it. 00:37:46.536 [2024-11-18 12:06:12.348480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.536 [2024-11-18 12:06:12.348522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.536 qpair failed and we were unable to recover it. 00:37:46.536 [2024-11-18 12:06:12.348630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.536 [2024-11-18 12:06:12.348665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.536 qpair failed and we were unable to recover it. 
00:37:46.536 [2024-11-18 12:06:12.348818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.536 [2024-11-18 12:06:12.348886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.536 qpair failed and we were unable to recover it. 00:37:46.536 [2024-11-18 12:06:12.349126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.536 [2024-11-18 12:06:12.349185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.536 qpair failed and we were unable to recover it. 00:37:46.536 [2024-11-18 12:06:12.349340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.536 [2024-11-18 12:06:12.349379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.536 qpair failed and we were unable to recover it. 00:37:46.536 [2024-11-18 12:06:12.349509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.536 [2024-11-18 12:06:12.349563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.536 qpair failed and we were unable to recover it. 00:37:46.536 [2024-11-18 12:06:12.349723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.537 [2024-11-18 12:06:12.349762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.537 qpair failed and we were unable to recover it. 
00:37:46.537 [2024-11-18 12:06:12.349939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.537 [2024-11-18 12:06:12.349977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.537 qpair failed and we were unable to recover it. 00:37:46.537 [2024-11-18 12:06:12.350096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.537 [2024-11-18 12:06:12.350136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.537 qpair failed and we were unable to recover it. 00:37:46.537 [2024-11-18 12:06:12.350263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.537 [2024-11-18 12:06:12.350301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.537 qpair failed and we were unable to recover it. 00:37:46.537 [2024-11-18 12:06:12.350470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.537 [2024-11-18 12:06:12.350552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.537 qpair failed and we were unable to recover it. 00:37:46.537 [2024-11-18 12:06:12.350675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.537 [2024-11-18 12:06:12.350723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.537 qpair failed and we were unable to recover it. 
00:37:46.537 [2024-11-18 12:06:12.350891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.537 [2024-11-18 12:06:12.350933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.537 qpair failed and we were unable to recover it. 00:37:46.537 [2024-11-18 12:06:12.351065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.537 [2024-11-18 12:06:12.351104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.537 qpair failed and we were unable to recover it. 00:37:46.537 [2024-11-18 12:06:12.351263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.537 [2024-11-18 12:06:12.351319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.537 qpair failed and we were unable to recover it. 00:37:46.537 [2024-11-18 12:06:12.351508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.537 [2024-11-18 12:06:12.351545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.537 qpair failed and we were unable to recover it. 00:37:46.537 [2024-11-18 12:06:12.351662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.537 [2024-11-18 12:06:12.351697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.537 qpair failed and we were unable to recover it. 
00:37:46.537 [2024-11-18 12:06:12.351880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.537 [2024-11-18 12:06:12.351933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.537 qpair failed and we were unable to recover it. 00:37:46.537 [2024-11-18 12:06:12.352088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.537 [2024-11-18 12:06:12.352146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.537 qpair failed and we were unable to recover it. 00:37:46.537 [2024-11-18 12:06:12.352276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.537 [2024-11-18 12:06:12.352314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.537 qpair failed and we were unable to recover it. 00:37:46.537 [2024-11-18 12:06:12.352440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.537 [2024-11-18 12:06:12.352474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.537 qpair failed and we were unable to recover it. 00:37:46.537 [2024-11-18 12:06:12.352586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.537 [2024-11-18 12:06:12.352620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.537 qpair failed and we were unable to recover it. 
00:37:46.537 [2024-11-18 12:06:12.352726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.537 [2024-11-18 12:06:12.352778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.537 qpair failed and we were unable to recover it. 00:37:46.537 [2024-11-18 12:06:12.352987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.537 [2024-11-18 12:06:12.353025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.537 qpair failed and we were unable to recover it. 00:37:46.537 [2024-11-18 12:06:12.353162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.537 [2024-11-18 12:06:12.353200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.537 qpair failed and we were unable to recover it. 00:37:46.537 [2024-11-18 12:06:12.353353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.537 [2024-11-18 12:06:12.353390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.537 qpair failed and we were unable to recover it. 00:37:46.537 [2024-11-18 12:06:12.353527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.537 [2024-11-18 12:06:12.353562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.537 qpair failed and we were unable to recover it. 
00:37:46.537 [2024-11-18 12:06:12.353680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.537 [2024-11-18 12:06:12.353715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.537 qpair failed and we were unable to recover it. 00:37:46.537 [2024-11-18 12:06:12.353826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.537 [2024-11-18 12:06:12.353860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.537 qpair failed and we were unable to recover it. 00:37:46.537 [2024-11-18 12:06:12.353981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.537 [2024-11-18 12:06:12.354030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.537 qpair failed and we were unable to recover it. 00:37:46.537 [2024-11-18 12:06:12.354189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.537 [2024-11-18 12:06:12.354243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.537 qpair failed and we were unable to recover it. 00:37:46.537 [2024-11-18 12:06:12.354377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.537 [2024-11-18 12:06:12.354431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.537 qpair failed and we were unable to recover it. 
00:37:46.537 [2024-11-18 12:06:12.354570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.537 [2024-11-18 12:06:12.354607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.537 qpair failed and we were unable to recover it. 00:37:46.537 [2024-11-18 12:06:12.354717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.537 [2024-11-18 12:06:12.354752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.537 qpair failed and we were unable to recover it. 00:37:46.537 [2024-11-18 12:06:12.354903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.537 [2024-11-18 12:06:12.354941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.537 qpair failed and we were unable to recover it. 00:37:46.537 [2024-11-18 12:06:12.355165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.537 [2024-11-18 12:06:12.355204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.537 qpair failed and we were unable to recover it. 00:37:46.537 [2024-11-18 12:06:12.355379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.537 [2024-11-18 12:06:12.355418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.537 qpair failed and we were unable to recover it. 
00:37:46.537 [2024-11-18 12:06:12.355589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.537 [2024-11-18 12:06:12.355623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.537 qpair failed and we were unable to recover it. 00:37:46.537 [2024-11-18 12:06:12.355774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.537 [2024-11-18 12:06:12.355808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.537 qpair failed and we were unable to recover it. 00:37:46.537 [2024-11-18 12:06:12.355941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.537 [2024-11-18 12:06:12.355980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.537 qpair failed and we were unable to recover it. 00:37:46.537 [2024-11-18 12:06:12.356108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.537 [2024-11-18 12:06:12.356142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.537 qpair failed and we were unable to recover it. 00:37:46.537 [2024-11-18 12:06:12.356263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.537 [2024-11-18 12:06:12.356301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.537 qpair failed and we were unable to recover it. 
00:37:46.821 [2024-11-18 12:06:12.356431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.821 [2024-11-18 12:06:12.356470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.821 qpair failed and we were unable to recover it. 00:37:46.821 [2024-11-18 12:06:12.356648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.821 [2024-11-18 12:06:12.356696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.821 qpair failed and we were unable to recover it. 00:37:46.821 [2024-11-18 12:06:12.356843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.821 [2024-11-18 12:06:12.356911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.821 qpair failed and we were unable to recover it. 00:37:46.821 [2024-11-18 12:06:12.357079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.821 [2024-11-18 12:06:12.357118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.821 qpair failed and we were unable to recover it. 00:37:46.821 [2024-11-18 12:06:12.357228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.821 [2024-11-18 12:06:12.357265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.821 qpair failed and we were unable to recover it. 
00:37:46.821 [2024-11-18 12:06:12.357393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.821 [2024-11-18 12:06:12.357430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.821 qpair failed and we were unable to recover it. 00:37:46.821 [2024-11-18 12:06:12.357597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.821 [2024-11-18 12:06:12.357631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.821 qpair failed and we were unable to recover it. 00:37:46.821 [2024-11-18 12:06:12.357734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.821 [2024-11-18 12:06:12.357767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.821 qpair failed and we were unable to recover it. 00:37:46.821 [2024-11-18 12:06:12.357888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.822 [2024-11-18 12:06:12.357936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.822 qpair failed and we were unable to recover it. 00:37:46.822 [2024-11-18 12:06:12.358078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.822 [2024-11-18 12:06:12.358117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.822 qpair failed and we were unable to recover it. 
00:37:46.822 [2024-11-18 12:06:12.358238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.822 [2024-11-18 12:06:12.358278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.822 qpair failed and we were unable to recover it. 00:37:46.822 [2024-11-18 12:06:12.358409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.822 [2024-11-18 12:06:12.358448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.822 qpair failed and we were unable to recover it. 00:37:46.822 [2024-11-18 12:06:12.358595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.822 [2024-11-18 12:06:12.358630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.822 qpair failed and we were unable to recover it. 00:37:46.822 [2024-11-18 12:06:12.358748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.822 [2024-11-18 12:06:12.358781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.822 qpair failed and we were unable to recover it. 00:37:46.822 [2024-11-18 12:06:12.358890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.822 [2024-11-18 12:06:12.358922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.822 qpair failed and we were unable to recover it. 
00:37:46.822 [2024-11-18 12:06:12.359073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.822 [2024-11-18 12:06:12.359109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.822 qpair failed and we were unable to recover it. 00:37:46.822 [2024-11-18 12:06:12.359248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.822 [2024-11-18 12:06:12.359284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.822 qpair failed and we were unable to recover it. 00:37:46.822 [2024-11-18 12:06:12.359417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.822 [2024-11-18 12:06:12.359456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.822 qpair failed and we were unable to recover it. 00:37:46.822 [2024-11-18 12:06:12.359624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.822 [2024-11-18 12:06:12.359665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.822 qpair failed and we were unable to recover it. 00:37:46.822 [2024-11-18 12:06:12.359795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.822 [2024-11-18 12:06:12.359830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.822 qpair failed and we were unable to recover it. 
00:37:46.822 [2024-11-18 12:06:12.359956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.822 [2024-11-18 12:06:12.359994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.822 qpair failed and we were unable to recover it. 00:37:46.822 [2024-11-18 12:06:12.360098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.822 [2024-11-18 12:06:12.360136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.822 qpair failed and we were unable to recover it. 00:37:46.822 [2024-11-18 12:06:12.360271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.822 [2024-11-18 12:06:12.360309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.822 qpair failed and we were unable to recover it. 00:37:46.822 [2024-11-18 12:06:12.360439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.822 [2024-11-18 12:06:12.360473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.822 qpair failed and we were unable to recover it. 00:37:46.822 [2024-11-18 12:06:12.360598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.822 [2024-11-18 12:06:12.360631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.822 qpair failed and we were unable to recover it. 
00:37:46.822 [2024-11-18 12:06:12.360745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.822 [2024-11-18 12:06:12.360778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.822 qpair failed and we were unable to recover it. 00:37:46.822 [2024-11-18 12:06:12.360935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.822 [2024-11-18 12:06:12.360972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.822 qpair failed and we were unable to recover it. 00:37:46.822 [2024-11-18 12:06:12.361120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.822 [2024-11-18 12:06:12.361157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.822 qpair failed and we were unable to recover it. 00:37:46.822 [2024-11-18 12:06:12.361276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.822 [2024-11-18 12:06:12.361329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.822 qpair failed and we were unable to recover it. 00:37:46.822 [2024-11-18 12:06:12.361463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.822 [2024-11-18 12:06:12.361505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.822 qpair failed and we were unable to recover it. 
00:37:46.822 [2024-11-18 12:06:12.361644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.822 [2024-11-18 12:06:12.361692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.822 qpair failed and we were unable to recover it. 00:37:46.822 [2024-11-18 12:06:12.361843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.822 [2024-11-18 12:06:12.361877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.822 qpair failed and we were unable to recover it. 00:37:46.822 [2024-11-18 12:06:12.362030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.822 [2024-11-18 12:06:12.362067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.822 qpair failed and we were unable to recover it. 00:37:46.822 [2024-11-18 12:06:12.362173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.822 [2024-11-18 12:06:12.362211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.822 qpair failed and we were unable to recover it. 00:37:46.822 [2024-11-18 12:06:12.362325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.822 [2024-11-18 12:06:12.362362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.822 qpair failed and we were unable to recover it. 
00:37:46.822 [2024-11-18 12:06:12.362523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.822 [2024-11-18 12:06:12.362574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.822 qpair failed and we were unable to recover it. 00:37:46.822 [2024-11-18 12:06:12.362684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.822 [2024-11-18 12:06:12.362717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.822 qpair failed and we were unable to recover it. 00:37:46.822 [2024-11-18 12:06:12.362912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.822 [2024-11-18 12:06:12.362953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.822 qpair failed and we were unable to recover it. 00:37:46.822 [2024-11-18 12:06:12.363076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.822 [2024-11-18 12:06:12.363125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.822 qpair failed and we were unable to recover it. 00:37:46.822 [2024-11-18 12:06:12.363272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.822 [2024-11-18 12:06:12.363308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.822 qpair failed and we were unable to recover it. 
00:37:46.822 [2024-11-18 12:06:12.363485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.822 [2024-11-18 12:06:12.363529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.822 qpair failed and we were unable to recover it. 00:37:46.822 [2024-11-18 12:06:12.363667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.822 [2024-11-18 12:06:12.363702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.822 qpair failed and we were unable to recover it. 00:37:46.822 [2024-11-18 12:06:12.363848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.822 [2024-11-18 12:06:12.363883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.822 qpair failed and we were unable to recover it. 00:37:46.822 [2024-11-18 12:06:12.364054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.822 [2024-11-18 12:06:12.364088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.822 qpair failed and we were unable to recover it. 00:37:46.822 [2024-11-18 12:06:12.364198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.822 [2024-11-18 12:06:12.364231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.822 qpair failed and we were unable to recover it. 
00:37:46.822 [2024-11-18 12:06:12.364339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.822 [2024-11-18 12:06:12.364372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.822 qpair failed and we were unable to recover it. 00:37:46.822 [2024-11-18 12:06:12.364483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.822 [2024-11-18 12:06:12.364523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.822 qpair failed and we were unable to recover it. 00:37:46.822 [2024-11-18 12:06:12.364656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.823 [2024-11-18 12:06:12.364689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.823 qpair failed and we were unable to recover it. 00:37:46.823 [2024-11-18 12:06:12.364828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.823 [2024-11-18 12:06:12.364861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.823 qpair failed and we were unable to recover it. 00:37:46.823 [2024-11-18 12:06:12.364971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.823 [2024-11-18 12:06:12.365004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.823 qpair failed and we were unable to recover it. 
00:37:46.823 [2024-11-18 12:06:12.365149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.823 [2024-11-18 12:06:12.365184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.823 qpair failed and we were unable to recover it. 00:37:46.823 [2024-11-18 12:06:12.365292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.823 [2024-11-18 12:06:12.365325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.823 qpair failed and we were unable to recover it. 00:37:46.823 [2024-11-18 12:06:12.365453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.823 [2024-11-18 12:06:12.365487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.823 qpair failed and we were unable to recover it. 00:37:46.823 [2024-11-18 12:06:12.365619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.823 [2024-11-18 12:06:12.365668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.823 qpair failed and we were unable to recover it. 00:37:46.823 [2024-11-18 12:06:12.365827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.823 [2024-11-18 12:06:12.365874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.823 qpair failed and we were unable to recover it. 
00:37:46.823 [2024-11-18 12:06:12.366045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.823 [2024-11-18 12:06:12.366080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.823 qpair failed and we were unable to recover it. 00:37:46.823 [2024-11-18 12:06:12.366244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.823 [2024-11-18 12:06:12.366278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.823 qpair failed and we were unable to recover it. 00:37:46.823 [2024-11-18 12:06:12.366387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.823 [2024-11-18 12:06:12.366420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.823 qpair failed and we were unable to recover it. 00:37:46.823 [2024-11-18 12:06:12.366539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.823 [2024-11-18 12:06:12.366574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.823 qpair failed and we were unable to recover it. 00:37:46.823 [2024-11-18 12:06:12.366711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.823 [2024-11-18 12:06:12.366746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.823 qpair failed and we were unable to recover it. 
00:37:46.823 [2024-11-18 12:06:12.366893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.823 [2024-11-18 12:06:12.366926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.823 qpair failed and we were unable to recover it. 00:37:46.823 [2024-11-18 12:06:12.367033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.823 [2024-11-18 12:06:12.367066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.823 qpair failed and we were unable to recover it. 00:37:46.823 [2024-11-18 12:06:12.367229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.823 [2024-11-18 12:06:12.367263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.823 qpair failed and we were unable to recover it. 00:37:46.823 [2024-11-18 12:06:12.367399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.823 [2024-11-18 12:06:12.367432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.823 qpair failed and we were unable to recover it. 00:37:46.823 [2024-11-18 12:06:12.367576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.823 [2024-11-18 12:06:12.367612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.823 qpair failed and we were unable to recover it. 
00:37:46.823 [2024-11-18 12:06:12.367766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.823 [2024-11-18 12:06:12.367802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.823 qpair failed and we were unable to recover it. 00:37:46.823 [2024-11-18 12:06:12.367940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.823 [2024-11-18 12:06:12.367986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.823 qpair failed and we were unable to recover it. 00:37:46.823 [2024-11-18 12:06:12.368094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.823 [2024-11-18 12:06:12.368128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.823 qpair failed and we were unable to recover it. 00:37:46.823 [2024-11-18 12:06:12.368266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.823 [2024-11-18 12:06:12.368301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.823 qpair failed and we were unable to recover it. 00:37:46.823 [2024-11-18 12:06:12.368484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.823 [2024-11-18 12:06:12.368541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.823 qpair failed and we were unable to recover it. 
00:37:46.823 [2024-11-18 12:06:12.368676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.823 [2024-11-18 12:06:12.368724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.823 qpair failed and we were unable to recover it. 00:37:46.823 [2024-11-18 12:06:12.368883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.823 [2024-11-18 12:06:12.368920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.823 qpair failed and we were unable to recover it. 00:37:46.823 [2024-11-18 12:06:12.369061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.823 [2024-11-18 12:06:12.369096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.823 qpair failed and we were unable to recover it. 00:37:46.823 [2024-11-18 12:06:12.369233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.823 [2024-11-18 12:06:12.369267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.823 qpair failed and we were unable to recover it. 00:37:46.823 [2024-11-18 12:06:12.369379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.823 [2024-11-18 12:06:12.369414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.823 qpair failed and we were unable to recover it. 
00:37:46.823 [2024-11-18 12:06:12.369553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.823 [2024-11-18 12:06:12.369601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.823 qpair failed and we were unable to recover it. 00:37:46.823 [2024-11-18 12:06:12.369726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.823 [2024-11-18 12:06:12.369762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.823 qpair failed and we were unable to recover it. 00:37:46.823 [2024-11-18 12:06:12.369875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.823 [2024-11-18 12:06:12.369916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.823 qpair failed and we were unable to recover it. 00:37:46.823 [2024-11-18 12:06:12.370056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.823 [2024-11-18 12:06:12.370090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.823 qpair failed and we were unable to recover it. 00:37:46.823 [2024-11-18 12:06:12.370256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.823 [2024-11-18 12:06:12.370290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.823 qpair failed and we were unable to recover it. 
00:37:46.823 [2024-11-18 12:06:12.370412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.823 [2024-11-18 12:06:12.370459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.823 qpair failed and we were unable to recover it. 00:37:46.823 [2024-11-18 12:06:12.370613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.823 [2024-11-18 12:06:12.370648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.823 qpair failed and we were unable to recover it. 00:37:46.823 [2024-11-18 12:06:12.370829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.823 [2024-11-18 12:06:12.370877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.823 qpair failed and we were unable to recover it. 00:37:46.823 [2024-11-18 12:06:12.371020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.823 [2024-11-18 12:06:12.371055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.823 qpair failed and we were unable to recover it. 00:37:46.823 [2024-11-18 12:06:12.371158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.823 [2024-11-18 12:06:12.371191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.823 qpair failed and we were unable to recover it. 
00:37:46.823 [2024-11-18 12:06:12.371306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.823 [2024-11-18 12:06:12.371340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.823 qpair failed and we were unable to recover it. 00:37:46.823 [2024-11-18 12:06:12.371480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.823 [2024-11-18 12:06:12.371522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.823 qpair failed and we were unable to recover it. 00:37:46.823 [2024-11-18 12:06:12.371670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.823 [2024-11-18 12:06:12.371717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.823 qpair failed and we were unable to recover it. 00:37:46.823 [2024-11-18 12:06:12.371862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.824 [2024-11-18 12:06:12.371899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.824 qpair failed and we were unable to recover it. 00:37:46.824 [2024-11-18 12:06:12.372039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.824 [2024-11-18 12:06:12.372073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.824 qpair failed and we were unable to recover it. 
00:37:46.824 [2024-11-18 12:06:12.372185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.824 [2024-11-18 12:06:12.372219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.824 qpair failed and we were unable to recover it.
00:37:46.824 [2024-11-18 12:06:12.372362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.824 [2024-11-18 12:06:12.372396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.824 qpair failed and we were unable to recover it.
00:37:46.824 [2024-11-18 12:06:12.372507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.824 [2024-11-18 12:06:12.372541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.824 qpair failed and we were unable to recover it.
00:37:46.824 [2024-11-18 12:06:12.372678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.824 [2024-11-18 12:06:12.372711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.824 qpair failed and we were unable to recover it.
00:37:46.824 [2024-11-18 12:06:12.372810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.824 [2024-11-18 12:06:12.372843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.824 qpair failed and we were unable to recover it.
00:37:46.824 [2024-11-18 12:06:12.372977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.824 [2024-11-18 12:06:12.373011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.824 qpair failed and we were unable to recover it.
00:37:46.824 [2024-11-18 12:06:12.373151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.824 [2024-11-18 12:06:12.373184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.824 qpair failed and we were unable to recover it.
00:37:46.824 [2024-11-18 12:06:12.373331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.824 [2024-11-18 12:06:12.373378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.824 qpair failed and we were unable to recover it.
00:37:46.824 [2024-11-18 12:06:12.373512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.824 [2024-11-18 12:06:12.373561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.824 qpair failed and we were unable to recover it.
00:37:46.824 [2024-11-18 12:06:12.373707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.824 [2024-11-18 12:06:12.373742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.824 qpair failed and we were unable to recover it.
00:37:46.824 [2024-11-18 12:06:12.373907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.824 [2024-11-18 12:06:12.373941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.824 qpair failed and we were unable to recover it.
00:37:46.824 [2024-11-18 12:06:12.374077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.824 [2024-11-18 12:06:12.374111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.824 qpair failed and we were unable to recover it.
00:37:46.824 [2024-11-18 12:06:12.374241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.824 [2024-11-18 12:06:12.374275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.824 qpair failed and we were unable to recover it.
00:37:46.824 [2024-11-18 12:06:12.374439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.824 [2024-11-18 12:06:12.374484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.824 qpair failed and we were unable to recover it.
00:37:46.824 [2024-11-18 12:06:12.374641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.824 [2024-11-18 12:06:12.374676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.824 qpair failed and we were unable to recover it.
00:37:46.824 [2024-11-18 12:06:12.374823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.824 [2024-11-18 12:06:12.374860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.824 qpair failed and we were unable to recover it.
00:37:46.824 [2024-11-18 12:06:12.374965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.824 [2024-11-18 12:06:12.374999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.824 qpair failed and we were unable to recover it.
00:37:46.824 [2024-11-18 12:06:12.375135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.824 [2024-11-18 12:06:12.375169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.824 qpair failed and we were unable to recover it.
00:37:46.824 [2024-11-18 12:06:12.375299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.824 [2024-11-18 12:06:12.375332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.824 qpair failed and we were unable to recover it.
00:37:46.824 [2024-11-18 12:06:12.375467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.824 [2024-11-18 12:06:12.375508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.824 qpair failed and we were unable to recover it.
00:37:46.824 [2024-11-18 12:06:12.375636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.824 [2024-11-18 12:06:12.375685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.824 qpair failed and we were unable to recover it.
00:37:46.824 [2024-11-18 12:06:12.375824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.824 [2024-11-18 12:06:12.375860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.824 qpair failed and we were unable to recover it.
00:37:46.824 [2024-11-18 12:06:12.375972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.824 [2024-11-18 12:06:12.376009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.824 qpair failed and we were unable to recover it.
00:37:46.824 [2024-11-18 12:06:12.376158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.824 [2024-11-18 12:06:12.376193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.824 qpair failed and we were unable to recover it.
00:37:46.824 [2024-11-18 12:06:12.376332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.824 [2024-11-18 12:06:12.376367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.824 qpair failed and we were unable to recover it.
00:37:46.824 [2024-11-18 12:06:12.376506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.824 [2024-11-18 12:06:12.376543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.824 qpair failed and we were unable to recover it.
00:37:46.824 [2024-11-18 12:06:12.376678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.824 [2024-11-18 12:06:12.376712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.824 qpair failed and we were unable to recover it.
00:37:46.824 [2024-11-18 12:06:12.376814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.824 [2024-11-18 12:06:12.376854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.824 qpair failed and we were unable to recover it.
00:37:46.824 [2024-11-18 12:06:12.377000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.824 [2024-11-18 12:06:12.377034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.824 qpair failed and we were unable to recover it.
00:37:46.824 [2024-11-18 12:06:12.377136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.824 [2024-11-18 12:06:12.377171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.824 qpair failed and we were unable to recover it.
00:37:46.824 [2024-11-18 12:06:12.377280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.824 [2024-11-18 12:06:12.377315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.824 qpair failed and we were unable to recover it.
00:37:46.824 [2024-11-18 12:06:12.377419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.824 [2024-11-18 12:06:12.377454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.824 qpair failed and we were unable to recover it.
00:37:46.824 [2024-11-18 12:06:12.377594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.824 [2024-11-18 12:06:12.377628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.824 qpair failed and we were unable to recover it.
00:37:46.824 [2024-11-18 12:06:12.377789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.824 [2024-11-18 12:06:12.377823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.824 qpair failed and we were unable to recover it.
00:37:46.824 [2024-11-18 12:06:12.377931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.824 [2024-11-18 12:06:12.377965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.824 qpair failed and we were unable to recover it.
00:37:46.824 [2024-11-18 12:06:12.378085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.824 [2024-11-18 12:06:12.378119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.824 qpair failed and we were unable to recover it.
00:37:46.824 [2024-11-18 12:06:12.378222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.824 [2024-11-18 12:06:12.378256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.824 qpair failed and we were unable to recover it.
00:37:46.824 [2024-11-18 12:06:12.378369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.824 [2024-11-18 12:06:12.378404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.824 qpair failed and we were unable to recover it.
00:37:46.824 [2024-11-18 12:06:12.378546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.824 [2024-11-18 12:06:12.378580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.824 qpair failed and we were unable to recover it.
00:37:46.824 [2024-11-18 12:06:12.378695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.825 [2024-11-18 12:06:12.378730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.825 qpair failed and we were unable to recover it.
00:37:46.825 [2024-11-18 12:06:12.378888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.825 [2024-11-18 12:06:12.378923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.825 qpair failed and we were unable to recover it.
00:37:46.825 [2024-11-18 12:06:12.379042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.825 [2024-11-18 12:06:12.379077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.825 qpair failed and we were unable to recover it.
00:37:46.825 [2024-11-18 12:06:12.379185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.825 [2024-11-18 12:06:12.379219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.825 qpair failed and we were unable to recover it.
00:37:46.825 [2024-11-18 12:06:12.379380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.825 [2024-11-18 12:06:12.379415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.825 qpair failed and we were unable to recover it.
00:37:46.825 [2024-11-18 12:06:12.379582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.825 [2024-11-18 12:06:12.379616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.825 qpair failed and we were unable to recover it.
00:37:46.825 [2024-11-18 12:06:12.379725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.825 [2024-11-18 12:06:12.379759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.825 qpair failed and we were unable to recover it.
00:37:46.825 [2024-11-18 12:06:12.379889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.825 [2024-11-18 12:06:12.379923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.825 qpair failed and we were unable to recover it.
00:37:46.825 [2024-11-18 12:06:12.380040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.825 [2024-11-18 12:06:12.380088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.825 qpair failed and we were unable to recover it.
00:37:46.825 [2024-11-18 12:06:12.380224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.825 [2024-11-18 12:06:12.380272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.825 qpair failed and we were unable to recover it.
00:37:46.825 [2024-11-18 12:06:12.380412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.825 [2024-11-18 12:06:12.380449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.825 qpair failed and we were unable to recover it.
00:37:46.825 [2024-11-18 12:06:12.380589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.825 [2024-11-18 12:06:12.380624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.825 qpair failed and we were unable to recover it.
00:37:46.825 [2024-11-18 12:06:12.380758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.825 [2024-11-18 12:06:12.380792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.825 qpair failed and we were unable to recover it.
00:37:46.825 [2024-11-18 12:06:12.380894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.825 [2024-11-18 12:06:12.380927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.825 qpair failed and we were unable to recover it.
00:37:46.825 [2024-11-18 12:06:12.381066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.825 [2024-11-18 12:06:12.381100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.825 qpair failed and we were unable to recover it.
00:37:46.825 [2024-11-18 12:06:12.381224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.825 [2024-11-18 12:06:12.381262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.825 qpair failed and we were unable to recover it.
00:37:46.825 [2024-11-18 12:06:12.381397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.825 [2024-11-18 12:06:12.381433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.825 qpair failed and we were unable to recover it.
00:37:46.825 [2024-11-18 12:06:12.381591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.825 [2024-11-18 12:06:12.381639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.825 qpair failed and we were unable to recover it.
00:37:46.825 [2024-11-18 12:06:12.381786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.825 [2024-11-18 12:06:12.381821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.825 qpair failed and we were unable to recover it.
00:37:46.825 [2024-11-18 12:06:12.381926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.825 [2024-11-18 12:06:12.381960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.825 qpair failed and we were unable to recover it.
00:37:46.825 [2024-11-18 12:06:12.382098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.825 [2024-11-18 12:06:12.382131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.825 qpair failed and we were unable to recover it.
00:37:46.825 [2024-11-18 12:06:12.382269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.825 [2024-11-18 12:06:12.382303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.825 qpair failed and we were unable to recover it.
00:37:46.825 [2024-11-18 12:06:12.382406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.825 [2024-11-18 12:06:12.382440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.825 qpair failed and we were unable to recover it.
00:37:46.825 [2024-11-18 12:06:12.382562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.825 [2024-11-18 12:06:12.382598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.825 qpair failed and we were unable to recover it.
00:37:46.825 [2024-11-18 12:06:12.382727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.825 [2024-11-18 12:06:12.382761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.825 qpair failed and we were unable to recover it.
00:37:46.825 [2024-11-18 12:06:12.382909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.825 [2024-11-18 12:06:12.382943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.825 qpair failed and we were unable to recover it.
00:37:46.825 [2024-11-18 12:06:12.383073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.825 [2024-11-18 12:06:12.383107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.825 qpair failed and we were unable to recover it.
00:37:46.825 [2024-11-18 12:06:12.383215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.825 [2024-11-18 12:06:12.383253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.825 qpair failed and we were unable to recover it.
00:37:46.825 [2024-11-18 12:06:12.383373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.825 [2024-11-18 12:06:12.383427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.825 qpair failed and we were unable to recover it.
00:37:46.825 [2024-11-18 12:06:12.383554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.825 [2024-11-18 12:06:12.383590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.825 qpair failed and we were unable to recover it.
00:37:46.825 [2024-11-18 12:06:12.383731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.825 [2024-11-18 12:06:12.383764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.825 qpair failed and we were unable to recover it.
00:37:46.825 [2024-11-18 12:06:12.383911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.825 [2024-11-18 12:06:12.383945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.825 qpair failed and we were unable to recover it.
00:37:46.825 [2024-11-18 12:06:12.384101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.825 [2024-11-18 12:06:12.384134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.825 qpair failed and we were unable to recover it.
00:37:46.825 [2024-11-18 12:06:12.384268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.825 [2024-11-18 12:06:12.384302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.825 qpair failed and we were unable to recover it.
00:37:46.825 [2024-11-18 12:06:12.384429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.825 [2024-11-18 12:06:12.384476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.825 qpair failed and we were unable to recover it.
00:37:46.825 [2024-11-18 12:06:12.384644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.825 [2024-11-18 12:06:12.384693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.825 qpair failed and we were unable to recover it.
00:37:46.825 [2024-11-18 12:06:12.384839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.825 [2024-11-18 12:06:12.384876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.825 qpair failed and we were unable to recover it.
00:37:46.825 [2024-11-18 12:06:12.385016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.825 [2024-11-18 12:06:12.385050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.825 qpair failed and we were unable to recover it.
00:37:46.825 [2024-11-18 12:06:12.385156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.825 [2024-11-18 12:06:12.385190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.825 qpair failed and we were unable to recover it.
00:37:46.825 [2024-11-18 12:06:12.385330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.825 [2024-11-18 12:06:12.385364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.825 qpair failed and we were unable to recover it.
00:37:46.826 [2024-11-18 12:06:12.385484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.826 [2024-11-18 12:06:12.385541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.826 qpair failed and we were unable to recover it.
00:37:46.826 [2024-11-18 12:06:12.385688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.826 [2024-11-18 12:06:12.385726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.826 qpair failed and we were unable to recover it.
00:37:46.826 [2024-11-18 12:06:12.385843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.826 [2024-11-18 12:06:12.385878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.826 qpair failed and we were unable to recover it.
00:37:46.826 [2024-11-18 12:06:12.385986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.826 [2024-11-18 12:06:12.386020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.826 qpair failed and we were unable to recover it.
00:37:46.826 [2024-11-18 12:06:12.386180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.826 [2024-11-18 12:06:12.386214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.826 qpair failed and we were unable to recover it.
00:37:46.826 [2024-11-18 12:06:12.386335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.826 [2024-11-18 12:06:12.386383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.826 qpair failed and we were unable to recover it.
00:37:46.826 [2024-11-18 12:06:12.386507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.826 [2024-11-18 12:06:12.386543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.826 qpair failed and we were unable to recover it.
00:37:46.826 [2024-11-18 12:06:12.386696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.826 [2024-11-18 12:06:12.386734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.826 qpair failed and we were unable to recover it.
00:37:46.826 [2024-11-18 12:06:12.386902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.826 [2024-11-18 12:06:12.386937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.826 qpair failed and we were unable to recover it.
00:37:46.826 [2024-11-18 12:06:12.387045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.826 [2024-11-18 12:06:12.387079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.826 qpair failed and we were unable to recover it.
00:37:46.826 [2024-11-18 12:06:12.387220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.826 [2024-11-18 12:06:12.387254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.826 qpair failed and we were unable to recover it.
00:37:46.826 [2024-11-18 12:06:12.387355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.826 [2024-11-18 12:06:12.387389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.826 qpair failed and we were unable to recover it.
00:37:46.826 [2024-11-18 12:06:12.387528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.826 [2024-11-18 12:06:12.387562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.826 qpair failed and we were unable to recover it.
00:37:46.826 [2024-11-18 12:06:12.387676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.826 [2024-11-18 12:06:12.387710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.826 qpair failed and we were unable to recover it.
00:37:46.826 [2024-11-18 12:06:12.387872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.826 [2024-11-18 12:06:12.387906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.826 qpair failed and we were unable to recover it.
00:37:46.826 [2024-11-18 12:06:12.388012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.826 [2024-11-18 12:06:12.388046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.826 qpair failed and we were unable to recover it.
00:37:46.826 [2024-11-18 12:06:12.388183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.826 [2024-11-18 12:06:12.388216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.826 qpair failed and we were unable to recover it. 00:37:46.826 [2024-11-18 12:06:12.388349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.826 [2024-11-18 12:06:12.388384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.826 qpair failed and we were unable to recover it. 00:37:46.826 [2024-11-18 12:06:12.388542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.826 [2024-11-18 12:06:12.388590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.826 qpair failed and we were unable to recover it. 00:37:46.826 [2024-11-18 12:06:12.388749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.826 [2024-11-18 12:06:12.388798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.826 qpair failed and we were unable to recover it. 00:37:46.826 [2024-11-18 12:06:12.388917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.826 [2024-11-18 12:06:12.388953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.826 qpair failed and we were unable to recover it. 
00:37:46.826 [2024-11-18 12:06:12.389115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.826 [2024-11-18 12:06:12.389149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.826 qpair failed and we were unable to recover it. 00:37:46.826 [2024-11-18 12:06:12.389258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.826 [2024-11-18 12:06:12.389293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.826 qpair failed and we were unable to recover it. 00:37:46.826 [2024-11-18 12:06:12.389429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.826 [2024-11-18 12:06:12.389462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.826 qpair failed and we were unable to recover it. 00:37:46.826 [2024-11-18 12:06:12.389578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.826 [2024-11-18 12:06:12.389613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.826 qpair failed and we were unable to recover it. 00:37:46.826 [2024-11-18 12:06:12.389732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.826 [2024-11-18 12:06:12.389771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.826 qpair failed and we were unable to recover it. 
00:37:46.826 [2024-11-18 12:06:12.389886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.826 [2024-11-18 12:06:12.389924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.826 qpair failed and we were unable to recover it. 00:37:46.826 [2024-11-18 12:06:12.390035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.826 [2024-11-18 12:06:12.390072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.826 qpair failed and we were unable to recover it. 00:37:46.826 [2024-11-18 12:06:12.390237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.826 [2024-11-18 12:06:12.390278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.826 qpair failed and we were unable to recover it. 00:37:46.826 [2024-11-18 12:06:12.390448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.826 [2024-11-18 12:06:12.390498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.826 qpair failed and we were unable to recover it. 00:37:46.826 [2024-11-18 12:06:12.390606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.826 [2024-11-18 12:06:12.390653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.826 qpair failed and we were unable to recover it. 
00:37:46.826 [2024-11-18 12:06:12.390815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.826 [2024-11-18 12:06:12.390850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.826 qpair failed and we were unable to recover it. 00:37:46.826 [2024-11-18 12:06:12.390962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.826 [2024-11-18 12:06:12.390996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.826 qpair failed and we were unable to recover it. 00:37:46.826 [2024-11-18 12:06:12.391133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.826 [2024-11-18 12:06:12.391169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.826 qpair failed and we were unable to recover it. 00:37:46.826 [2024-11-18 12:06:12.391287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.826 [2024-11-18 12:06:12.391323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.826 qpair failed and we were unable to recover it. 00:37:46.826 [2024-11-18 12:06:12.391457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.826 [2024-11-18 12:06:12.391501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.826 qpair failed and we were unable to recover it. 
00:37:46.826 [2024-11-18 12:06:12.391637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.827 [2024-11-18 12:06:12.391671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.827 qpair failed and we were unable to recover it. 00:37:46.827 [2024-11-18 12:06:12.391801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.827 [2024-11-18 12:06:12.391834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.827 qpair failed and we were unable to recover it. 00:37:46.827 [2024-11-18 12:06:12.391966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.827 [2024-11-18 12:06:12.392000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.827 qpair failed and we were unable to recover it. 00:37:46.827 [2024-11-18 12:06:12.392163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.827 [2024-11-18 12:06:12.392198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.827 qpair failed and we were unable to recover it. 00:37:46.827 [2024-11-18 12:06:12.392336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.827 [2024-11-18 12:06:12.392377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.827 qpair failed and we were unable to recover it. 
00:37:46.827 [2024-11-18 12:06:12.392534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.827 [2024-11-18 12:06:12.392582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.827 qpair failed and we were unable to recover it. 00:37:46.827 [2024-11-18 12:06:12.392733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.827 [2024-11-18 12:06:12.392767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.827 qpair failed and we were unable to recover it. 00:37:46.827 [2024-11-18 12:06:12.392907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.827 [2024-11-18 12:06:12.392941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.827 qpair failed and we were unable to recover it. 00:37:46.827 [2024-11-18 12:06:12.393053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.827 [2024-11-18 12:06:12.393088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.827 qpair failed and we were unable to recover it. 00:37:46.827 [2024-11-18 12:06:12.393228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.827 [2024-11-18 12:06:12.393262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.827 qpair failed and we were unable to recover it. 
00:37:46.827 [2024-11-18 12:06:12.393369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.827 [2024-11-18 12:06:12.393405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.827 qpair failed and we were unable to recover it. 00:37:46.827 [2024-11-18 12:06:12.393520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.827 [2024-11-18 12:06:12.393555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.827 qpair failed and we were unable to recover it. 00:37:46.827 [2024-11-18 12:06:12.393717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.827 [2024-11-18 12:06:12.393751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.827 qpair failed and we were unable to recover it. 00:37:46.827 [2024-11-18 12:06:12.393856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.827 [2024-11-18 12:06:12.393890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.827 qpair failed and we were unable to recover it. 00:37:46.827 [2024-11-18 12:06:12.394025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.827 [2024-11-18 12:06:12.394059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.827 qpair failed and we were unable to recover it. 
00:37:46.827 [2024-11-18 12:06:12.394221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.827 [2024-11-18 12:06:12.394255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.827 qpair failed and we were unable to recover it. 00:37:46.827 [2024-11-18 12:06:12.394406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.827 [2024-11-18 12:06:12.394455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.827 qpair failed and we were unable to recover it. 00:37:46.827 [2024-11-18 12:06:12.394587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.827 [2024-11-18 12:06:12.394635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.827 qpair failed and we were unable to recover it. 00:37:46.827 [2024-11-18 12:06:12.394784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.827 [2024-11-18 12:06:12.394833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.827 qpair failed and we were unable to recover it. 00:37:46.827 [2024-11-18 12:06:12.394944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.827 [2024-11-18 12:06:12.394979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.827 qpair failed and we were unable to recover it. 
00:37:46.827 [2024-11-18 12:06:12.395109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.827 [2024-11-18 12:06:12.395143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.827 qpair failed and we were unable to recover it. 00:37:46.827 [2024-11-18 12:06:12.395279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.827 [2024-11-18 12:06:12.395313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.827 qpair failed and we were unable to recover it. 00:37:46.827 [2024-11-18 12:06:12.395420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.827 [2024-11-18 12:06:12.395456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.827 qpair failed and we were unable to recover it. 00:37:46.827 [2024-11-18 12:06:12.395606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.827 [2024-11-18 12:06:12.395654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.827 qpair failed and we were unable to recover it. 00:37:46.827 [2024-11-18 12:06:12.395796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.827 [2024-11-18 12:06:12.395831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.827 qpair failed and we were unable to recover it. 
00:37:46.827 [2024-11-18 12:06:12.395972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.827 [2024-11-18 12:06:12.396006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.827 qpair failed and we were unable to recover it. 00:37:46.827 [2024-11-18 12:06:12.396119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.827 [2024-11-18 12:06:12.396153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.827 qpair failed and we were unable to recover it. 00:37:46.827 [2024-11-18 12:06:12.396284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.827 [2024-11-18 12:06:12.396318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.827 qpair failed and we were unable to recover it. 00:37:46.827 [2024-11-18 12:06:12.396478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.827 [2024-11-18 12:06:12.396523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.827 qpair failed and we were unable to recover it. 00:37:46.827 [2024-11-18 12:06:12.396643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.827 [2024-11-18 12:06:12.396679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.827 qpair failed and we were unable to recover it. 
00:37:46.827 [2024-11-18 12:06:12.396796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.827 [2024-11-18 12:06:12.396830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.827 qpair failed and we were unable to recover it. 00:37:46.827 [2024-11-18 12:06:12.396962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.827 [2024-11-18 12:06:12.396997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.827 qpair failed and we were unable to recover it. 00:37:46.827 [2024-11-18 12:06:12.397105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.827 [2024-11-18 12:06:12.397145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.827 qpair failed and we were unable to recover it. 00:37:46.827 [2024-11-18 12:06:12.397309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.827 [2024-11-18 12:06:12.397343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.827 qpair failed and we were unable to recover it. 00:37:46.827 [2024-11-18 12:06:12.397459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.827 [2024-11-18 12:06:12.397516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.827 qpair failed and we were unable to recover it. 
00:37:46.827 [2024-11-18 12:06:12.397653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.827 [2024-11-18 12:06:12.397701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.827 qpair failed and we were unable to recover it. 00:37:46.827 [2024-11-18 12:06:12.397833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.827 [2024-11-18 12:06:12.397880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.827 qpair failed and we were unable to recover it. 00:37:46.827 [2024-11-18 12:06:12.397987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.827 [2024-11-18 12:06:12.398022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.827 qpair failed and we were unable to recover it. 00:37:46.827 [2024-11-18 12:06:12.398158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.827 [2024-11-18 12:06:12.398192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.827 qpair failed and we were unable to recover it. 00:37:46.827 [2024-11-18 12:06:12.398323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.827 [2024-11-18 12:06:12.398356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.827 qpair failed and we were unable to recover it. 
00:37:46.827 [2024-11-18 12:06:12.398467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.827 [2024-11-18 12:06:12.398509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.827 qpair failed and we were unable to recover it. 00:37:46.827 [2024-11-18 12:06:12.398631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.828 [2024-11-18 12:06:12.398669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.828 qpair failed and we were unable to recover it. 00:37:46.828 [2024-11-18 12:06:12.398778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.828 [2024-11-18 12:06:12.398814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.828 qpair failed and we were unable to recover it. 00:37:46.828 [2024-11-18 12:06:12.398975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.828 [2024-11-18 12:06:12.399009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.828 qpair failed and we were unable to recover it. 00:37:46.828 [2024-11-18 12:06:12.399116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.828 [2024-11-18 12:06:12.399150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.828 qpair failed and we were unable to recover it. 
00:37:46.828 [2024-11-18 12:06:12.399249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.828 [2024-11-18 12:06:12.399283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.828 qpair failed and we were unable to recover it. 00:37:46.828 [2024-11-18 12:06:12.399419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.828 [2024-11-18 12:06:12.399453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.828 qpair failed and we were unable to recover it. 00:37:46.828 [2024-11-18 12:06:12.399569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.828 [2024-11-18 12:06:12.399604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.828 qpair failed and we were unable to recover it. 00:37:46.828 [2024-11-18 12:06:12.399721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.828 [2024-11-18 12:06:12.399759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.828 qpair failed and we were unable to recover it. 00:37:46.828 [2024-11-18 12:06:12.399898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.828 [2024-11-18 12:06:12.399934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.828 qpair failed and we were unable to recover it. 
00:37:46.828 [2024-11-18 12:06:12.400048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.828 [2024-11-18 12:06:12.400083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.828 qpair failed and we were unable to recover it. 00:37:46.828 [2024-11-18 12:06:12.400191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.828 [2024-11-18 12:06:12.400225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.828 qpair failed and we were unable to recover it. 00:37:46.828 [2024-11-18 12:06:12.400359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.828 [2024-11-18 12:06:12.400393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.828 qpair failed and we were unable to recover it. 00:37:46.828 [2024-11-18 12:06:12.400538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.828 [2024-11-18 12:06:12.400573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.828 qpair failed and we were unable to recover it. 00:37:46.828 [2024-11-18 12:06:12.400683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.828 [2024-11-18 12:06:12.400717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.828 qpair failed and we were unable to recover it. 
00:37:46.828 [2024-11-18 12:06:12.400825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.828 [2024-11-18 12:06:12.400859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.828 qpair failed and we were unable to recover it. 00:37:46.828 [2024-11-18 12:06:12.400996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.828 [2024-11-18 12:06:12.401031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.828 qpair failed and we were unable to recover it. 00:37:46.828 [2024-11-18 12:06:12.401142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.828 [2024-11-18 12:06:12.401178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.828 qpair failed and we were unable to recover it. 00:37:46.828 [2024-11-18 12:06:12.401343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.828 [2024-11-18 12:06:12.401378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.828 qpair failed and we were unable to recover it. 00:37:46.828 [2024-11-18 12:06:12.401529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.828 [2024-11-18 12:06:12.401576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.828 qpair failed and we were unable to recover it. 
00:37:46.828 [2024-11-18 12:06:12.401699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.828 [2024-11-18 12:06:12.401735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.828 qpair failed and we were unable to recover it. 00:37:46.828 [2024-11-18 12:06:12.401871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.828 [2024-11-18 12:06:12.401918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.828 qpair failed and we were unable to recover it. 00:37:46.828 [2024-11-18 12:06:12.402084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.828 [2024-11-18 12:06:12.402119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.828 qpair failed and we were unable to recover it. 00:37:46.828 [2024-11-18 12:06:12.402224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.828 [2024-11-18 12:06:12.402258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.828 qpair failed and we were unable to recover it. 00:37:46.828 [2024-11-18 12:06:12.402364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.828 [2024-11-18 12:06:12.402399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.828 qpair failed and we were unable to recover it. 
00:37:46.828 [2024-11-18 12:06:12.402546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.828 [2024-11-18 12:06:12.402582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.828 qpair failed and we were unable to recover it. 00:37:46.828 [2024-11-18 12:06:12.402710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.828 [2024-11-18 12:06:12.402759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.828 qpair failed and we were unable to recover it. 00:37:46.828 [2024-11-18 12:06:12.402928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.828 [2024-11-18 12:06:12.402963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.828 qpair failed and we were unable to recover it. 00:37:46.828 [2024-11-18 12:06:12.403074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.828 [2024-11-18 12:06:12.403108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.828 qpair failed and we were unable to recover it. 00:37:46.828 [2024-11-18 12:06:12.403247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.828 [2024-11-18 12:06:12.403282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.828 qpair failed and we were unable to recover it. 
00:37:46.828 [… identical error triplet (connect() failed, errno = 111 → nvme_tcp_qpair_connect_sock sock connection error → "qpair failed and we were unable to recover it.") repeats for tqpairs 0x6150001f2f00, 0x615000210000, 0x61500021ff00, and 0x6150001ffe80, all targeting addr=10.0.0.2, port=4420, from 12:06:12.403419 through 12:06:12.422520 …]
00:37:46.831 [2024-11-18 12:06:12.422671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.831 [2024-11-18 12:06:12.422719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.831 qpair failed and we were unable to recover it. 00:37:46.831 [2024-11-18 12:06:12.422833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.831 [2024-11-18 12:06:12.422869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.831 qpair failed and we were unable to recover it. 00:37:46.831 [2024-11-18 12:06:12.423031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.831 [2024-11-18 12:06:12.423065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.831 qpair failed and we were unable to recover it. 00:37:46.831 [2024-11-18 12:06:12.423206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.831 [2024-11-18 12:06:12.423240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.831 qpair failed and we were unable to recover it. 00:37:46.831 [2024-11-18 12:06:12.423383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.831 [2024-11-18 12:06:12.423421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.831 qpair failed and we were unable to recover it. 
00:37:46.831 [2024-11-18 12:06:12.423561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.831 [2024-11-18 12:06:12.423597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.831 qpair failed and we were unable to recover it. 00:37:46.831 [2024-11-18 12:06:12.423748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.831 [2024-11-18 12:06:12.423796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.831 qpair failed and we were unable to recover it. 00:37:46.831 [2024-11-18 12:06:12.423908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.831 [2024-11-18 12:06:12.423943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.831 qpair failed and we were unable to recover it. 00:37:46.831 [2024-11-18 12:06:12.424080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.831 [2024-11-18 12:06:12.424115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.831 qpair failed and we were unable to recover it. 00:37:46.831 [2024-11-18 12:06:12.424253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.831 [2024-11-18 12:06:12.424288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.831 qpair failed and we were unable to recover it. 
00:37:46.831 [2024-11-18 12:06:12.424396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.831 [2024-11-18 12:06:12.424430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.831 qpair failed and we were unable to recover it. 00:37:46.831 [2024-11-18 12:06:12.424549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.831 [2024-11-18 12:06:12.424585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.831 qpair failed and we were unable to recover it. 00:37:46.831 [2024-11-18 12:06:12.424735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.831 [2024-11-18 12:06:12.424770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.831 qpair failed and we were unable to recover it. 00:37:46.831 [2024-11-18 12:06:12.424918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.831 [2024-11-18 12:06:12.424952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.831 qpair failed and we were unable to recover it. 00:37:46.831 [2024-11-18 12:06:12.425056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.831 [2024-11-18 12:06:12.425090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.831 qpair failed and we were unable to recover it. 
00:37:46.831 [2024-11-18 12:06:12.425219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.831 [2024-11-18 12:06:12.425253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.831 qpair failed and we were unable to recover it. 00:37:46.831 [2024-11-18 12:06:12.425361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.831 [2024-11-18 12:06:12.425395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.831 qpair failed and we were unable to recover it. 00:37:46.831 [2024-11-18 12:06:12.425510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.831 [2024-11-18 12:06:12.425544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.831 qpair failed and we were unable to recover it. 00:37:46.831 [2024-11-18 12:06:12.425690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.831 [2024-11-18 12:06:12.425726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.831 qpair failed and we were unable to recover it. 00:37:46.831 [2024-11-18 12:06:12.425842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.832 [2024-11-18 12:06:12.425877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.832 qpair failed and we were unable to recover it. 
00:37:46.832 [2024-11-18 12:06:12.426009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.832 [2024-11-18 12:06:12.426044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.832 qpair failed and we were unable to recover it. 00:37:46.832 [2024-11-18 12:06:12.426185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.832 [2024-11-18 12:06:12.426218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.832 qpair failed and we were unable to recover it. 00:37:46.832 [2024-11-18 12:06:12.426334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.832 [2024-11-18 12:06:12.426373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.832 qpair failed and we were unable to recover it. 00:37:46.832 [2024-11-18 12:06:12.426488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.832 [2024-11-18 12:06:12.426530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.832 qpair failed and we were unable to recover it. 00:37:46.832 [2024-11-18 12:06:12.426665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.832 [2024-11-18 12:06:12.426700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.832 qpair failed and we were unable to recover it. 
00:37:46.832 [2024-11-18 12:06:12.426815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.832 [2024-11-18 12:06:12.426849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.832 qpair failed and we were unable to recover it. 00:37:46.832 [2024-11-18 12:06:12.426989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.832 [2024-11-18 12:06:12.427023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.832 qpair failed and we were unable to recover it. 00:37:46.832 [2024-11-18 12:06:12.427130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.832 [2024-11-18 12:06:12.427164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.832 qpair failed and we were unable to recover it. 00:37:46.832 [2024-11-18 12:06:12.427279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.832 [2024-11-18 12:06:12.427314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.832 qpair failed and we were unable to recover it. 00:37:46.832 [2024-11-18 12:06:12.427481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.832 [2024-11-18 12:06:12.427524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.832 qpair failed and we were unable to recover it. 
00:37:46.832 [2024-11-18 12:06:12.427670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.832 [2024-11-18 12:06:12.427705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.832 qpair failed and we were unable to recover it. 00:37:46.832 [2024-11-18 12:06:12.427870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.832 [2024-11-18 12:06:12.427904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.832 qpair failed and we were unable to recover it. 00:37:46.832 [2024-11-18 12:06:12.428014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.832 [2024-11-18 12:06:12.428048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.832 qpair failed and we were unable to recover it. 00:37:46.832 [2024-11-18 12:06:12.428184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.832 [2024-11-18 12:06:12.428218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.832 qpair failed and we were unable to recover it. 00:37:46.832 [2024-11-18 12:06:12.428346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.832 [2024-11-18 12:06:12.428381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.832 qpair failed and we were unable to recover it. 
00:37:46.832 [2024-11-18 12:06:12.428512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.832 [2024-11-18 12:06:12.428560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.832 qpair failed and we were unable to recover it. 00:37:46.832 [2024-11-18 12:06:12.428702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.832 [2024-11-18 12:06:12.428739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.832 qpair failed and we were unable to recover it. 00:37:46.832 [2024-11-18 12:06:12.428851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.832 [2024-11-18 12:06:12.428885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.832 qpair failed and we were unable to recover it. 00:37:46.832 [2024-11-18 12:06:12.429051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.832 [2024-11-18 12:06:12.429086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.832 qpair failed and we were unable to recover it. 00:37:46.832 [2024-11-18 12:06:12.429220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.832 [2024-11-18 12:06:12.429261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.832 qpair failed and we were unable to recover it. 
00:37:46.832 [2024-11-18 12:06:12.429400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.832 [2024-11-18 12:06:12.429434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.832 qpair failed and we were unable to recover it. 00:37:46.832 [2024-11-18 12:06:12.429602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.832 [2024-11-18 12:06:12.429650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.832 qpair failed and we were unable to recover it. 00:37:46.832 [2024-11-18 12:06:12.429799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.832 [2024-11-18 12:06:12.429836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.832 qpair failed and we were unable to recover it. 00:37:46.832 [2024-11-18 12:06:12.429955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.832 [2024-11-18 12:06:12.429989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.832 qpair failed and we were unable to recover it. 00:37:46.832 [2024-11-18 12:06:12.430102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.832 [2024-11-18 12:06:12.430137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.832 qpair failed and we were unable to recover it. 
00:37:46.832 [2024-11-18 12:06:12.430247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.832 [2024-11-18 12:06:12.430281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.832 qpair failed and we were unable to recover it. 00:37:46.832 [2024-11-18 12:06:12.430410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.832 [2024-11-18 12:06:12.430444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.832 qpair failed and we were unable to recover it. 00:37:46.832 [2024-11-18 12:06:12.430590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.832 [2024-11-18 12:06:12.430626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.832 qpair failed and we were unable to recover it. 00:37:46.832 [2024-11-18 12:06:12.430743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.832 [2024-11-18 12:06:12.430780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.832 qpair failed and we were unable to recover it. 00:37:46.832 [2024-11-18 12:06:12.430926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.832 [2024-11-18 12:06:12.430960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.832 qpair failed and we were unable to recover it. 
00:37:46.832 [2024-11-18 12:06:12.431057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.832 [2024-11-18 12:06:12.431091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.832 qpair failed and we were unable to recover it. 00:37:46.832 [2024-11-18 12:06:12.431274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.832 [2024-11-18 12:06:12.431309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.832 qpair failed and we were unable to recover it. 00:37:46.832 [2024-11-18 12:06:12.431434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.832 [2024-11-18 12:06:12.431482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.832 qpair failed and we were unable to recover it. 00:37:46.832 [2024-11-18 12:06:12.431515] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:37:46.832 [2024-11-18 12:06:12.431642] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:46.832 [2024-11-18 12:06:12.431663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.832 [2024-11-18 12:06:12.431710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.832 qpair failed and we were unable to recover it. 
00:37:46.832 [2024-11-18 12:06:12.431855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.832 [2024-11-18 12:06:12.431890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.832 qpair failed and we were unable to recover it. 00:37:46.832 [2024-11-18 12:06:12.432003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.832 [2024-11-18 12:06:12.432036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.832 qpair failed and we were unable to recover it. 00:37:46.832 [2024-11-18 12:06:12.432145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.832 [2024-11-18 12:06:12.432179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.832 qpair failed and we were unable to recover it. 00:37:46.832 [2024-11-18 12:06:12.432309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.832 [2024-11-18 12:06:12.432343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.832 qpair failed and we were unable to recover it. 00:37:46.832 [2024-11-18 12:06:12.432469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.832 [2024-11-18 12:06:12.432562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.832 qpair failed and we were unable to recover it. 
00:37:46.833 [2024-11-18 12:06:12.432707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.833 [2024-11-18 12:06:12.432744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.833 qpair failed and we were unable to recover it. 00:37:46.833 [2024-11-18 12:06:12.432914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.833 [2024-11-18 12:06:12.432948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.833 qpair failed and we were unable to recover it. 00:37:46.833 [2024-11-18 12:06:12.433086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.833 [2024-11-18 12:06:12.433120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.833 qpair failed and we were unable to recover it. 00:37:46.833 [2024-11-18 12:06:12.433256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.833 [2024-11-18 12:06:12.433290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.833 qpair failed and we were unable to recover it. 00:37:46.833 [2024-11-18 12:06:12.433450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.833 [2024-11-18 12:06:12.433506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.833 qpair failed and we were unable to recover it. 
00:37:46.833 [2024-11-18 12:06:12.433650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.833 [2024-11-18 12:06:12.433686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.833 qpair failed and we were unable to recover it. 00:37:46.833 [2024-11-18 12:06:12.433838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.833 [2024-11-18 12:06:12.433886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.833 qpair failed and we were unable to recover it. 00:37:46.833 [2024-11-18 12:06:12.434002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.833 [2024-11-18 12:06:12.434039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.833 qpair failed and we were unable to recover it. 00:37:46.833 [2024-11-18 12:06:12.434143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.833 [2024-11-18 12:06:12.434177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.833 qpair failed and we were unable to recover it. 00:37:46.833 [2024-11-18 12:06:12.434285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.833 [2024-11-18 12:06:12.434321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.833 qpair failed and we were unable to recover it. 
00:37:46.833 [2024-11-18 12:06:12.434486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.833 [2024-11-18 12:06:12.434527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.833 qpair failed and we were unable to recover it. 00:37:46.833 [2024-11-18 12:06:12.434682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.833 [2024-11-18 12:06:12.434730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.833 qpair failed and we were unable to recover it. 00:37:46.833 [2024-11-18 12:06:12.434850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.833 [2024-11-18 12:06:12.434886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.833 qpair failed and we were unable to recover it. 00:37:46.833 [2024-11-18 12:06:12.435001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.833 [2024-11-18 12:06:12.435035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.833 qpair failed and we were unable to recover it. 00:37:46.833 [2024-11-18 12:06:12.435152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.833 [2024-11-18 12:06:12.435187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.833 qpair failed and we were unable to recover it. 
00:37:46.833 [2024-11-18 12:06:12.435343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.833 [2024-11-18 12:06:12.435392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.833 qpair failed and we were unable to recover it.
[... the record above repeats ~114 more times between 12:06:12.435 and 12:06:12.454 (log timestamps 00:37:46.833-00:37:46.836), alternating tqpair handles 0x6150001ffe80, 0x615000210000, 0x61500021ff00 and 0x6150001f2f00, all with connect() errno = 111 against addr=10.0.0.2, port=4420 ...]
00:37:46.836 [2024-11-18 12:06:12.454648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.836 [2024-11-18 12:06:12.454695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.836 qpair failed and we were unable to recover it. 00:37:46.836 [2024-11-18 12:06:12.454848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.836 [2024-11-18 12:06:12.454883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.836 qpair failed and we were unable to recover it. 00:37:46.836 [2024-11-18 12:06:12.455015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.836 [2024-11-18 12:06:12.455056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.836 qpair failed and we were unable to recover it. 00:37:46.836 [2024-11-18 12:06:12.455190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.836 [2024-11-18 12:06:12.455224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.836 qpair failed and we were unable to recover it. 00:37:46.836 [2024-11-18 12:06:12.455329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.836 [2024-11-18 12:06:12.455363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.836 qpair failed and we were unable to recover it. 
00:37:46.836 [2024-11-18 12:06:12.455506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.836 [2024-11-18 12:06:12.455541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.836 qpair failed and we were unable to recover it. 00:37:46.836 [2024-11-18 12:06:12.455659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.836 [2024-11-18 12:06:12.455694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.836 qpair failed and we were unable to recover it. 00:37:46.836 [2024-11-18 12:06:12.455837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.836 [2024-11-18 12:06:12.455874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.836 qpair failed and we were unable to recover it. 00:37:46.836 [2024-11-18 12:06:12.456044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.836 [2024-11-18 12:06:12.456092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.836 qpair failed and we were unable to recover it. 00:37:46.836 [2024-11-18 12:06:12.456261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.836 [2024-11-18 12:06:12.456296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.836 qpair failed and we were unable to recover it. 
00:37:46.836 [2024-11-18 12:06:12.456399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.836 [2024-11-18 12:06:12.456433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.836 qpair failed and we were unable to recover it. 00:37:46.836 [2024-11-18 12:06:12.456565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.836 [2024-11-18 12:06:12.456599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.836 qpair failed and we were unable to recover it. 00:37:46.836 [2024-11-18 12:06:12.456758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.836 [2024-11-18 12:06:12.456791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.836 qpair failed and we were unable to recover it. 00:37:46.836 [2024-11-18 12:06:12.456923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.836 [2024-11-18 12:06:12.456956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.836 qpair failed and we were unable to recover it. 00:37:46.836 [2024-11-18 12:06:12.457052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.836 [2024-11-18 12:06:12.457086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.836 qpair failed and we were unable to recover it. 
00:37:46.836 [2024-11-18 12:06:12.457195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.836 [2024-11-18 12:06:12.457233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.836 qpair failed and we were unable to recover it. 00:37:46.837 [2024-11-18 12:06:12.457344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.837 [2024-11-18 12:06:12.457379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.837 qpair failed and we were unable to recover it. 00:37:46.837 [2024-11-18 12:06:12.457508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.837 [2024-11-18 12:06:12.457543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.837 qpair failed and we were unable to recover it. 00:37:46.837 [2024-11-18 12:06:12.457702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.837 [2024-11-18 12:06:12.457735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.837 qpair failed and we were unable to recover it. 00:37:46.837 [2024-11-18 12:06:12.457865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.837 [2024-11-18 12:06:12.457898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.837 qpair failed and we were unable to recover it. 
00:37:46.837 [2024-11-18 12:06:12.458034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.837 [2024-11-18 12:06:12.458068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.837 qpair failed and we were unable to recover it. 00:37:46.837 [2024-11-18 12:06:12.458183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.837 [2024-11-18 12:06:12.458218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.837 qpair failed and we were unable to recover it. 00:37:46.837 [2024-11-18 12:06:12.458347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.837 [2024-11-18 12:06:12.458386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.837 qpair failed and we were unable to recover it. 00:37:46.837 [2024-11-18 12:06:12.458536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.837 [2024-11-18 12:06:12.458571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.837 qpair failed and we were unable to recover it. 00:37:46.837 [2024-11-18 12:06:12.458681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.837 [2024-11-18 12:06:12.458715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.837 qpair failed and we were unable to recover it. 
00:37:46.837 [2024-11-18 12:06:12.458875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.837 [2024-11-18 12:06:12.458909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.837 qpair failed and we were unable to recover it. 00:37:46.837 [2024-11-18 12:06:12.459042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.837 [2024-11-18 12:06:12.459075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.837 qpair failed and we were unable to recover it. 00:37:46.837 [2024-11-18 12:06:12.459205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.837 [2024-11-18 12:06:12.459240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.837 qpair failed and we were unable to recover it. 00:37:46.837 [2024-11-18 12:06:12.459361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.837 [2024-11-18 12:06:12.459410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.837 qpair failed and we were unable to recover it. 00:37:46.837 [2024-11-18 12:06:12.459581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.837 [2024-11-18 12:06:12.459630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.837 qpair failed and we were unable to recover it. 
00:37:46.837 [2024-11-18 12:06:12.459798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.837 [2024-11-18 12:06:12.459832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.837 qpair failed and we were unable to recover it. 00:37:46.837 [2024-11-18 12:06:12.459939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.837 [2024-11-18 12:06:12.459972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.837 qpair failed and we were unable to recover it. 00:37:46.837 [2024-11-18 12:06:12.460084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.837 [2024-11-18 12:06:12.460118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.837 qpair failed and we were unable to recover it. 00:37:46.837 [2024-11-18 12:06:12.460253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.837 [2024-11-18 12:06:12.460286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.837 qpair failed and we were unable to recover it. 00:37:46.837 [2024-11-18 12:06:12.460386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.837 [2024-11-18 12:06:12.460419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.837 qpair failed and we were unable to recover it. 
00:37:46.837 [2024-11-18 12:06:12.460562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.837 [2024-11-18 12:06:12.460596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.837 qpair failed and we were unable to recover it. 00:37:46.837 [2024-11-18 12:06:12.460710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.837 [2024-11-18 12:06:12.460744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.837 qpair failed and we were unable to recover it. 00:37:46.837 [2024-11-18 12:06:12.460879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.837 [2024-11-18 12:06:12.460913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.837 qpair failed and we were unable to recover it. 00:37:46.837 [2024-11-18 12:06:12.461052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.837 [2024-11-18 12:06:12.461087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.837 qpair failed and we were unable to recover it. 00:37:46.837 [2024-11-18 12:06:12.461221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.837 [2024-11-18 12:06:12.461255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.837 qpair failed and we were unable to recover it. 
00:37:46.837 [2024-11-18 12:06:12.461380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.837 [2024-11-18 12:06:12.461414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.837 qpair failed and we were unable to recover it. 00:37:46.837 [2024-11-18 12:06:12.461543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.837 [2024-11-18 12:06:12.461591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.837 qpair failed and we were unable to recover it. 00:37:46.837 [2024-11-18 12:06:12.461736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.837 [2024-11-18 12:06:12.461774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.837 qpair failed and we were unable to recover it. 00:37:46.837 [2024-11-18 12:06:12.461889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.837 [2024-11-18 12:06:12.461923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.837 qpair failed and we were unable to recover it. 00:37:46.837 [2024-11-18 12:06:12.462058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.837 [2024-11-18 12:06:12.462092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.837 qpair failed and we were unable to recover it. 
00:37:46.837 [2024-11-18 12:06:12.462227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.837 [2024-11-18 12:06:12.462260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.837 qpair failed and we were unable to recover it. 00:37:46.837 [2024-11-18 12:06:12.462380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.837 [2024-11-18 12:06:12.462441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.837 qpair failed and we were unable to recover it. 00:37:46.837 [2024-11-18 12:06:12.462592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.837 [2024-11-18 12:06:12.462628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.837 qpair failed and we were unable to recover it. 00:37:46.837 [2024-11-18 12:06:12.462745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.837 [2024-11-18 12:06:12.462794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.837 qpair failed and we were unable to recover it. 00:37:46.837 [2024-11-18 12:06:12.462973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.837 [2024-11-18 12:06:12.463008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.837 qpair failed and we were unable to recover it. 
00:37:46.837 [2024-11-18 12:06:12.463174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.837 [2024-11-18 12:06:12.463208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.837 qpair failed and we were unable to recover it. 00:37:46.837 [2024-11-18 12:06:12.463348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.838 [2024-11-18 12:06:12.463382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.838 qpair failed and we were unable to recover it. 00:37:46.838 [2024-11-18 12:06:12.463523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.838 [2024-11-18 12:06:12.463558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.838 qpair failed and we were unable to recover it. 00:37:46.838 [2024-11-18 12:06:12.463686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.838 [2024-11-18 12:06:12.463720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.838 qpair failed and we were unable to recover it. 00:37:46.838 [2024-11-18 12:06:12.463877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.838 [2024-11-18 12:06:12.463911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.838 qpair failed and we were unable to recover it. 
00:37:46.838 [2024-11-18 12:06:12.464045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.838 [2024-11-18 12:06:12.464079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.838 qpair failed and we were unable to recover it. 00:37:46.838 [2024-11-18 12:06:12.464226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.838 [2024-11-18 12:06:12.464274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.838 qpair failed and we were unable to recover it. 00:37:46.838 [2024-11-18 12:06:12.464417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.838 [2024-11-18 12:06:12.464452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.838 qpair failed and we were unable to recover it. 00:37:46.838 [2024-11-18 12:06:12.464584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.838 [2024-11-18 12:06:12.464632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.838 qpair failed and we were unable to recover it. 00:37:46.838 [2024-11-18 12:06:12.464756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.838 [2024-11-18 12:06:12.464793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.838 qpair failed and we were unable to recover it. 
00:37:46.838 [2024-11-18 12:06:12.464947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.838 [2024-11-18 12:06:12.464981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.838 qpair failed and we were unable to recover it. 00:37:46.838 [2024-11-18 12:06:12.465118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.838 [2024-11-18 12:06:12.465152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.838 qpair failed and we were unable to recover it. 00:37:46.838 [2024-11-18 12:06:12.465287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.838 [2024-11-18 12:06:12.465327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.838 qpair failed and we were unable to recover it. 00:37:46.838 [2024-11-18 12:06:12.465457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.838 [2024-11-18 12:06:12.465519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.838 qpair failed and we were unable to recover it. 00:37:46.838 [2024-11-18 12:06:12.465656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.838 [2024-11-18 12:06:12.465703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.838 qpair failed and we were unable to recover it. 
00:37:46.838 [2024-11-18 12:06:12.465849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.838 [2024-11-18 12:06:12.465885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.838 qpair failed and we were unable to recover it. 00:37:46.838 [2024-11-18 12:06:12.465996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.838 [2024-11-18 12:06:12.466030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.838 qpair failed and we were unable to recover it. 00:37:46.838 [2024-11-18 12:06:12.466161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.838 [2024-11-18 12:06:12.466195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.838 qpair failed and we were unable to recover it. 00:37:46.838 [2024-11-18 12:06:12.466302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.838 [2024-11-18 12:06:12.466336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.838 qpair failed and we were unable to recover it. 00:37:46.838 [2024-11-18 12:06:12.466500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.838 [2024-11-18 12:06:12.466535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.838 qpair failed and we were unable to recover it. 
00:37:46.838 [2024-11-18 12:06:12.466659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.838 [2024-11-18 12:06:12.466693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.838 qpair failed and we were unable to recover it. 00:37:46.838 [2024-11-18 12:06:12.466814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.838 [2024-11-18 12:06:12.466852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.838 qpair failed and we were unable to recover it. 00:37:46.838 [2024-11-18 12:06:12.466983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.838 [2024-11-18 12:06:12.467017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.838 qpair failed and we were unable to recover it. 00:37:46.838 [2024-11-18 12:06:12.467154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.838 [2024-11-18 12:06:12.467188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.838 qpair failed and we were unable to recover it. 00:37:46.838 [2024-11-18 12:06:12.467288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.838 [2024-11-18 12:06:12.467322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.838 qpair failed and we were unable to recover it. 
00:37:46.838 [2024-11-18 12:06:12.467481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.838 [2024-11-18 12:06:12.467538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.838 qpair failed and we were unable to recover it. 
00:37:46.841 [... the error triplet above (connect() failed, errno = 111; sock connection error with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats from 12:06:12.467 through 12:06:12.487 for tqpairs 0x61500021ff00, 0x6150001ffe80, 0x6150001f2f00, and 0x615000210000 ...]
00:37:46.841 [2024-11-18 12:06:12.487722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.841 [2024-11-18 12:06:12.487761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.841 qpair failed and we were unable to recover it. 00:37:46.841 [2024-11-18 12:06:12.487903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.841 [2024-11-18 12:06:12.487939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.841 qpair failed and we were unable to recover it. 00:37:46.841 [2024-11-18 12:06:12.488052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.841 [2024-11-18 12:06:12.488085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.841 qpair failed and we were unable to recover it. 00:37:46.841 [2024-11-18 12:06:12.488202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.841 [2024-11-18 12:06:12.488238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.841 qpair failed and we were unable to recover it. 00:37:46.841 [2024-11-18 12:06:12.488353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.841 [2024-11-18 12:06:12.488388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.841 qpair failed and we were unable to recover it. 
00:37:46.841 [2024-11-18 12:06:12.488549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.841 [2024-11-18 12:06:12.488584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.841 qpair failed and we were unable to recover it. 00:37:46.841 [2024-11-18 12:06:12.488699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.841 [2024-11-18 12:06:12.488734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.841 qpair failed and we were unable to recover it. 00:37:46.841 [2024-11-18 12:06:12.488879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.841 [2024-11-18 12:06:12.488915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.841 qpair failed and we were unable to recover it. 00:37:46.841 [2024-11-18 12:06:12.489057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.841 [2024-11-18 12:06:12.489092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.841 qpair failed and we were unable to recover it. 00:37:46.841 [2024-11-18 12:06:12.489216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.841 [2024-11-18 12:06:12.489263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.841 qpair failed and we were unable to recover it. 
00:37:46.841 [2024-11-18 12:06:12.489404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.841 [2024-11-18 12:06:12.489439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.841 qpair failed and we were unable to recover it. 00:37:46.841 [2024-11-18 12:06:12.489549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.841 [2024-11-18 12:06:12.489584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.841 qpair failed and we were unable to recover it. 00:37:46.841 [2024-11-18 12:06:12.489695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.841 [2024-11-18 12:06:12.489729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.841 qpair failed and we were unable to recover it. 00:37:46.841 [2024-11-18 12:06:12.489862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.841 [2024-11-18 12:06:12.489896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.842 qpair failed and we were unable to recover it. 00:37:46.842 [2024-11-18 12:06:12.490004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.842 [2024-11-18 12:06:12.490038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.842 qpair failed and we were unable to recover it. 
00:37:46.842 [2024-11-18 12:06:12.490179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.842 [2024-11-18 12:06:12.490215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.842 qpair failed and we were unable to recover it. 00:37:46.842 [2024-11-18 12:06:12.490342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.842 [2024-11-18 12:06:12.490390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.842 qpair failed and we were unable to recover it. 00:37:46.842 [2024-11-18 12:06:12.490577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.842 [2024-11-18 12:06:12.490625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.842 qpair failed and we were unable to recover it. 00:37:46.842 [2024-11-18 12:06:12.490748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.842 [2024-11-18 12:06:12.490782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.842 qpair failed and we were unable to recover it. 00:37:46.842 [2024-11-18 12:06:12.490902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.842 [2024-11-18 12:06:12.490942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.842 qpair failed and we were unable to recover it. 
00:37:46.842 [2024-11-18 12:06:12.491114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.842 [2024-11-18 12:06:12.491149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.842 qpair failed and we were unable to recover it. 00:37:46.842 [2024-11-18 12:06:12.491283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.842 [2024-11-18 12:06:12.491318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.842 qpair failed and we were unable to recover it. 00:37:46.842 [2024-11-18 12:06:12.491432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.842 [2024-11-18 12:06:12.491468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.842 qpair failed and we were unable to recover it. 00:37:46.842 [2024-11-18 12:06:12.491580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.842 [2024-11-18 12:06:12.491614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.842 qpair failed and we were unable to recover it. 00:37:46.842 [2024-11-18 12:06:12.491724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.842 [2024-11-18 12:06:12.491758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.842 qpair failed and we were unable to recover it. 
00:37:46.842 [2024-11-18 12:06:12.491874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.842 [2024-11-18 12:06:12.491908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.842 qpair failed and we were unable to recover it. 00:37:46.842 [2024-11-18 12:06:12.492026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.842 [2024-11-18 12:06:12.492074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.842 qpair failed and we were unable to recover it. 00:37:46.842 [2024-11-18 12:06:12.492221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.842 [2024-11-18 12:06:12.492257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.842 qpair failed and we were unable to recover it. 00:37:46.842 [2024-11-18 12:06:12.492370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.842 [2024-11-18 12:06:12.492405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.842 qpair failed and we were unable to recover it. 00:37:46.842 [2024-11-18 12:06:12.492538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.842 [2024-11-18 12:06:12.492573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.842 qpair failed and we were unable to recover it. 
00:37:46.842 [2024-11-18 12:06:12.492709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.842 [2024-11-18 12:06:12.492743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.842 qpair failed and we were unable to recover it. 00:37:46.842 [2024-11-18 12:06:12.492877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.842 [2024-11-18 12:06:12.492911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.842 qpair failed and we were unable to recover it. 00:37:46.842 [2024-11-18 12:06:12.493052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.842 [2024-11-18 12:06:12.493088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.842 qpair failed and we were unable to recover it. 00:37:46.842 [2024-11-18 12:06:12.493210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.842 [2024-11-18 12:06:12.493244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.842 qpair failed and we were unable to recover it. 00:37:46.842 [2024-11-18 12:06:12.493352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.842 [2024-11-18 12:06:12.493387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.842 qpair failed and we were unable to recover it. 
00:37:46.842 [2024-11-18 12:06:12.493498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.842 [2024-11-18 12:06:12.493533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.842 qpair failed and we were unable to recover it. 00:37:46.842 [2024-11-18 12:06:12.493667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.842 [2024-11-18 12:06:12.493700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.842 qpair failed and we were unable to recover it. 00:37:46.842 [2024-11-18 12:06:12.493801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.842 [2024-11-18 12:06:12.493835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.842 qpair failed and we were unable to recover it. 00:37:46.842 [2024-11-18 12:06:12.493976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.842 [2024-11-18 12:06:12.494011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.842 qpair failed and we were unable to recover it. 00:37:46.842 [2024-11-18 12:06:12.494144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.842 [2024-11-18 12:06:12.494178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.842 qpair failed and we were unable to recover it. 
00:37:46.842 [2024-11-18 12:06:12.494319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.842 [2024-11-18 12:06:12.494356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.842 qpair failed and we were unable to recover it. 00:37:46.842 [2024-11-18 12:06:12.494463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.842 [2024-11-18 12:06:12.494512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.842 qpair failed and we were unable to recover it. 00:37:46.842 [2024-11-18 12:06:12.494682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.842 [2024-11-18 12:06:12.494730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.842 qpair failed and we were unable to recover it. 00:37:46.842 [2024-11-18 12:06:12.494861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.842 [2024-11-18 12:06:12.494897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.842 qpair failed and we were unable to recover it. 00:37:46.842 [2024-11-18 12:06:12.495038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.842 [2024-11-18 12:06:12.495084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.842 qpair failed and we were unable to recover it. 
00:37:46.842 [2024-11-18 12:06:12.495195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.842 [2024-11-18 12:06:12.495229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.842 qpair failed and we were unable to recover it. 00:37:46.842 [2024-11-18 12:06:12.495376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.842 [2024-11-18 12:06:12.495412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.842 qpair failed and we were unable to recover it. 00:37:46.842 [2024-11-18 12:06:12.495553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.842 [2024-11-18 12:06:12.495589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.842 qpair failed and we were unable to recover it. 00:37:46.842 [2024-11-18 12:06:12.495698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.842 [2024-11-18 12:06:12.495732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.842 qpair failed and we were unable to recover it. 00:37:46.842 [2024-11-18 12:06:12.495900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.842 [2024-11-18 12:06:12.495935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.842 qpair failed and we were unable to recover it. 
00:37:46.842 [2024-11-18 12:06:12.496041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.842 [2024-11-18 12:06:12.496076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.842 qpair failed and we were unable to recover it. 00:37:46.842 [2024-11-18 12:06:12.496184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.842 [2024-11-18 12:06:12.496219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.842 qpair failed and we were unable to recover it. 00:37:46.842 [2024-11-18 12:06:12.496382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.842 [2024-11-18 12:06:12.496416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.842 qpair failed and we were unable to recover it. 00:37:46.842 [2024-11-18 12:06:12.496548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.842 [2024-11-18 12:06:12.496583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.842 qpair failed and we were unable to recover it. 00:37:46.842 [2024-11-18 12:06:12.496719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.843 [2024-11-18 12:06:12.496753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.843 qpair failed and we were unable to recover it. 
00:37:46.843 [2024-11-18 12:06:12.496885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.843 [2024-11-18 12:06:12.496920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.843 qpair failed and we were unable to recover it. 00:37:46.843 [2024-11-18 12:06:12.497036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.843 [2024-11-18 12:06:12.497070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.843 qpair failed and we were unable to recover it. 00:37:46.843 [2024-11-18 12:06:12.497181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.843 [2024-11-18 12:06:12.497218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.843 qpair failed and we were unable to recover it. 00:37:46.843 [2024-11-18 12:06:12.497357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.843 [2024-11-18 12:06:12.497391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.843 qpair failed and we were unable to recover it. 00:37:46.843 [2024-11-18 12:06:12.497530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.843 [2024-11-18 12:06:12.497573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.843 qpair failed and we were unable to recover it. 
00:37:46.843 [2024-11-18 12:06:12.497707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.843 [2024-11-18 12:06:12.497742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.843 qpair failed and we were unable to recover it. 00:37:46.843 [2024-11-18 12:06:12.497874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.843 [2024-11-18 12:06:12.497909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.843 qpair failed and we were unable to recover it. 00:37:46.843 [2024-11-18 12:06:12.498042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.843 [2024-11-18 12:06:12.498077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.843 qpair failed and we were unable to recover it. 00:37:46.843 [2024-11-18 12:06:12.498213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.843 [2024-11-18 12:06:12.498249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.843 qpair failed and we were unable to recover it. 00:37:46.843 [2024-11-18 12:06:12.498377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.843 [2024-11-18 12:06:12.498425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.843 qpair failed and we were unable to recover it. 
00:37:46.843 [2024-11-18 12:06:12.498561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.843 [2024-11-18 12:06:12.498609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.843 qpair failed and we were unable to recover it. 00:37:46.843 [2024-11-18 12:06:12.498756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.843 [2024-11-18 12:06:12.498793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.843 qpair failed and we were unable to recover it. 00:37:46.843 [2024-11-18 12:06:12.498935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.843 [2024-11-18 12:06:12.498969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.843 qpair failed and we were unable to recover it. 00:37:46.843 [2024-11-18 12:06:12.499106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.843 [2024-11-18 12:06:12.499141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.843 qpair failed and we were unable to recover it. 00:37:46.843 [2024-11-18 12:06:12.499280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.843 [2024-11-18 12:06:12.499314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.843 qpair failed and we were unable to recover it. 
00:37:46.843 [2024-11-18 12:06:12.499427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.843 [2024-11-18 12:06:12.499469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.843 qpair failed and we were unable to recover it. 00:37:46.843 [2024-11-18 12:06:12.499624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.843 [2024-11-18 12:06:12.499672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.843 qpair failed and we were unable to recover it. 00:37:46.843 [2024-11-18 12:06:12.499813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.843 [2024-11-18 12:06:12.499851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.843 qpair failed and we were unable to recover it. 00:37:46.843 [2024-11-18 12:06:12.499966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.843 [2024-11-18 12:06:12.500001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.843 qpair failed and we were unable to recover it. 00:37:46.843 [2024-11-18 12:06:12.500136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.843 [2024-11-18 12:06:12.500171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.843 qpair failed and we were unable to recover it. 
00:37:46.843 [2024-11-18 12:06:12.500296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.843 [2024-11-18 12:06:12.500330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.843 qpair failed and we were unable to recover it. 00:37:46.843 [2024-11-18 12:06:12.500461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.843 [2024-11-18 12:06:12.500501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.843 qpair failed and we were unable to recover it. 00:37:46.843 [2024-11-18 12:06:12.500610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.843 [2024-11-18 12:06:12.500645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.843 qpair failed and we were unable to recover it. 00:37:46.843 [2024-11-18 12:06:12.500803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.843 [2024-11-18 12:06:12.500840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.843 qpair failed and we were unable to recover it. 00:37:46.843 [2024-11-18 12:06:12.500977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.843 [2024-11-18 12:06:12.501011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.843 qpair failed and we were unable to recover it. 
00:37:46.843 [2024-11-18 12:06:12.501156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.843 [2024-11-18 12:06:12.501191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.843 qpair failed and we were unable to recover it. 00:37:46.843 [2024-11-18 12:06:12.501330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.843 [2024-11-18 12:06:12.501363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.843 qpair failed and we were unable to recover it. 00:37:46.843 [2024-11-18 12:06:12.501475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.843 [2024-11-18 12:06:12.501517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.843 qpair failed and we were unable to recover it. 00:37:46.843 [2024-11-18 12:06:12.501653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.843 [2024-11-18 12:06:12.501695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.843 qpair failed and we were unable to recover it. 00:37:46.843 [2024-11-18 12:06:12.501865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.843 [2024-11-18 12:06:12.501901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.843 qpair failed and we were unable to recover it. 
00:37:46.843 [2024-11-18 12:06:12.502034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.843 [2024-11-18 12:06:12.502068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.843 qpair failed and we were unable to recover it. 00:37:46.843 [2024-11-18 12:06:12.502200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.843 [2024-11-18 12:06:12.502235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.843 qpair failed and we were unable to recover it. 00:37:46.843 [2024-11-18 12:06:12.502370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.843 [2024-11-18 12:06:12.502405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.843 qpair failed and we were unable to recover it. 00:37:46.843 [2024-11-18 12:06:12.502590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.843 [2024-11-18 12:06:12.502638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.843 qpair failed and we were unable to recover it. 00:37:46.843 [2024-11-18 12:06:12.502762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.844 [2024-11-18 12:06:12.502817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.844 qpair failed and we were unable to recover it. 
00:37:46.844 [2024-11-18 12:06:12.502934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.844 [2024-11-18 12:06:12.502968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.844 qpair failed and we were unable to recover it. 00:37:46.844 [2024-11-18 12:06:12.503106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.844 [2024-11-18 12:06:12.503140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.844 qpair failed and we were unable to recover it. 00:37:46.844 [2024-11-18 12:06:12.503275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.844 [2024-11-18 12:06:12.503309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.844 qpair failed and we were unable to recover it. 00:37:46.844 [2024-11-18 12:06:12.503440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.844 [2024-11-18 12:06:12.503474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.844 qpair failed and we were unable to recover it. 00:37:46.844 [2024-11-18 12:06:12.503610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.844 [2024-11-18 12:06:12.503659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.844 qpair failed and we were unable to recover it. 
00:37:46.844 [2024-11-18 12:06:12.503808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.844 [2024-11-18 12:06:12.503844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.844 qpair failed and we were unable to recover it. 00:37:46.844 [2024-11-18 12:06:12.504015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.844 [2024-11-18 12:06:12.504053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.844 qpair failed and we were unable to recover it. 00:37:46.844 [2024-11-18 12:06:12.504189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.844 [2024-11-18 12:06:12.504223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.844 qpair failed and we were unable to recover it. 00:37:46.844 [2024-11-18 12:06:12.504361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.844 [2024-11-18 12:06:12.504399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.844 qpair failed and we were unable to recover it. 00:37:46.844 [2024-11-18 12:06:12.504539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.844 [2024-11-18 12:06:12.504580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.844 qpair failed and we were unable to recover it. 
00:37:46.844 [2024-11-18 12:06:12.504717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.844 [2024-11-18 12:06:12.504752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.844 qpair failed and we were unable to recover it. 00:37:46.844 [2024-11-18 12:06:12.504888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.844 [2024-11-18 12:06:12.504922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.844 qpair failed and we were unable to recover it. 00:37:46.844 [2024-11-18 12:06:12.505084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.844 [2024-11-18 12:06:12.505119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.844 qpair failed and we were unable to recover it. 00:37:46.844 [2024-11-18 12:06:12.505256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.844 [2024-11-18 12:06:12.505289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.844 qpair failed and we were unable to recover it. 00:37:46.844 [2024-11-18 12:06:12.505425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.844 [2024-11-18 12:06:12.505473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.844 qpair failed and we were unable to recover it. 
00:37:46.844 [2024-11-18 12:06:12.505618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.844 [2024-11-18 12:06:12.505666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.844 qpair failed and we were unable to recover it. 00:37:46.844 [2024-11-18 12:06:12.505788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.844 [2024-11-18 12:06:12.505824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.844 qpair failed and we were unable to recover it. 00:37:46.844 [2024-11-18 12:06:12.505984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.844 [2024-11-18 12:06:12.506017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.844 qpair failed and we were unable to recover it. 00:37:46.844 [2024-11-18 12:06:12.506152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.844 [2024-11-18 12:06:12.506185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.844 qpair failed and we were unable to recover it. 00:37:46.844 [2024-11-18 12:06:12.506289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.844 [2024-11-18 12:06:12.506322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.844 qpair failed and we were unable to recover it. 
00:37:46.844 [2024-11-18 12:06:12.506485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.844 [2024-11-18 12:06:12.506524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.844 qpair failed and we were unable to recover it. 00:37:46.844 [2024-11-18 12:06:12.506632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.844 [2024-11-18 12:06:12.506669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.844 qpair failed and we were unable to recover it. 00:37:46.844 [2024-11-18 12:06:12.506781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.844 [2024-11-18 12:06:12.506828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.844 qpair failed and we were unable to recover it. 00:37:46.844 [2024-11-18 12:06:12.506952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.844 [2024-11-18 12:06:12.506988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.844 qpair failed and we were unable to recover it. 00:37:46.844 [2024-11-18 12:06:12.507126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.844 [2024-11-18 12:06:12.507160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.844 qpair failed and we were unable to recover it. 
00:37:46.844 [2024-11-18 12:06:12.507269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.844 [2024-11-18 12:06:12.507304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.844 qpair failed and we were unable to recover it. 00:37:46.844 [2024-11-18 12:06:12.507445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.844 [2024-11-18 12:06:12.507479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.844 qpair failed and we were unable to recover it. 00:37:46.844 [2024-11-18 12:06:12.507613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.844 [2024-11-18 12:06:12.507647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.844 qpair failed and we were unable to recover it. 00:37:46.844 [2024-11-18 12:06:12.507784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.844 [2024-11-18 12:06:12.507818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.844 qpair failed and we were unable to recover it. 00:37:46.844 [2024-11-18 12:06:12.507922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.844 [2024-11-18 12:06:12.507956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.844 qpair failed and we were unable to recover it. 
00:37:46.844 [2024-11-18 12:06:12.508060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.844 [2024-11-18 12:06:12.508093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.844 qpair failed and we were unable to recover it. 00:37:46.844 [2024-11-18 12:06:12.508270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.844 [2024-11-18 12:06:12.508318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.844 qpair failed and we were unable to recover it. 00:37:46.844 [2024-11-18 12:06:12.508458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.844 [2024-11-18 12:06:12.508503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.844 qpair failed and we were unable to recover it. 00:37:46.844 [2024-11-18 12:06:12.508636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.844 [2024-11-18 12:06:12.508671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.844 qpair failed and we were unable to recover it. 00:37:46.844 [2024-11-18 12:06:12.508802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.844 [2024-11-18 12:06:12.508835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.844 qpair failed and we were unable to recover it. 
00:37:46.844 [2024-11-18 12:06:12.508998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.844 [2024-11-18 12:06:12.509032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.844 qpair failed and we were unable to recover it. 00:37:46.844 [2024-11-18 12:06:12.509177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.844 [2024-11-18 12:06:12.509213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.844 qpair failed and we were unable to recover it. 00:37:46.844 [2024-11-18 12:06:12.509316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.844 [2024-11-18 12:06:12.509350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.844 qpair failed and we were unable to recover it. 00:37:46.844 [2024-11-18 12:06:12.509513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.844 [2024-11-18 12:06:12.509547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.844 qpair failed and we were unable to recover it. 00:37:46.844 [2024-11-18 12:06:12.509687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.844 [2024-11-18 12:06:12.509721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.844 qpair failed and we were unable to recover it. 
00:37:46.845 [2024-11-18 12:06:12.509826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.845 [2024-11-18 12:06:12.509860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.845 qpair failed and we were unable to recover it. 00:37:46.845 [2024-11-18 12:06:12.509971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.845 [2024-11-18 12:06:12.510004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.845 qpair failed and we were unable to recover it. 00:37:46.845 [2024-11-18 12:06:12.510171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.845 [2024-11-18 12:06:12.510206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.845 qpair failed and we were unable to recover it. 00:37:46.845 [2024-11-18 12:06:12.510343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.845 [2024-11-18 12:06:12.510389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.845 qpair failed and we were unable to recover it. 00:37:46.845 [2024-11-18 12:06:12.510525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.845 [2024-11-18 12:06:12.510559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.845 qpair failed and we were unable to recover it. 
00:37:46.845 [2024-11-18 12:06:12.510672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.845 [2024-11-18 12:06:12.510705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.845 qpair failed and we were unable to recover it. 00:37:46.845 [2024-11-18 12:06:12.510870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.845 [2024-11-18 12:06:12.510918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.845 qpair failed and we were unable to recover it. 00:37:46.845 [2024-11-18 12:06:12.511072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.845 [2024-11-18 12:06:12.511120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.845 qpair failed and we were unable to recover it. 00:37:46.845 [2024-11-18 12:06:12.511274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.845 [2024-11-18 12:06:12.511310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.845 qpair failed and we were unable to recover it. 00:37:46.845 [2024-11-18 12:06:12.511422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.845 [2024-11-18 12:06:12.511463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.845 qpair failed and we were unable to recover it. 
00:37:46.845 [2024-11-18 12:06:12.511622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.845 [2024-11-18 12:06:12.511659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.845 qpair failed and we were unable to recover it. 00:37:46.845 [2024-11-18 12:06:12.511797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.845 [2024-11-18 12:06:12.511832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.845 qpair failed and we were unable to recover it. 00:37:46.845 [2024-11-18 12:06:12.511968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.845 [2024-11-18 12:06:12.512003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.845 qpair failed and we were unable to recover it. 00:37:46.845 [2024-11-18 12:06:12.512145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.845 [2024-11-18 12:06:12.512178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.845 qpair failed and we were unable to recover it. 00:37:46.845 [2024-11-18 12:06:12.512299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.845 [2024-11-18 12:06:12.512347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.845 qpair failed and we were unable to recover it. 
00:37:46.845 [2024-11-18 12:06:12.512517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.845 [2024-11-18 12:06:12.512553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.845 qpair failed and we were unable to recover it. 00:37:46.845 [2024-11-18 12:06:12.512660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.845 [2024-11-18 12:06:12.512693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.845 qpair failed and we were unable to recover it. 00:37:46.845 [2024-11-18 12:06:12.512830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.845 [2024-11-18 12:06:12.512864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.845 qpair failed and we were unable to recover it. 00:37:46.845 [2024-11-18 12:06:12.512997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.845 [2024-11-18 12:06:12.513030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.845 qpair failed and we were unable to recover it. 00:37:46.845 [2024-11-18 12:06:12.513140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.845 [2024-11-18 12:06:12.513176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.845 qpair failed and we were unable to recover it. 
00:37:46.845 [2024-11-18 12:06:12.513290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.845 [2024-11-18 12:06:12.513325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.845 qpair failed and we were unable to recover it. 00:37:46.845 [2024-11-18 12:06:12.513447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.845 [2024-11-18 12:06:12.513484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.845 qpair failed and we were unable to recover it. 00:37:46.845 [2024-11-18 12:06:12.513599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.845 [2024-11-18 12:06:12.513634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.845 qpair failed and we were unable to recover it. 00:37:46.845 [2024-11-18 12:06:12.513784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.845 [2024-11-18 12:06:12.513818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.845 qpair failed and we were unable to recover it. 00:37:46.845 [2024-11-18 12:06:12.513936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.845 [2024-11-18 12:06:12.513971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.845 qpair failed and we were unable to recover it. 
00:37:46.845 [2024-11-18 12:06:12.514105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.845 [2024-11-18 12:06:12.514139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.845 qpair failed and we were unable to recover it. 00:37:46.845 [2024-11-18 12:06:12.514300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.845 [2024-11-18 12:06:12.514335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.845 qpair failed and we were unable to recover it. 00:37:46.845 [2024-11-18 12:06:12.514458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.845 [2024-11-18 12:06:12.514500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.845 qpair failed and we were unable to recover it. 00:37:46.845 [2024-11-18 12:06:12.514642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.845 [2024-11-18 12:06:12.514677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.845 qpair failed and we were unable to recover it. 00:37:46.845 [2024-11-18 12:06:12.514823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.845 [2024-11-18 12:06:12.514856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.845 qpair failed and we were unable to recover it. 
00:37:46.845 [2024-11-18 12:06:12.514968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.845 [2024-11-18 12:06:12.515003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.845 qpair failed and we were unable to recover it. 00:37:46.845 [2024-11-18 12:06:12.515112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.845 [2024-11-18 12:06:12.515146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.845 qpair failed and we were unable to recover it. 00:37:46.845 [2024-11-18 12:06:12.515257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.845 [2024-11-18 12:06:12.515292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.845 qpair failed and we were unable to recover it. 00:37:46.845 [2024-11-18 12:06:12.515446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.845 [2024-11-18 12:06:12.515505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.845 qpair failed and we were unable to recover it. 00:37:46.845 [2024-11-18 12:06:12.515659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.845 [2024-11-18 12:06:12.515705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.845 qpair failed and we were unable to recover it. 
00:37:46.845 [2024-11-18 12:06:12.515852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.845 [2024-11-18 12:06:12.515887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.845 qpair failed and we were unable to recover it. 00:37:46.845 [2024-11-18 12:06:12.516006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.845 [2024-11-18 12:06:12.516040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.845 qpair failed and we were unable to recover it. 00:37:46.845 [2024-11-18 12:06:12.516171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.845 [2024-11-18 12:06:12.516205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.845 qpair failed and we were unable to recover it. 00:37:46.845 [2024-11-18 12:06:12.516316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.845 [2024-11-18 12:06:12.516352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.845 qpair failed and we were unable to recover it. 00:37:46.845 [2024-11-18 12:06:12.516497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.845 [2024-11-18 12:06:12.516531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.846 qpair failed and we were unable to recover it. 
00:37:46.846 [2024-11-18 12:06:12.516634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.846 [2024-11-18 12:06:12.516667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.846 qpair failed and we were unable to recover it. 00:37:46.846 [2024-11-18 12:06:12.516763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.846 [2024-11-18 12:06:12.516803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.846 qpair failed and we were unable to recover it. 00:37:46.846 [2024-11-18 12:06:12.516937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.846 [2024-11-18 12:06:12.516971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.846 qpair failed and we were unable to recover it. 00:37:46.846 [2024-11-18 12:06:12.517109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.846 [2024-11-18 12:06:12.517149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.846 qpair failed and we were unable to recover it. 00:37:46.846 [2024-11-18 12:06:12.517267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.846 [2024-11-18 12:06:12.517303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.846 qpair failed and we were unable to recover it. 
00:37:46.846 [2024-11-18 12:06:12.517477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.846 [2024-11-18 12:06:12.517534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.846 qpair failed and we were unable to recover it. 00:37:46.846 [2024-11-18 12:06:12.517673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.846 [2024-11-18 12:06:12.517709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.846 qpair failed and we were unable to recover it. 00:37:46.846 [2024-11-18 12:06:12.517874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.846 [2024-11-18 12:06:12.517909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.846 qpair failed and we were unable to recover it. 00:37:46.846 [2024-11-18 12:06:12.518012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.846 [2024-11-18 12:06:12.518047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.846 qpair failed and we were unable to recover it. 00:37:46.846 [2024-11-18 12:06:12.518217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.846 [2024-11-18 12:06:12.518257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.846 qpair failed and we were unable to recover it. 
00:37:46.846 [2024-11-18 12:06:12.518374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.846 [2024-11-18 12:06:12.518407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.846 qpair failed and we were unable to recover it. 00:37:46.846 [2024-11-18 12:06:12.518528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.846 [2024-11-18 12:06:12.518564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.846 qpair failed and we were unable to recover it. 00:37:46.846 [2024-11-18 12:06:12.518674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.846 [2024-11-18 12:06:12.518709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.846 qpair failed and we were unable to recover it. 00:37:46.846 [2024-11-18 12:06:12.518859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.846 [2024-11-18 12:06:12.518893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.846 qpair failed and we were unable to recover it. 00:37:46.846 [2024-11-18 12:06:12.519034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.846 [2024-11-18 12:06:12.519068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.846 qpair failed and we were unable to recover it. 
00:37:46.846 [2024-11-18 12:06:12.519198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.846 [2024-11-18 12:06:12.519232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.846 qpair failed and we were unable to recover it. 00:37:46.846 [2024-11-18 12:06:12.519372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.846 [2024-11-18 12:06:12.519405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.846 qpair failed and we were unable to recover it. 00:37:46.846 [2024-11-18 12:06:12.519540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.846 [2024-11-18 12:06:12.519574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.846 qpair failed and we were unable to recover it. 00:37:46.846 [2024-11-18 12:06:12.519702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.846 [2024-11-18 12:06:12.519736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.846 qpair failed and we were unable to recover it. 00:37:46.846 [2024-11-18 12:06:12.519898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.846 [2024-11-18 12:06:12.519931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.846 qpair failed and we were unable to recover it. 
00:37:46.846 [2024-11-18 12:06:12.520033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.846 [2024-11-18 12:06:12.520067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.846 qpair failed and we were unable to recover it. 00:37:46.846 [2024-11-18 12:06:12.520202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.846 [2024-11-18 12:06:12.520236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.846 qpair failed and we were unable to recover it. 00:37:46.846 [2024-11-18 12:06:12.520372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.846 [2024-11-18 12:06:12.520405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.846 qpair failed and we were unable to recover it. 00:37:46.846 [2024-11-18 12:06:12.520545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.846 [2024-11-18 12:06:12.520594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.846 qpair failed and we were unable to recover it. 00:37:46.846 [2024-11-18 12:06:12.520730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.846 [2024-11-18 12:06:12.520779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.846 qpair failed and we were unable to recover it. 
00:37:46.846 [2024-11-18 12:06:12.520896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.846 [2024-11-18 12:06:12.520932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.846 qpair failed and we were unable to recover it. 00:37:46.846 [2024-11-18 12:06:12.521076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.846 [2024-11-18 12:06:12.521111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.846 qpair failed and we were unable to recover it. 00:37:46.846 [2024-11-18 12:06:12.521250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.846 [2024-11-18 12:06:12.521284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.846 qpair failed and we were unable to recover it. 00:37:46.846 [2024-11-18 12:06:12.521442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.846 [2024-11-18 12:06:12.521476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.846 qpair failed and we were unable to recover it. 00:37:46.846 [2024-11-18 12:06:12.521601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.846 [2024-11-18 12:06:12.521636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.846 qpair failed and we were unable to recover it. 
00:37:46.846 [2024-11-18 12:06:12.521778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.846 [2024-11-18 12:06:12.521816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.846 qpair failed and we were unable to recover it. 00:37:46.846 [2024-11-18 12:06:12.521949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.846 [2024-11-18 12:06:12.521996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.846 qpair failed and we were unable to recover it. 00:37:46.846 [2024-11-18 12:06:12.522140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.846 [2024-11-18 12:06:12.522176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.846 qpair failed and we were unable to recover it. 00:37:46.846 [2024-11-18 12:06:12.522284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.846 [2024-11-18 12:06:12.522319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.846 qpair failed and we were unable to recover it. 00:37:46.846 [2024-11-18 12:06:12.522426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.846 [2024-11-18 12:06:12.522460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.846 qpair failed and we were unable to recover it. 
00:37:46.846 [2024-11-18 12:06:12.522605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.846 [2024-11-18 12:06:12.522642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.846 qpair failed and we were unable to recover it. 00:37:46.846 [2024-11-18 12:06:12.522782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.846 [2024-11-18 12:06:12.522817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.846 qpair failed and we were unable to recover it. 00:37:46.846 [2024-11-18 12:06:12.522963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.846 [2024-11-18 12:06:12.523000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.846 qpair failed and we were unable to recover it. 00:37:46.846 [2024-11-18 12:06:12.523152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.846 [2024-11-18 12:06:12.523186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.846 qpair failed and we were unable to recover it. 00:37:46.846 [2024-11-18 12:06:12.523321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.846 [2024-11-18 12:06:12.523355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.846 qpair failed and we were unable to recover it. 
00:37:46.846 [2024-11-18 12:06:12.523486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.847 [2024-11-18 12:06:12.523530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.847 qpair failed and we were unable to recover it. 00:37:46.847 [2024-11-18 12:06:12.523645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.847 [2024-11-18 12:06:12.523678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.847 qpair failed and we were unable to recover it. 00:37:46.847 [2024-11-18 12:06:12.523800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.847 [2024-11-18 12:06:12.523848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.847 qpair failed and we were unable to recover it. 00:37:46.847 [2024-11-18 12:06:12.523963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.847 [2024-11-18 12:06:12.523998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.847 qpair failed and we were unable to recover it. 00:37:46.847 [2024-11-18 12:06:12.524130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.847 [2024-11-18 12:06:12.524164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.847 qpair failed and we were unable to recover it. 
00:37:46.847 [2024-11-18 12:06:12.524328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.847 [2024-11-18 12:06:12.524361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.847 qpair failed and we were unable to recover it. 00:37:46.847 [2024-11-18 12:06:12.524469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.847 [2024-11-18 12:06:12.524511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.847 qpair failed and we were unable to recover it. 00:37:46.847 [2024-11-18 12:06:12.524630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.847 [2024-11-18 12:06:12.524664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.847 qpair failed and we were unable to recover it. 00:37:46.847 [2024-11-18 12:06:12.524813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.847 [2024-11-18 12:06:12.524848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.847 qpair failed and we were unable to recover it. 00:37:46.847 [2024-11-18 12:06:12.524964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.847 [2024-11-18 12:06:12.525007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.847 qpair failed and we were unable to recover it. 
00:37:46.847 [2024-11-18 12:06:12.525179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.847 [2024-11-18 12:06:12.525214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.847 qpair failed and we were unable to recover it. 00:37:46.847 [2024-11-18 12:06:12.525319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.847 [2024-11-18 12:06:12.525353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.847 qpair failed and we were unable to recover it. 00:37:46.847 [2024-11-18 12:06:12.525508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.847 [2024-11-18 12:06:12.525556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.847 qpair failed and we were unable to recover it. 00:37:46.847 [2024-11-18 12:06:12.525677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.847 [2024-11-18 12:06:12.525713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.847 qpair failed and we were unable to recover it. 00:37:46.847 [2024-11-18 12:06:12.525821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.847 [2024-11-18 12:06:12.525856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.847 qpair failed and we were unable to recover it. 
00:37:46.847 [2024-11-18 12:06:12.525997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.847 [2024-11-18 12:06:12.526032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.847 qpair failed and we were unable to recover it. 00:37:46.847 [2024-11-18 12:06:12.526192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.847 [2024-11-18 12:06:12.526226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.847 qpair failed and we were unable to recover it. 00:37:46.847 [2024-11-18 12:06:12.526329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.847 [2024-11-18 12:06:12.526364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.847 qpair failed and we were unable to recover it. 00:37:46.847 [2024-11-18 12:06:12.526516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.847 [2024-11-18 12:06:12.526560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.847 qpair failed and we were unable to recover it. 00:37:46.847 [2024-11-18 12:06:12.526675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.847 [2024-11-18 12:06:12.526708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.847 qpair failed and we were unable to recover it. 
00:37:46.847 [2024-11-18 12:06:12.526852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.847 [2024-11-18 12:06:12.526885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.847 qpair failed and we were unable to recover it. 00:37:46.847 [2024-11-18 12:06:12.527006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.847 [2024-11-18 12:06:12.527041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.847 qpair failed and we were unable to recover it. 00:37:46.847 [2024-11-18 12:06:12.527151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.847 [2024-11-18 12:06:12.527185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.847 qpair failed and we were unable to recover it. 00:37:46.847 [2024-11-18 12:06:12.527319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.847 [2024-11-18 12:06:12.527353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.847 qpair failed and we were unable to recover it. 00:37:46.847 [2024-11-18 12:06:12.527483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.847 [2024-11-18 12:06:12.527539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.847 qpair failed and we were unable to recover it. 
00:37:46.847 [2024-11-18 12:06:12.527682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.847 [2024-11-18 12:06:12.527718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.847 qpair failed and we were unable to recover it. 00:37:46.847 [2024-11-18 12:06:12.527823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.847 [2024-11-18 12:06:12.527864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.847 qpair failed and we were unable to recover it. 00:37:46.847 [2024-11-18 12:06:12.527974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.847 [2024-11-18 12:06:12.528010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.847 qpair failed and we were unable to recover it. 00:37:46.847 [2024-11-18 12:06:12.528164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.847 [2024-11-18 12:06:12.528212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.847 qpair failed and we were unable to recover it. 00:37:46.847 [2024-11-18 12:06:12.528368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.847 [2024-11-18 12:06:12.528416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.847 qpair failed and we were unable to recover it. 
00:37:46.847 [2024-11-18 12:06:12.528548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.847 [2024-11-18 12:06:12.528584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.847 qpair failed and we were unable to recover it. 00:37:46.847 [2024-11-18 12:06:12.528695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.847 [2024-11-18 12:06:12.528729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.847 qpair failed and we were unable to recover it. 00:37:46.847 [2024-11-18 12:06:12.528863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.847 [2024-11-18 12:06:12.528896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.847 qpair failed and we were unable to recover it. 00:37:46.847 [2024-11-18 12:06:12.529047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.847 [2024-11-18 12:06:12.529081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.847 qpair failed and we were unable to recover it. 00:37:46.847 [2024-11-18 12:06:12.529182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.847 [2024-11-18 12:06:12.529216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.847 qpair failed and we were unable to recover it. 
00:37:46.847 [2024-11-18 12:06:12.529367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.847 [2024-11-18 12:06:12.529415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.847 qpair failed and we were unable to recover it. 00:37:46.847 [2024-11-18 12:06:12.529557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.847 [2024-11-18 12:06:12.529604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.847 qpair failed and we were unable to recover it. 00:37:46.847 [2024-11-18 12:06:12.529716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.848 [2024-11-18 12:06:12.529752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.848 qpair failed and we were unable to recover it. 00:37:46.848 [2024-11-18 12:06:12.529867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.848 [2024-11-18 12:06:12.529901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.848 qpair failed and we were unable to recover it. 00:37:46.848 [2024-11-18 12:06:12.530061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.848 [2024-11-18 12:06:12.530095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.848 qpair failed and we were unable to recover it. 
00:37:46.848 [2024-11-18 12:06:12.530206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.848 [2024-11-18 12:06:12.530240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.848 qpair failed and we were unable to recover it. 00:37:46.848 [2024-11-18 12:06:12.530391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.848 [2024-11-18 12:06:12.530439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.848 qpair failed and we were unable to recover it. 00:37:46.848 [2024-11-18 12:06:12.530564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.848 [2024-11-18 12:06:12.530602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.848 qpair failed and we were unable to recover it. 00:37:46.848 [2024-11-18 12:06:12.530723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.848 [2024-11-18 12:06:12.530758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.848 qpair failed and we were unable to recover it. 00:37:46.848 [2024-11-18 12:06:12.530875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.848 [2024-11-18 12:06:12.530911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.848 qpair failed and we were unable to recover it. 
00:37:46.848 [2024-11-18 12:06:12.531041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.848 [2024-11-18 12:06:12.531075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.848 qpair failed and we were unable to recover it. 00:37:46.848 [2024-11-18 12:06:12.531193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.848 [2024-11-18 12:06:12.531241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.848 qpair failed and we were unable to recover it. 00:37:46.848 [2024-11-18 12:06:12.531387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.848 [2024-11-18 12:06:12.531423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.848 qpair failed and we were unable to recover it. 00:37:46.848 [2024-11-18 12:06:12.531599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.848 [2024-11-18 12:06:12.531648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.848 qpair failed and we were unable to recover it. 00:37:46.848 [2024-11-18 12:06:12.531767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.848 [2024-11-18 12:06:12.531808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.848 qpair failed and we were unable to recover it. 
00:37:46.848 [2024-11-18 12:06:12.531929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.848 [2024-11-18 12:06:12.531962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.848 qpair failed and we were unable to recover it. 00:37:46.848 [2024-11-18 12:06:12.532122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.848 [2024-11-18 12:06:12.532157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.848 qpair failed and we were unable to recover it. 00:37:46.848 [2024-11-18 12:06:12.532270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.848 [2024-11-18 12:06:12.532305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.848 qpair failed and we were unable to recover it. 00:37:46.848 [2024-11-18 12:06:12.532449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.848 [2024-11-18 12:06:12.532507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.848 qpair failed and we were unable to recover it. 00:37:46.848 [2024-11-18 12:06:12.532644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.848 [2024-11-18 12:06:12.532681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.848 qpair failed and we were unable to recover it. 
00:37:46.848 [2024-11-18 12:06:12.532844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.848 [2024-11-18 12:06:12.532878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.848 qpair failed and we were unable to recover it. 00:37:46.848 [2024-11-18 12:06:12.533014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.848 [2024-11-18 12:06:12.533048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.848 qpair failed and we were unable to recover it. 00:37:46.848 [2024-11-18 12:06:12.533158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.848 [2024-11-18 12:06:12.533192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.848 qpair failed and we were unable to recover it. 00:37:46.848 [2024-11-18 12:06:12.533303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.848 [2024-11-18 12:06:12.533339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.848 qpair failed and we were unable to recover it. 00:37:46.848 [2024-11-18 12:06:12.533499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.848 [2024-11-18 12:06:12.533548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.848 qpair failed and we were unable to recover it. 
00:37:46.848 [2024-11-18 12:06:12.533700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.848 [2024-11-18 12:06:12.533736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.848 qpair failed and we were unable to recover it. 00:37:46.848 [2024-11-18 12:06:12.533885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.848 [2024-11-18 12:06:12.533918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.848 qpair failed and we were unable to recover it. 00:37:46.848 [2024-11-18 12:06:12.534046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.848 [2024-11-18 12:06:12.534079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.848 qpair failed and we were unable to recover it. 00:37:46.848 [2024-11-18 12:06:12.534220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.848 [2024-11-18 12:06:12.534254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.848 qpair failed and we were unable to recover it. 00:37:46.848 [2024-11-18 12:06:12.534367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.848 [2024-11-18 12:06:12.534402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.848 qpair failed and we were unable to recover it. 
00:37:46.848 [2024-11-18 12:06:12.534541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.848 [2024-11-18 12:06:12.534576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.848 qpair failed and we were unable to recover it.
00:37:46.848 [2024-11-18 12:06:12.534681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.848 [2024-11-18 12:06:12.534714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.848 qpair failed and we were unable to recover it.
00:37:46.848 [2024-11-18 12:06:12.534829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.848 [2024-11-18 12:06:12.534862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.848 qpair failed and we were unable to recover it.
00:37:46.848 [2024-11-18 12:06:12.534980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.848 [2024-11-18 12:06:12.535014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.848 qpair failed and we were unable to recover it.
00:37:46.848 [2024-11-18 12:06:12.535165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.848 [2024-11-18 12:06:12.535212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.848 qpair failed and we were unable to recover it.
00:37:46.848 [2024-11-18 12:06:12.535331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.848 [2024-11-18 12:06:12.535366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.848 qpair failed and we were unable to recover it.
00:37:46.848 [2024-11-18 12:06:12.535526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.848 [2024-11-18 12:06:12.535592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.848 qpair failed and we were unable to recover it.
00:37:46.848 [2024-11-18 12:06:12.535734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.848 [2024-11-18 12:06:12.535779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.848 qpair failed and we were unable to recover it.
00:37:46.848 [2024-11-18 12:06:12.535922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.848 [2024-11-18 12:06:12.535955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.848 qpair failed and we were unable to recover it.
00:37:46.848 [2024-11-18 12:06:12.536060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.848 [2024-11-18 12:06:12.536094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.848 qpair failed and we were unable to recover it.
00:37:46.848 [2024-11-18 12:06:12.536254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.848 [2024-11-18 12:06:12.536288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.848 qpair failed and we were unable to recover it.
00:37:46.849 [2024-11-18 12:06:12.536417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.849 [2024-11-18 12:06:12.536466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.849 qpair failed and we were unable to recover it.
00:37:46.849 [2024-11-18 12:06:12.536622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.849 [2024-11-18 12:06:12.536657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.849 qpair failed and we were unable to recover it.
00:37:46.849 [2024-11-18 12:06:12.536806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.849 [2024-11-18 12:06:12.536844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.849 qpair failed and we were unable to recover it.
00:37:46.849 [2024-11-18 12:06:12.536953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.849 [2024-11-18 12:06:12.536988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.849 qpair failed and we were unable to recover it.
00:37:46.849 [2024-11-18 12:06:12.537166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.849 [2024-11-18 12:06:12.537201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.849 qpair failed and we were unable to recover it.
00:37:46.849 [2024-11-18 12:06:12.537333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.849 [2024-11-18 12:06:12.537380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.849 qpair failed and we were unable to recover it.
00:37:46.849 [2024-11-18 12:06:12.537512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.849 [2024-11-18 12:06:12.537560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.849 qpair failed and we were unable to recover it.
00:37:46.849 [2024-11-18 12:06:12.537687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.849 [2024-11-18 12:06:12.537726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.849 qpair failed and we were unable to recover it.
00:37:46.849 [2024-11-18 12:06:12.537870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.849 [2024-11-18 12:06:12.537905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.849 qpair failed and we were unable to recover it.
00:37:46.849 [2024-11-18 12:06:12.538029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.849 [2024-11-18 12:06:12.538063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.849 qpair failed and we were unable to recover it.
00:37:46.849 [2024-11-18 12:06:12.538199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.849 [2024-11-18 12:06:12.538234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.849 qpair failed and we were unable to recover it.
00:37:46.849 [2024-11-18 12:06:12.538333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.849 [2024-11-18 12:06:12.538367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.849 qpair failed and we were unable to recover it.
00:37:46.849 [2024-11-18 12:06:12.538485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.849 [2024-11-18 12:06:12.538528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.849 qpair failed and we were unable to recover it.
00:37:46.849 [2024-11-18 12:06:12.538648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.849 [2024-11-18 12:06:12.538696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.849 qpair failed and we were unable to recover it.
00:37:46.849 [2024-11-18 12:06:12.538856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.849 [2024-11-18 12:06:12.538903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.849 qpair failed and we were unable to recover it.
00:37:46.849 [2024-11-18 12:06:12.539058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.849 [2024-11-18 12:06:12.539093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.849 qpair failed and we were unable to recover it.
00:37:46.849 [2024-11-18 12:06:12.539203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.849 [2024-11-18 12:06:12.539238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.849 qpair failed and we were unable to recover it.
00:37:46.849 [2024-11-18 12:06:12.539368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.849 [2024-11-18 12:06:12.539402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.849 qpair failed and we were unable to recover it.
00:37:46.849 [2024-11-18 12:06:12.539516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.849 [2024-11-18 12:06:12.539551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.849 qpair failed and we were unable to recover it.
00:37:46.849 [2024-11-18 12:06:12.539658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.849 [2024-11-18 12:06:12.539692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.849 qpair failed and we were unable to recover it.
00:37:46.849 [2024-11-18 12:06:12.539866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.849 [2024-11-18 12:06:12.539901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.849 qpair failed and we were unable to recover it.
00:37:46.849 [2024-11-18 12:06:12.540008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.849 [2024-11-18 12:06:12.540042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.849 qpair failed and we were unable to recover it.
00:37:46.849 [2024-11-18 12:06:12.540178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.849 [2024-11-18 12:06:12.540211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.849 qpair failed and we were unable to recover it.
00:37:46.849 [2024-11-18 12:06:12.540319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.849 [2024-11-18 12:06:12.540354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.849 qpair failed and we were unable to recover it.
00:37:46.849 [2024-11-18 12:06:12.540495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.849 [2024-11-18 12:06:12.540530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.849 qpair failed and we were unable to recover it.
00:37:46.849 [2024-11-18 12:06:12.540657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.849 [2024-11-18 12:06:12.540691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.849 qpair failed and we were unable to recover it.
00:37:46.849 [2024-11-18 12:06:12.540826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.849 [2024-11-18 12:06:12.540862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.849 qpair failed and we were unable to recover it.
00:37:46.849 [2024-11-18 12:06:12.541029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.849 [2024-11-18 12:06:12.541063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.849 qpair failed and we were unable to recover it.
00:37:46.849 [2024-11-18 12:06:12.541166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.849 [2024-11-18 12:06:12.541201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.849 qpair failed and we were unable to recover it.
00:37:46.849 [2024-11-18 12:06:12.541316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.849 [2024-11-18 12:06:12.541350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.849 qpair failed and we were unable to recover it.
00:37:46.849 [2024-11-18 12:06:12.541515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.849 [2024-11-18 12:06:12.541549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.849 qpair failed and we were unable to recover it.
00:37:46.849 [2024-11-18 12:06:12.541673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.849 [2024-11-18 12:06:12.541722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.849 qpair failed and we were unable to recover it.
00:37:46.849 [2024-11-18 12:06:12.541845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.849 [2024-11-18 12:06:12.541880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.849 qpair failed and we were unable to recover it.
00:37:46.849 [2024-11-18 12:06:12.542014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.849 [2024-11-18 12:06:12.542048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.849 qpair failed and we were unable to recover it.
00:37:46.849 [2024-11-18 12:06:12.542183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.849 [2024-11-18 12:06:12.542216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.849 qpair failed and we were unable to recover it.
00:37:46.849 [2024-11-18 12:06:12.542380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.849 [2024-11-18 12:06:12.542414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.849 qpair failed and we were unable to recover it.
00:37:46.849 [2024-11-18 12:06:12.542545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.849 [2024-11-18 12:06:12.542594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.849 qpair failed and we were unable to recover it.
00:37:46.849 [2024-11-18 12:06:12.542715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.849 [2024-11-18 12:06:12.542751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.849 qpair failed and we were unable to recover it.
00:37:46.849 [2024-11-18 12:06:12.542913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.849 [2024-11-18 12:06:12.542948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.849 qpair failed and we were unable to recover it.
00:37:46.849 [2024-11-18 12:06:12.543055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.849 [2024-11-18 12:06:12.543089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.849 qpair failed and we were unable to recover it.
00:37:46.849 [2024-11-18 12:06:12.543193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.849 [2024-11-18 12:06:12.543228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.850 qpair failed and we were unable to recover it.
00:37:46.850 [2024-11-18 12:06:12.543369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.850 [2024-11-18 12:06:12.543404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.850 qpair failed and we were unable to recover it.
00:37:46.850 [2024-11-18 12:06:12.543527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.850 [2024-11-18 12:06:12.543562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.850 qpair failed and we were unable to recover it.
00:37:46.850 [2024-11-18 12:06:12.543672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.850 [2024-11-18 12:06:12.543706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.850 qpair failed and we were unable to recover it.
00:37:46.850 [2024-11-18 12:06:12.543814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.850 [2024-11-18 12:06:12.543848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.850 qpair failed and we were unable to recover it.
00:37:46.850 [2024-11-18 12:06:12.543980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.850 [2024-11-18 12:06:12.544014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.850 qpair failed and we were unable to recover it.
00:37:46.850 [2024-11-18 12:06:12.544164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.850 [2024-11-18 12:06:12.544212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.850 qpair failed and we were unable to recover it.
00:37:46.850 [2024-11-18 12:06:12.544346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.850 [2024-11-18 12:06:12.544395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.850 qpair failed and we were unable to recover it.
00:37:46.850 [2024-11-18 12:06:12.544570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.850 [2024-11-18 12:06:12.544606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.850 qpair failed and we were unable to recover it.
00:37:46.850 [2024-11-18 12:06:12.544744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.850 [2024-11-18 12:06:12.544778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.850 qpair failed and we were unable to recover it.
00:37:46.850 [2024-11-18 12:06:12.544914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.850 [2024-11-18 12:06:12.544948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.850 qpair failed and we were unable to recover it.
00:37:46.850 [2024-11-18 12:06:12.545080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.850 [2024-11-18 12:06:12.545113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.850 qpair failed and we were unable to recover it.
00:37:46.850 [2024-11-18 12:06:12.545216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.850 [2024-11-18 12:06:12.545250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.850 qpair failed and we were unable to recover it.
00:37:46.850 [2024-11-18 12:06:12.545387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.850 [2024-11-18 12:06:12.545425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.850 qpair failed and we were unable to recover it.
00:37:46.850 [2024-11-18 12:06:12.545534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.850 [2024-11-18 12:06:12.545569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.850 qpair failed and we were unable to recover it.
00:37:46.850 [2024-11-18 12:06:12.545682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.850 [2024-11-18 12:06:12.545717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.850 qpair failed and we were unable to recover it.
00:37:46.850 [2024-11-18 12:06:12.545873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.850 [2024-11-18 12:06:12.545906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.850 qpair failed and we were unable to recover it.
00:37:46.850 [2024-11-18 12:06:12.546041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.850 [2024-11-18 12:06:12.546074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.850 qpair failed and we were unable to recover it.
00:37:46.850 [2024-11-18 12:06:12.546209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.850 [2024-11-18 12:06:12.546244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.850 qpair failed and we were unable to recover it.
00:37:46.850 [2024-11-18 12:06:12.546376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.850 [2024-11-18 12:06:12.546411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.850 qpair failed and we were unable to recover it.
00:37:46.850 [2024-11-18 12:06:12.546561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.850 [2024-11-18 12:06:12.546596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.850 qpair failed and we were unable to recover it.
00:37:46.850 [2024-11-18 12:06:12.546731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.850 [2024-11-18 12:06:12.546765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.850 qpair failed and we were unable to recover it.
00:37:46.850 [2024-11-18 12:06:12.546897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.850 [2024-11-18 12:06:12.546932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.850 qpair failed and we were unable to recover it.
00:37:46.850 [2024-11-18 12:06:12.547062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.850 [2024-11-18 12:06:12.547096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.850 qpair failed and we were unable to recover it.
00:37:46.850 [2024-11-18 12:06:12.547215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.850 [2024-11-18 12:06:12.547250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.850 qpair failed and we were unable to recover it.
00:37:46.850 [2024-11-18 12:06:12.547384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.850 [2024-11-18 12:06:12.547418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.850 qpair failed and we were unable to recover it.
00:37:46.850 [2024-11-18 12:06:12.547575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.850 [2024-11-18 12:06:12.547610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.850 qpair failed and we were unable to recover it.
00:37:46.850 [2024-11-18 12:06:12.547758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.850 [2024-11-18 12:06:12.547797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.850 qpair failed and we were unable to recover it.
00:37:46.850 [2024-11-18 12:06:12.547936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.850 [2024-11-18 12:06:12.547970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.850 qpair failed and we were unable to recover it.
00:37:46.850 [2024-11-18 12:06:12.548078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.850 [2024-11-18 12:06:12.548111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.850 qpair failed and we were unable to recover it.
00:37:46.850 [2024-11-18 12:06:12.548245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.850 [2024-11-18 12:06:12.548280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.850 qpair failed and we were unable to recover it.
00:37:46.850 [2024-11-18 12:06:12.548391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.850 [2024-11-18 12:06:12.548425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.850 qpair failed and we were unable to recover it.
00:37:46.850 [2024-11-18 12:06:12.548531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.850 [2024-11-18 12:06:12.548566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.850 qpair failed and we were unable to recover it.
00:37:46.850 [2024-11-18 12:06:12.548705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.850 [2024-11-18 12:06:12.548739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.850 qpair failed and we were unable to recover it.
00:37:46.850 [2024-11-18 12:06:12.548900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.850 [2024-11-18 12:06:12.548934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.850 qpair failed and we were unable to recover it.
00:37:46.850 [2024-11-18 12:06:12.549039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.850 [2024-11-18 12:06:12.549073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.850 qpair failed and we were unable to recover it.
00:37:46.850 [2024-11-18 12:06:12.549211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.850 [2024-11-18 12:06:12.549246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.850 qpair failed and we were unable to recover it.
00:37:46.850 [2024-11-18 12:06:12.549366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.850 [2024-11-18 12:06:12.549414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.850 qpair failed and we were unable to recover it.
00:37:46.850 [2024-11-18 12:06:12.549575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.850 [2024-11-18 12:06:12.549623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.850 qpair failed and we were unable to recover it.
00:37:46.850 [2024-11-18 12:06:12.549744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.850 [2024-11-18 12:06:12.549780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.850 qpair failed and we were unable to recover it.
00:37:46.850 [2024-11-18 12:06:12.549890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.850 [2024-11-18 12:06:12.549924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.850 qpair failed and we were unable to recover it.
00:37:46.850 [2024-11-18 12:06:12.550052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.851 [2024-11-18 12:06:12.550086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.851 qpair failed and we were unable to recover it.
00:37:46.851 [2024-11-18 12:06:12.550198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.851 [2024-11-18 12:06:12.550231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.851 qpair failed and we were unable to recover it.
00:37:46.851 [2024-11-18 12:06:12.550333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.851 [2024-11-18 12:06:12.550366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.851 qpair failed and we were unable to recover it.
00:37:46.851 [2024-11-18 12:06:12.550511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.851 [2024-11-18 12:06:12.550546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.851 qpair failed and we were unable to recover it.
00:37:46.851 [2024-11-18 12:06:12.550678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.851 [2024-11-18 12:06:12.550723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.851 qpair failed and we were unable to recover it.
00:37:46.851 [2024-11-18 12:06:12.550859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.851 [2024-11-18 12:06:12.550893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.851 qpair failed and we were unable to recover it.
00:37:46.851 [2024-11-18 12:06:12.551004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.851 [2024-11-18 12:06:12.551038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.851 qpair failed and we were unable to recover it.
00:37:46.851 [2024-11-18 12:06:12.551171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.851 [2024-11-18 12:06:12.551205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.851 qpair failed and we were unable to recover it.
00:37:46.851 [2024-11-18 12:06:12.551317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.851 [2024-11-18 12:06:12.551351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.851 qpair failed and we were unable to recover it.
00:37:46.851 [2024-11-18 12:06:12.551485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.851 [2024-11-18 12:06:12.551540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.851 qpair failed and we were unable to recover it.
00:37:46.851 [2024-11-18 12:06:12.551699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.851 [2024-11-18 12:06:12.551747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.851 qpair failed and we were unable to recover it.
00:37:46.851 [2024-11-18 12:06:12.551879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.851 [2024-11-18 12:06:12.551927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.851 qpair failed and we were unable to recover it.
00:37:46.851 [2024-11-18 12:06:12.552071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.851 [2024-11-18 12:06:12.552110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.851 qpair failed and we were unable to recover it.
00:37:46.851 [2024-11-18 12:06:12.552245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.851 [2024-11-18 12:06:12.552279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.851 qpair failed and we were unable to recover it.
00:37:46.851 [2024-11-18 12:06:12.552416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.851 [2024-11-18 12:06:12.552449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.851 qpair failed and we were unable to recover it.
00:37:46.851 [2024-11-18 12:06:12.552567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.851 [2024-11-18 12:06:12.552601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.851 qpair failed and we were unable to recover it.
00:37:46.851 [2024-11-18 12:06:12.552754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.851 [2024-11-18 12:06:12.552802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.851 qpair failed and we were unable to recover it.
00:37:46.851 [2024-11-18 12:06:12.552941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.851 [2024-11-18 12:06:12.552977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.851 qpair failed and we were unable to recover it.
00:37:46.851 [2024-11-18 12:06:12.553108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.851 [2024-11-18 12:06:12.553143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.851 qpair failed and we were unable to recover it.
00:37:46.851 [2024-11-18 12:06:12.553286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.851 [2024-11-18 12:06:12.553320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.851 qpair failed and we were unable to recover it.
00:37:46.851 [2024-11-18 12:06:12.553479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.851 [2024-11-18 12:06:12.553537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.851 qpair failed and we were unable to recover it.
00:37:46.851 [2024-11-18 12:06:12.553694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.851 [2024-11-18 12:06:12.553742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.851 qpair failed and we were unable to recover it.
00:37:46.851 [2024-11-18 12:06:12.553896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.851 [2024-11-18 12:06:12.553934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.851 qpair failed and we were unable to recover it.
00:37:46.851 [2024-11-18 12:06:12.554076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.851 [2024-11-18 12:06:12.554122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.851 qpair failed and we were unable to recover it.
00:37:46.851 [2024-11-18 12:06:12.554256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.851 [2024-11-18 12:06:12.554291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.851 qpair failed and we were unable to recover it.
00:37:46.851 [2024-11-18 12:06:12.554468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.851 [2024-11-18 12:06:12.554527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.851 qpair failed and we were unable to recover it.
00:37:46.851 [2024-11-18 12:06:12.554652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.851 [2024-11-18 12:06:12.554688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.851 qpair failed and we were unable to recover it.
00:37:46.851 [2024-11-18 12:06:12.554829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.851 [2024-11-18 12:06:12.554878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.851 qpair failed and we were unable to recover it.
00:37:46.851 [2024-11-18 12:06:12.555021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.851 [2024-11-18 12:06:12.555056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.851 qpair failed and we were unable to recover it.
00:37:46.851 [2024-11-18 12:06:12.555170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.851 [2024-11-18 12:06:12.555205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.851 qpair failed and we were unable to recover it.
00:37:46.851 [2024-11-18 12:06:12.555344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.851 [2024-11-18 12:06:12.555378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.851 qpair failed and we were unable to recover it.
00:37:46.851 [2024-11-18 12:06:12.555485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.851 [2024-11-18 12:06:12.555525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.851 qpair failed and we were unable to recover it.
00:37:46.851 [2024-11-18 12:06:12.555657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.851 [2024-11-18 12:06:12.555691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.851 qpair failed and we were unable to recover it.
00:37:46.851 [2024-11-18 12:06:12.555796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.851 [2024-11-18 12:06:12.555829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.851 qpair failed and we were unable to recover it.
00:37:46.851 [2024-11-18 12:06:12.555928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.851 [2024-11-18 12:06:12.555961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.851 qpair failed and we were unable to recover it.
00:37:46.851 [2024-11-18 12:06:12.556136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.851 [2024-11-18 12:06:12.556184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.851 qpair failed and we were unable to recover it.
00:37:46.851 [2024-11-18 12:06:12.556303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.851 [2024-11-18 12:06:12.556338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.851 qpair failed and we were unable to recover it.
00:37:46.851 [2024-11-18 12:06:12.556513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.851 [2024-11-18 12:06:12.556551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.851 qpair failed and we were unable to recover it.
00:37:46.851 [2024-11-18 12:06:12.556661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.851 [2024-11-18 12:06:12.556696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.851 qpair failed and we were unable to recover it.
00:37:46.851 [2024-11-18 12:06:12.556818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.851 [2024-11-18 12:06:12.556855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.851 qpair failed and we were unable to recover it.
00:37:46.851 [2024-11-18 12:06:12.556994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.851 [2024-11-18 12:06:12.557028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.852 qpair failed and we were unable to recover it.
00:37:46.852 [2024-11-18 12:06:12.557159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.852 [2024-11-18 12:06:12.557194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.852 qpair failed and we were unable to recover it.
00:37:46.852 [2024-11-18 12:06:12.557299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.852 [2024-11-18 12:06:12.557332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.852 qpair failed and we were unable to recover it.
00:37:46.852 [2024-11-18 12:06:12.557468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.852 [2024-11-18 12:06:12.557511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.852 qpair failed and we were unable to recover it.
00:37:46.852 [2024-11-18 12:06:12.557621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.852 [2024-11-18 12:06:12.557655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.852 qpair failed and we were unable to recover it.
00:37:46.852 [2024-11-18 12:06:12.557790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.852 [2024-11-18 12:06:12.557824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.852 qpair failed and we were unable to recover it.
00:37:46.852 [2024-11-18 12:06:12.557930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.852 [2024-11-18 12:06:12.557964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.852 qpair failed and we were unable to recover it.
00:37:46.852 [2024-11-18 12:06:12.558098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.852 [2024-11-18 12:06:12.558132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.852 qpair failed and we were unable to recover it.
00:37:46.852 [2024-11-18 12:06:12.558247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.852 [2024-11-18 12:06:12.558281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.852 qpair failed and we were unable to recover it.
00:37:46.852 [2024-11-18 12:06:12.558414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.852 [2024-11-18 12:06:12.558449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.852 qpair failed and we were unable to recover it.
00:37:46.852 [2024-11-18 12:06:12.558584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.852 [2024-11-18 12:06:12.558631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.852 qpair failed and we were unable to recover it.
00:37:46.852 [2024-11-18 12:06:12.558778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.852 [2024-11-18 12:06:12.558815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.852 qpair failed and we were unable to recover it.
00:37:46.852 [2024-11-18 12:06:12.558935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.852 [2024-11-18 12:06:12.558970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.852 qpair failed and we were unable to recover it.
00:37:46.852 [2024-11-18 12:06:12.559112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.852 [2024-11-18 12:06:12.559146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.852 qpair failed and we were unable to recover it.
00:37:46.852 [2024-11-18 12:06:12.559282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.852 [2024-11-18 12:06:12.559317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.852 qpair failed and we were unable to recover it.
00:37:46.852 [2024-11-18 12:06:12.559430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.852 [2024-11-18 12:06:12.559465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.852 qpair failed and we were unable to recover it.
00:37:46.852 [2024-11-18 12:06:12.559585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.852 [2024-11-18 12:06:12.559620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.852 qpair failed and we were unable to recover it.
00:37:46.852 [2024-11-18 12:06:12.559753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.852 [2024-11-18 12:06:12.559793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.852 qpair failed and we were unable to recover it.
00:37:46.852 [2024-11-18 12:06:12.559892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.852 [2024-11-18 12:06:12.559926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.852 qpair failed and we were unable to recover it.
00:37:46.852 [2024-11-18 12:06:12.560066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.852 [2024-11-18 12:06:12.560100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.852 qpair failed and we were unable to recover it.
00:37:46.852 [2024-11-18 12:06:12.560236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.852 [2024-11-18 12:06:12.560270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.852 qpair failed and we were unable to recover it.
00:37:46.852 [2024-11-18 12:06:12.560437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.852 [2024-11-18 12:06:12.560471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.852 qpair failed and we were unable to recover it.
00:37:46.852 [2024-11-18 12:06:12.560640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.852 [2024-11-18 12:06:12.560687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.852 qpair failed and we were unable to recover it.
00:37:46.852 [2024-11-18 12:06:12.560856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.852 [2024-11-18 12:06:12.560904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.852 qpair failed and we were unable to recover it.
00:37:46.852 [2024-11-18 12:06:12.561062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.852 [2024-11-18 12:06:12.561099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.852 qpair failed and we were unable to recover it.
00:37:46.852 [2024-11-18 12:06:12.561235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.852 [2024-11-18 12:06:12.561268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.852 qpair failed and we were unable to recover it.
00:37:46.852 [2024-11-18 12:06:12.561384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.852 [2024-11-18 12:06:12.561418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.852 qpair failed and we were unable to recover it.
00:37:46.852 [2024-11-18 12:06:12.561535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.852 [2024-11-18 12:06:12.561569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.852 qpair failed and we were unable to recover it.
00:37:46.852 [2024-11-18 12:06:12.561670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.852 [2024-11-18 12:06:12.561704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.852 qpair failed and we were unable to recover it.
00:37:46.852 [2024-11-18 12:06:12.561856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.852 [2024-11-18 12:06:12.561894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.852 qpair failed and we were unable to recover it.
00:37:46.852 [2024-11-18 12:06:12.562024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.852 [2024-11-18 12:06:12.562058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.852 qpair failed and we were unable to recover it.
00:37:46.852 [2024-11-18 12:06:12.562176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.852 [2024-11-18 12:06:12.562212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.852 qpair failed and we were unable to recover it.
00:37:46.852 [2024-11-18 12:06:12.562326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.852 [2024-11-18 12:06:12.562361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.852 qpair failed and we were unable to recover it.
00:37:46.852 [2024-11-18 12:06:12.562518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.852 [2024-11-18 12:06:12.562566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.852 qpair failed and we were unable to recover it.
00:37:46.852 [2024-11-18 12:06:12.562688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.852 [2024-11-18 12:06:12.562722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.852 qpair failed and we were unable to recover it.
00:37:46.852 [2024-11-18 12:06:12.562832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.852 [2024-11-18 12:06:12.562866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.852 qpair failed and we were unable to recover it.
00:37:46.852 [2024-11-18 12:06:12.562998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.852 [2024-11-18 12:06:12.563031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.852 qpair failed and we were unable to recover it.
00:37:46.853 [2024-11-18 12:06:12.563145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.853 [2024-11-18 12:06:12.563180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.853 qpair failed and we were unable to recover it.
00:37:46.853 [2024-11-18 12:06:12.563335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.853 [2024-11-18 12:06:12.563383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.853 qpair failed and we were unable to recover it.
00:37:46.853 [2024-11-18 12:06:12.563505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.853 [2024-11-18 12:06:12.563545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.853 qpair failed and we were unable to recover it.
00:37:46.853 [2024-11-18 12:06:12.563658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.853 [2024-11-18 12:06:12.563697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.853 qpair failed and we were unable to recover it.
00:37:46.853 [2024-11-18 12:06:12.563857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.853 [2024-11-18 12:06:12.563892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.853 qpair failed and we were unable to recover it.
00:37:46.853 [2024-11-18 12:06:12.563996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.853 [2024-11-18 12:06:12.564029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.853 qpair failed and we were unable to recover it.
00:37:46.853 [2024-11-18 12:06:12.564189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.853 [2024-11-18 12:06:12.564223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.853 qpair failed and we were unable to recover it.
00:37:46.853 [2024-11-18 12:06:12.564327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.853 [2024-11-18 12:06:12.564361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.853 qpair failed and we were unable to recover it.
00:37:46.853 [2024-11-18 12:06:12.564484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.853 [2024-11-18 12:06:12.564541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.853 qpair failed and we were unable to recover it.
00:37:46.853 [2024-11-18 12:06:12.564687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.853 [2024-11-18 12:06:12.564722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.853 qpair failed and we were unable to recover it.
00:37:46.853 [2024-11-18 12:06:12.564863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.853 [2024-11-18 12:06:12.564897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.853 qpair failed and we were unable to recover it.
00:37:46.853 [2024-11-18 12:06:12.565027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.853 [2024-11-18 12:06:12.565061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.853 qpair failed and we were unable to recover it.
00:37:46.853 [2024-11-18 12:06:12.565159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.853 [2024-11-18 12:06:12.565193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.853 qpair failed and we were unable to recover it.
00:37:46.853 [2024-11-18 12:06:12.565337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.853 [2024-11-18 12:06:12.565386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.853 qpair failed and we were unable to recover it.
00:37:46.853 [2024-11-18 12:06:12.565528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.853 [2024-11-18 12:06:12.565563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.853 qpair failed and we were unable to recover it.
00:37:46.853 [2024-11-18 12:06:12.565689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.853 [2024-11-18 12:06:12.565737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.853 qpair failed and we were unable to recover it.
00:37:46.853 [2024-11-18 12:06:12.565888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.853 [2024-11-18 12:06:12.565923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.853 qpair failed and we were unable to recover it.
00:37:46.853 [2024-11-18 12:06:12.566036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.853 [2024-11-18 12:06:12.566069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.853 qpair failed and we were unable to recover it.
00:37:46.853 [2024-11-18 12:06:12.566231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.853 [2024-11-18 12:06:12.566264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.853 qpair failed and we were unable to recover it.
00:37:46.853 [2024-11-18 12:06:12.566399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.853 [2024-11-18 12:06:12.566433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.853 qpair failed and we were unable to recover it.
00:37:46.853 [2024-11-18 12:06:12.566593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.853 [2024-11-18 12:06:12.566642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.853 qpair failed and we were unable to recover it.
00:37:46.853 [2024-11-18 12:06:12.566758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.853 [2024-11-18 12:06:12.566793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.853 qpair failed and we were unable to recover it. 00:37:46.853 [2024-11-18 12:06:12.566900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.853 [2024-11-18 12:06:12.566934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.853 qpair failed and we were unable to recover it. 00:37:46.853 [2024-11-18 12:06:12.567071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.853 [2024-11-18 12:06:12.567105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.853 qpair failed and we were unable to recover it. 00:37:46.853 [2024-11-18 12:06:12.567218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.853 [2024-11-18 12:06:12.567251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.853 qpair failed and we were unable to recover it. 00:37:46.853 [2024-11-18 12:06:12.567404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.853 [2024-11-18 12:06:12.567452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.853 qpair failed and we were unable to recover it. 
00:37:46.853 [2024-11-18 12:06:12.567598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.853 [2024-11-18 12:06:12.567633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.853 qpair failed and we were unable to recover it. 00:37:46.853 [2024-11-18 12:06:12.567740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.853 [2024-11-18 12:06:12.567775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.853 qpair failed and we were unable to recover it. 00:37:46.853 [2024-11-18 12:06:12.567910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.853 [2024-11-18 12:06:12.567944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.853 qpair failed and we were unable to recover it. 00:37:46.853 [2024-11-18 12:06:12.568084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.853 [2024-11-18 12:06:12.568117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.853 qpair failed and we were unable to recover it. 00:37:46.853 [2024-11-18 12:06:12.568259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.853 [2024-11-18 12:06:12.568299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.853 qpair failed and we were unable to recover it. 
00:37:46.853 [2024-11-18 12:06:12.568440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.853 [2024-11-18 12:06:12.568482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.853 qpair failed and we were unable to recover it. 00:37:46.853 [2024-11-18 12:06:12.568659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.853 [2024-11-18 12:06:12.568707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.853 qpair failed and we were unable to recover it. 00:37:46.853 [2024-11-18 12:06:12.568833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.853 [2024-11-18 12:06:12.568868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.853 qpair failed and we were unable to recover it. 00:37:46.853 [2024-11-18 12:06:12.569015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.853 [2024-11-18 12:06:12.569050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.853 qpair failed and we were unable to recover it. 00:37:46.853 [2024-11-18 12:06:12.569188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.853 [2024-11-18 12:06:12.569222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.853 qpair failed and we were unable to recover it. 
00:37:46.853 [2024-11-18 12:06:12.569353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.853 [2024-11-18 12:06:12.569387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.853 qpair failed and we were unable to recover it. 00:37:46.853 [2024-11-18 12:06:12.569518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.853 [2024-11-18 12:06:12.569566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.853 qpair failed and we were unable to recover it. 00:37:46.853 [2024-11-18 12:06:12.569678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.853 [2024-11-18 12:06:12.569713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.853 qpair failed and we were unable to recover it. 00:37:46.853 [2024-11-18 12:06:12.569849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.853 [2024-11-18 12:06:12.569883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.853 qpair failed and we were unable to recover it. 00:37:46.853 [2024-11-18 12:06:12.570010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.853 [2024-11-18 12:06:12.570044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.854 qpair failed and we were unable to recover it. 
00:37:46.854 [2024-11-18 12:06:12.570146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.854 [2024-11-18 12:06:12.570180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.854 qpair failed and we were unable to recover it. 00:37:46.854 [2024-11-18 12:06:12.570301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.854 [2024-11-18 12:06:12.570345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.854 qpair failed and we were unable to recover it. 00:37:46.854 [2024-11-18 12:06:12.570456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.854 [2024-11-18 12:06:12.570499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.854 qpair failed and we were unable to recover it. 00:37:46.854 [2024-11-18 12:06:12.570624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.854 [2024-11-18 12:06:12.570673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.854 qpair failed and we were unable to recover it. 00:37:46.854 [2024-11-18 12:06:12.570812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.854 [2024-11-18 12:06:12.570846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.854 qpair failed and we were unable to recover it. 
00:37:46.854 [2024-11-18 12:06:12.570985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.854 [2024-11-18 12:06:12.571019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.854 qpair failed and we were unable to recover it. 00:37:46.854 [2024-11-18 12:06:12.571159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.854 [2024-11-18 12:06:12.571193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.854 qpair failed and we were unable to recover it. 00:37:46.854 [2024-11-18 12:06:12.571297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.854 [2024-11-18 12:06:12.571330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.854 qpair failed and we were unable to recover it. 00:37:46.854 [2024-11-18 12:06:12.571433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.854 [2024-11-18 12:06:12.571469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.854 qpair failed and we were unable to recover it. 00:37:46.854 [2024-11-18 12:06:12.571583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.854 [2024-11-18 12:06:12.571618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.854 qpair failed and we were unable to recover it. 
00:37:46.854 [2024-11-18 12:06:12.571730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.854 [2024-11-18 12:06:12.571764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.854 qpair failed and we were unable to recover it. 00:37:46.854 [2024-11-18 12:06:12.571892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.854 [2024-11-18 12:06:12.571925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.854 qpair failed and we were unable to recover it. 00:37:46.854 [2024-11-18 12:06:12.572029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.854 [2024-11-18 12:06:12.572063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.854 qpair failed and we were unable to recover it. 00:37:46.854 [2024-11-18 12:06:12.572173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.854 [2024-11-18 12:06:12.572207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.854 qpair failed and we were unable to recover it. 00:37:46.854 [2024-11-18 12:06:12.572311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.854 [2024-11-18 12:06:12.572346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.854 qpair failed and we were unable to recover it. 
00:37:46.854 [2024-11-18 12:06:12.572462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.854 [2024-11-18 12:06:12.572508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.854 qpair failed and we were unable to recover it. 00:37:46.854 [2024-11-18 12:06:12.572622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.854 [2024-11-18 12:06:12.572661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.854 qpair failed and we were unable to recover it. 00:37:46.854 [2024-11-18 12:06:12.572779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.854 [2024-11-18 12:06:12.572815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.854 qpair failed and we were unable to recover it. 00:37:46.854 [2024-11-18 12:06:12.572932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.854 [2024-11-18 12:06:12.572981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.854 qpair failed and we were unable to recover it. 00:37:46.854 [2024-11-18 12:06:12.573132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.854 [2024-11-18 12:06:12.573168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.854 qpair failed and we were unable to recover it. 
00:37:46.854 [2024-11-18 12:06:12.573307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.854 [2024-11-18 12:06:12.573341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.854 qpair failed and we were unable to recover it. 00:37:46.854 [2024-11-18 12:06:12.573448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.854 [2024-11-18 12:06:12.573481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.854 qpair failed and we were unable to recover it. 00:37:46.854 [2024-11-18 12:06:12.573634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.854 [2024-11-18 12:06:12.573667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.854 qpair failed and we were unable to recover it. 00:37:46.854 [2024-11-18 12:06:12.573799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.854 [2024-11-18 12:06:12.573832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.854 qpair failed and we were unable to recover it. 00:37:46.854 [2024-11-18 12:06:12.573943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.854 [2024-11-18 12:06:12.573976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.854 qpair failed and we were unable to recover it. 
00:37:46.854 [2024-11-18 12:06:12.574076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.854 [2024-11-18 12:06:12.574109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.854 qpair failed and we were unable to recover it. 00:37:46.854 [2024-11-18 12:06:12.574266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.854 [2024-11-18 12:06:12.574300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.854 qpair failed and we were unable to recover it. 00:37:46.854 [2024-11-18 12:06:12.574409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.854 [2024-11-18 12:06:12.574443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.854 qpair failed and we were unable to recover it. 00:37:46.854 [2024-11-18 12:06:12.574605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.854 [2024-11-18 12:06:12.574652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.854 qpair failed and we were unable to recover it. 00:37:46.854 [2024-11-18 12:06:12.574769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.854 [2024-11-18 12:06:12.574805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.854 qpair failed and we were unable to recover it. 
00:37:46.854 [2024-11-18 12:06:12.574936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.854 [2024-11-18 12:06:12.574970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.854 qpair failed and we were unable to recover it. 00:37:46.854 [2024-11-18 12:06:12.575076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.854 [2024-11-18 12:06:12.575111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.854 qpair failed and we were unable to recover it. 00:37:46.854 [2024-11-18 12:06:12.575243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.854 [2024-11-18 12:06:12.575277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.854 qpair failed and we were unable to recover it. 00:37:46.854 [2024-11-18 12:06:12.575385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.854 [2024-11-18 12:06:12.575419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.854 qpair failed and we were unable to recover it. 00:37:46.854 [2024-11-18 12:06:12.575525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.854 [2024-11-18 12:06:12.575560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.854 qpair failed and we were unable to recover it. 
00:37:46.854 [2024-11-18 12:06:12.575664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.854 [2024-11-18 12:06:12.575697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.854 qpair failed and we were unable to recover it. 00:37:46.854 [2024-11-18 12:06:12.575805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.854 [2024-11-18 12:06:12.575838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.854 qpair failed and we were unable to recover it. 00:37:46.854 [2024-11-18 12:06:12.575968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.854 [2024-11-18 12:06:12.576002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.854 qpair failed and we were unable to recover it. 00:37:46.854 [2024-11-18 12:06:12.576126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.854 [2024-11-18 12:06:12.576160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.854 qpair failed and we were unable to recover it. 00:37:46.854 [2024-11-18 12:06:12.576300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.854 [2024-11-18 12:06:12.576333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.854 qpair failed and we were unable to recover it. 
00:37:46.854 [2024-11-18 12:06:12.576446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.855 [2024-11-18 12:06:12.576482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.855 qpair failed and we were unable to recover it. 00:37:46.855 [2024-11-18 12:06:12.576597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.855 [2024-11-18 12:06:12.576637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.855 qpair failed and we were unable to recover it. 00:37:46.855 [2024-11-18 12:06:12.576816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.855 [2024-11-18 12:06:12.576864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.855 qpair failed and we were unable to recover it. 00:37:46.855 [2024-11-18 12:06:12.576977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.855 [2024-11-18 12:06:12.577022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.855 qpair failed and we were unable to recover it. 00:37:46.855 [2024-11-18 12:06:12.577159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.855 [2024-11-18 12:06:12.577192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.855 qpair failed and we were unable to recover it. 
00:37:46.855 [2024-11-18 12:06:12.577287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.855 [2024-11-18 12:06:12.577320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.855 qpair failed and we were unable to recover it. 00:37:46.855 [2024-11-18 12:06:12.577450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.855 [2024-11-18 12:06:12.577483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.855 qpair failed and we were unable to recover it. 00:37:46.855 [2024-11-18 12:06:12.577631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.855 [2024-11-18 12:06:12.577664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.855 qpair failed and we were unable to recover it. 00:37:46.855 [2024-11-18 12:06:12.577769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.855 [2024-11-18 12:06:12.577802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.855 qpair failed and we were unable to recover it. 00:37:46.855 [2024-11-18 12:06:12.577940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.855 [2024-11-18 12:06:12.577973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.855 qpair failed and we were unable to recover it. 
00:37:46.855 [2024-11-18 12:06:12.578068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.855 [2024-11-18 12:06:12.578101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.855 qpair failed and we were unable to recover it. 00:37:46.855 [2024-11-18 12:06:12.578213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.855 [2024-11-18 12:06:12.578248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.855 qpair failed and we were unable to recover it. 00:37:46.855 [2024-11-18 12:06:12.578420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.855 [2024-11-18 12:06:12.578460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.855 qpair failed and we were unable to recover it. 00:37:46.855 [2024-11-18 12:06:12.578604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.855 [2024-11-18 12:06:12.578652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.855 qpair failed and we were unable to recover it. 00:37:46.855 [2024-11-18 12:06:12.578793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.855 [2024-11-18 12:06:12.578827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.855 qpair failed and we were unable to recover it. 
00:37:46.855 [2024-11-18 12:06:12.578941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.855 [2024-11-18 12:06:12.578974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.855 qpair failed and we were unable to recover it. 00:37:46.855 [2024-11-18 12:06:12.579079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.855 [2024-11-18 12:06:12.579115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.855 qpair failed and we were unable to recover it. 00:37:46.855 [2024-11-18 12:06:12.579246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.855 [2024-11-18 12:06:12.579280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.855 qpair failed and we were unable to recover it. 00:37:46.855 [2024-11-18 12:06:12.579417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.855 [2024-11-18 12:06:12.579452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.855 qpair failed and we were unable to recover it. 00:37:46.855 [2024-11-18 12:06:12.579611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.855 [2024-11-18 12:06:12.579660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.855 qpair failed and we were unable to recover it. 
00:37:46.855 [2024-11-18 12:06:12.579781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.855 [2024-11-18 12:06:12.579819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.855 qpair failed and we were unable to recover it. 00:37:46.855 [2024-11-18 12:06:12.579956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.855 [2024-11-18 12:06:12.579991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.855 qpair failed and we were unable to recover it. 00:37:46.855 [2024-11-18 12:06:12.580102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.855 [2024-11-18 12:06:12.580137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.855 qpair failed and we were unable to recover it. 00:37:46.855 [2024-11-18 12:06:12.580231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.855 [2024-11-18 12:06:12.580265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.855 qpair failed and we were unable to recover it. 00:37:46.855 [2024-11-18 12:06:12.580365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.855 [2024-11-18 12:06:12.580399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.855 qpair failed and we were unable to recover it. 
00:37:46.855 [2024-11-18 12:06:12.580526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.855 [2024-11-18 12:06:12.580562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.855 qpair failed and we were unable to recover it.
00:37:46.855 (the preceding three messages repeat from 12:06:12.580668 through 12:06:12.599129, varying only in timestamp and in tqpair value among 0x6150001ffe80, 0x6150001f2f00, 0x615000210000, and 0x61500021ff00; errno 111 is ECONNREFUSED, i.e. nothing is listening on 10.0.0.2:4420)
00:37:46.857 [2024-11-18 12:06:12.592260] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:37:46.858 [2024-11-18 12:06:12.599095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.858 [2024-11-18 12:06:12.599129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.858 qpair failed and we were unable to recover it.
00:37:46.858 [2024-11-18 12:06:12.599237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.858 [2024-11-18 12:06:12.599271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.858 qpair failed and we were unable to recover it. 00:37:46.858 [2024-11-18 12:06:12.599401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.858 [2024-11-18 12:06:12.599435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.858 qpair failed and we were unable to recover it. 00:37:46.858 [2024-11-18 12:06:12.599558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.858 [2024-11-18 12:06:12.599606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.858 qpair failed and we were unable to recover it. 00:37:46.858 [2024-11-18 12:06:12.599750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.858 [2024-11-18 12:06:12.599798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.858 qpair failed and we were unable to recover it. 00:37:46.858 [2024-11-18 12:06:12.599963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.858 [2024-11-18 12:06:12.599999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.858 qpair failed and we were unable to recover it. 
00:37:46.858 [2024-11-18 12:06:12.600135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.858 [2024-11-18 12:06:12.600170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.858 qpair failed and we were unable to recover it. 00:37:46.858 [2024-11-18 12:06:12.600311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.858 [2024-11-18 12:06:12.600345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.858 qpair failed and we were unable to recover it. 00:37:46.858 [2024-11-18 12:06:12.600450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.858 [2024-11-18 12:06:12.600484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.858 qpair failed and we were unable to recover it. 00:37:46.858 [2024-11-18 12:06:12.600639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.858 [2024-11-18 12:06:12.600680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.858 qpair failed and we were unable to recover it. 00:37:46.858 [2024-11-18 12:06:12.600817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.858 [2024-11-18 12:06:12.600854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.858 qpair failed and we were unable to recover it. 
00:37:46.858 [2024-11-18 12:06:12.600996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.858 [2024-11-18 12:06:12.601031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.858 qpair failed and we were unable to recover it. 00:37:46.858 [2024-11-18 12:06:12.601141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.858 [2024-11-18 12:06:12.601175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.858 qpair failed and we were unable to recover it. 00:37:46.858 [2024-11-18 12:06:12.601339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.858 [2024-11-18 12:06:12.601373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.858 qpair failed and we were unable to recover it. 00:37:46.858 [2024-11-18 12:06:12.601514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.858 [2024-11-18 12:06:12.601550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.858 qpair failed and we were unable to recover it. 00:37:46.858 [2024-11-18 12:06:12.601659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.858 [2024-11-18 12:06:12.601705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.858 qpair failed and we were unable to recover it. 
00:37:46.858 [2024-11-18 12:06:12.601824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.858 [2024-11-18 12:06:12.601859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.858 qpair failed and we were unable to recover it. 00:37:46.858 [2024-11-18 12:06:12.601993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.858 [2024-11-18 12:06:12.602027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.858 qpair failed and we were unable to recover it. 00:37:46.858 [2024-11-18 12:06:12.602134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.858 [2024-11-18 12:06:12.602171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.859 qpair failed and we were unable to recover it. 00:37:46.859 [2024-11-18 12:06:12.602312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.859 [2024-11-18 12:06:12.602347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.859 qpair failed and we were unable to recover it. 00:37:46.859 [2024-11-18 12:06:12.602468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.859 [2024-11-18 12:06:12.602524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.859 qpair failed and we were unable to recover it. 
00:37:46.859 [2024-11-18 12:06:12.602664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.859 [2024-11-18 12:06:12.602699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.859 qpair failed and we were unable to recover it. 00:37:46.859 [2024-11-18 12:06:12.602830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.859 [2024-11-18 12:06:12.602865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.859 qpair failed and we were unable to recover it. 00:37:46.859 [2024-11-18 12:06:12.603024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.859 [2024-11-18 12:06:12.603058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.859 qpair failed and we were unable to recover it. 00:37:46.859 [2024-11-18 12:06:12.603167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.859 [2024-11-18 12:06:12.603201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.859 qpair failed and we were unable to recover it. 00:37:46.859 [2024-11-18 12:06:12.603319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.859 [2024-11-18 12:06:12.603352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.859 qpair failed and we were unable to recover it. 
00:37:46.859 [2024-11-18 12:06:12.603516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.859 [2024-11-18 12:06:12.603564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.859 qpair failed and we were unable to recover it. 00:37:46.859 [2024-11-18 12:06:12.603681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.859 [2024-11-18 12:06:12.603718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.859 qpair failed and we were unable to recover it. 00:37:46.859 [2024-11-18 12:06:12.603830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.859 [2024-11-18 12:06:12.603866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.859 qpair failed and we were unable to recover it. 00:37:46.859 [2024-11-18 12:06:12.603995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.859 [2024-11-18 12:06:12.604030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.859 qpair failed and we were unable to recover it. 00:37:46.859 [2024-11-18 12:06:12.604142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.859 [2024-11-18 12:06:12.604176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.859 qpair failed and we were unable to recover it. 
00:37:46.859 [2024-11-18 12:06:12.604340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.859 [2024-11-18 12:06:12.604377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.859 qpair failed and we were unable to recover it. 00:37:46.859 [2024-11-18 12:06:12.604480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.859 [2024-11-18 12:06:12.604520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.859 qpair failed and we were unable to recover it. 00:37:46.859 [2024-11-18 12:06:12.604629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.859 [2024-11-18 12:06:12.604667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.859 qpair failed and we were unable to recover it. 00:37:46.859 [2024-11-18 12:06:12.604826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.859 [2024-11-18 12:06:12.604861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.859 qpair failed and we were unable to recover it. 00:37:46.859 [2024-11-18 12:06:12.604974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.859 [2024-11-18 12:06:12.605016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.859 qpair failed and we were unable to recover it. 
00:37:46.859 [2024-11-18 12:06:12.605159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.859 [2024-11-18 12:06:12.605194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.859 qpair failed and we were unable to recover it. 00:37:46.859 [2024-11-18 12:06:12.605310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.859 [2024-11-18 12:06:12.605346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.859 qpair failed and we were unable to recover it. 00:37:46.859 [2024-11-18 12:06:12.605459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.859 [2024-11-18 12:06:12.605504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.859 qpair failed and we were unable to recover it. 00:37:46.859 [2024-11-18 12:06:12.605640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.859 [2024-11-18 12:06:12.605675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.859 qpair failed and we were unable to recover it. 00:37:46.859 [2024-11-18 12:06:12.605784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.859 [2024-11-18 12:06:12.605819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.859 qpair failed and we were unable to recover it. 
00:37:46.859 [2024-11-18 12:06:12.605935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.859 [2024-11-18 12:06:12.605969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.859 qpair failed and we were unable to recover it. 00:37:46.859 [2024-11-18 12:06:12.606076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.859 [2024-11-18 12:06:12.606110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.859 qpair failed and we were unable to recover it. 00:37:46.859 [2024-11-18 12:06:12.606237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.859 [2024-11-18 12:06:12.606273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.859 qpair failed and we were unable to recover it. 00:37:46.859 [2024-11-18 12:06:12.606397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.859 [2024-11-18 12:06:12.606432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.859 qpair failed and we were unable to recover it. 00:37:46.859 [2024-11-18 12:06:12.606577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.859 [2024-11-18 12:06:12.606625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.859 qpair failed and we were unable to recover it. 
00:37:46.859 [2024-11-18 12:06:12.606745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.859 [2024-11-18 12:06:12.606781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.859 qpair failed and we were unable to recover it. 00:37:46.859 [2024-11-18 12:06:12.606926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.859 [2024-11-18 12:06:12.606961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.859 qpair failed and we were unable to recover it. 00:37:46.859 [2024-11-18 12:06:12.607099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.859 [2024-11-18 12:06:12.607133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.859 qpair failed and we were unable to recover it. 00:37:46.859 [2024-11-18 12:06:12.607271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.859 [2024-11-18 12:06:12.607306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.859 qpair failed and we were unable to recover it. 00:37:46.859 [2024-11-18 12:06:12.607447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.859 [2024-11-18 12:06:12.607483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.859 qpair failed and we were unable to recover it. 
00:37:46.859 [2024-11-18 12:06:12.607622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.859 [2024-11-18 12:06:12.607670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.859 qpair failed and we were unable to recover it. 00:37:46.859 [2024-11-18 12:06:12.607784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.859 [2024-11-18 12:06:12.607819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.859 qpair failed and we were unable to recover it. 00:37:46.859 [2024-11-18 12:06:12.607960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.859 [2024-11-18 12:06:12.607995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.859 qpair failed and we were unable to recover it. 00:37:46.859 [2024-11-18 12:06:12.608134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.859 [2024-11-18 12:06:12.608168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.859 qpair failed and we were unable to recover it. 00:37:46.859 [2024-11-18 12:06:12.608272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.859 [2024-11-18 12:06:12.608305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.859 qpair failed and we were unable to recover it. 
00:37:46.859 [2024-11-18 12:06:12.608417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.859 [2024-11-18 12:06:12.608453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.859 qpair failed and we were unable to recover it. 00:37:46.859 [2024-11-18 12:06:12.608646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.859 [2024-11-18 12:06:12.608695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.859 qpair failed and we were unable to recover it. 00:37:46.859 [2024-11-18 12:06:12.608832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.859 [2024-11-18 12:06:12.608871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.859 qpair failed and we were unable to recover it. 00:37:46.859 [2024-11-18 12:06:12.609050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.860 [2024-11-18 12:06:12.609086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.860 qpair failed and we were unable to recover it. 00:37:46.860 [2024-11-18 12:06:12.609249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.860 [2024-11-18 12:06:12.609315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.860 qpair failed and we were unable to recover it. 
00:37:46.860 [2024-11-18 12:06:12.609431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.860 [2024-11-18 12:06:12.609465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.860 qpair failed and we were unable to recover it. 00:37:46.860 [2024-11-18 12:06:12.609600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.860 [2024-11-18 12:06:12.609649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.860 qpair failed and we were unable to recover it. 00:37:46.860 [2024-11-18 12:06:12.609825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.860 [2024-11-18 12:06:12.609861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.860 qpair failed and we were unable to recover it. 00:37:46.860 [2024-11-18 12:06:12.609971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.860 [2024-11-18 12:06:12.610005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.860 qpair failed and we were unable to recover it. 00:37:46.860 [2024-11-18 12:06:12.610126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.860 [2024-11-18 12:06:12.610161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.860 qpair failed and we were unable to recover it. 
00:37:46.860 [2024-11-18 12:06:12.610322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.860 [2024-11-18 12:06:12.610356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.860 qpair failed and we were unable to recover it. 00:37:46.860 [2024-11-18 12:06:12.610459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.860 [2024-11-18 12:06:12.610500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.860 qpair failed and we were unable to recover it. 00:37:46.860 [2024-11-18 12:06:12.610657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.860 [2024-11-18 12:06:12.610705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.860 qpair failed and we were unable to recover it. 00:37:46.860 [2024-11-18 12:06:12.610851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.860 [2024-11-18 12:06:12.610887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.860 qpair failed and we were unable to recover it. 00:37:46.860 [2024-11-18 12:06:12.610995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.860 [2024-11-18 12:06:12.611029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.860 qpair failed and we were unable to recover it. 
00:37:46.860 [2024-11-18 12:06:12.611163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.860 [2024-11-18 12:06:12.611197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.860 qpair failed and we were unable to recover it. 00:37:46.860 [2024-11-18 12:06:12.611348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.860 [2024-11-18 12:06:12.611397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.860 qpair failed and we were unable to recover it. 00:37:46.860 [2024-11-18 12:06:12.611560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.860 [2024-11-18 12:06:12.611608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.860 qpair failed and we were unable to recover it. 00:37:46.860 [2024-11-18 12:06:12.611732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.860 [2024-11-18 12:06:12.611768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.860 qpair failed and we were unable to recover it. 00:37:46.860 [2024-11-18 12:06:12.611905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.860 [2024-11-18 12:06:12.611948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.860 qpair failed and we were unable to recover it. 
00:37:46.860 [2024-11-18 12:06:12.612081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.860 [2024-11-18 12:06:12.612114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.860 qpair failed and we were unable to recover it. 00:37:46.860 [2024-11-18 12:06:12.612233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.860 [2024-11-18 12:06:12.612267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.860 qpair failed and we were unable to recover it. 00:37:46.860 [2024-11-18 12:06:12.612430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.860 [2024-11-18 12:06:12.612465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.860 qpair failed and we were unable to recover it. 00:37:46.860 [2024-11-18 12:06:12.612644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.860 [2024-11-18 12:06:12.612693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.860 qpair failed and we were unable to recover it. 00:37:46.860 [2024-11-18 12:06:12.612867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.860 [2024-11-18 12:06:12.612916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.860 qpair failed and we were unable to recover it. 
00:37:46.860 [2024-11-18 12:06:12.613038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.860 [2024-11-18 12:06:12.613074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.860 qpair failed and we were unable to recover it. 00:37:46.860 [2024-11-18 12:06:12.613244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.860 [2024-11-18 12:06:12.613279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.860 qpair failed and we were unable to recover it. 00:37:46.860 [2024-11-18 12:06:12.613407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.860 [2024-11-18 12:06:12.613443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.860 qpair failed and we were unable to recover it. 00:37:46.860 [2024-11-18 12:06:12.613613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.860 [2024-11-18 12:06:12.613662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.860 qpair failed and we were unable to recover it. 00:37:46.860 [2024-11-18 12:06:12.613810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.860 [2024-11-18 12:06:12.613859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.860 qpair failed and we were unable to recover it. 
00:37:46.860 [2024-11-18 12:06:12.613978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.860 [2024-11-18 12:06:12.614012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.860 qpair failed and we were unable to recover it. 00:37:46.860 [2024-11-18 12:06:12.614144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.860 [2024-11-18 12:06:12.614178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.860 qpair failed and we were unable to recover it. 00:37:46.860 [2024-11-18 12:06:12.614308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.860 [2024-11-18 12:06:12.614342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.860 qpair failed and we were unable to recover it. 00:37:46.860 [2024-11-18 12:06:12.614523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.860 [2024-11-18 12:06:12.614572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.860 qpair failed and we were unable to recover it. 00:37:46.860 [2024-11-18 12:06:12.614715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.860 [2024-11-18 12:06:12.614763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.860 qpair failed and we were unable to recover it. 
00:37:46.860 [2024-11-18 12:06:12.614883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.860 [2024-11-18 12:06:12.614919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.860 qpair failed and we were unable to recover it. 00:37:46.860 [2024-11-18 12:06:12.615032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.860 [2024-11-18 12:06:12.615067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.860 qpair failed and we were unable to recover it. 00:37:46.860 [2024-11-18 12:06:12.615200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.860 [2024-11-18 12:06:12.615234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.860 qpair failed and we were unable to recover it. 00:37:46.860 [2024-11-18 12:06:12.615363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.860 [2024-11-18 12:06:12.615411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.860 qpair failed and we were unable to recover it. 00:37:46.860 [2024-11-18 12:06:12.615531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.861 [2024-11-18 12:06:12.615568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.861 qpair failed and we were unable to recover it. 
00:37:46.861 [2024-11-18 12:06:12.615688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.861 [2024-11-18 12:06:12.615722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.861 qpair failed and we were unable to recover it. 00:37:46.861 [2024-11-18 12:06:12.615888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.861 [2024-11-18 12:06:12.615923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.861 qpair failed and we were unable to recover it. 00:37:46.861 [2024-11-18 12:06:12.616025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.861 [2024-11-18 12:06:12.616059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.861 qpair failed and we were unable to recover it. 00:37:46.861 [2024-11-18 12:06:12.616189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.861 [2024-11-18 12:06:12.616223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.861 qpair failed and we were unable to recover it. 00:37:46.861 [2024-11-18 12:06:12.616362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.861 [2024-11-18 12:06:12.616398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.861 qpair failed and we were unable to recover it. 
00:37:46.861 [2024-11-18 12:06:12.616520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.861 [2024-11-18 12:06:12.616556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.861 qpair failed and we were unable to recover it. 00:37:46.861 [2024-11-18 12:06:12.616737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.861 [2024-11-18 12:06:12.616796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.861 qpair failed and we were unable to recover it. 00:37:46.861 [2024-11-18 12:06:12.616919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.861 [2024-11-18 12:06:12.616955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.861 qpair failed and we were unable to recover it. 00:37:46.861 [2024-11-18 12:06:12.617102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.861 [2024-11-18 12:06:12.617150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.861 qpair failed and we were unable to recover it. 00:37:46.861 [2024-11-18 12:06:12.617270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.861 [2024-11-18 12:06:12.617306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.861 qpair failed and we were unable to recover it. 
00:37:46.861 [2024-11-18 12:06:12.617436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.861 [2024-11-18 12:06:12.617484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.861 qpair failed and we were unable to recover it. 00:37:46.861 [2024-11-18 12:06:12.617624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.861 [2024-11-18 12:06:12.617661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.861 qpair failed and we were unable to recover it. 00:37:46.861 [2024-11-18 12:06:12.617799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.861 [2024-11-18 12:06:12.617833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.861 qpair failed and we were unable to recover it. 00:37:46.861 [2024-11-18 12:06:12.617985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.861 [2024-11-18 12:06:12.618019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.861 qpair failed and we were unable to recover it. 00:37:46.861 [2024-11-18 12:06:12.618130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.861 [2024-11-18 12:06:12.618164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.861 qpair failed and we were unable to recover it. 
00:37:46.861 [2024-11-18 12:06:12.618295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.861 [2024-11-18 12:06:12.618328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.861 qpair failed and we were unable to recover it. 00:37:46.861 [2024-11-18 12:06:12.618450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.861 [2024-11-18 12:06:12.618506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.861 qpair failed and we were unable to recover it. 00:37:46.861 [2024-11-18 12:06:12.618623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.861 [2024-11-18 12:06:12.618660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.861 qpair failed and we were unable to recover it. 00:37:46.861 [2024-11-18 12:06:12.618799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.861 [2024-11-18 12:06:12.618834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.861 qpair failed and we were unable to recover it. 00:37:46.861 [2024-11-18 12:06:12.618972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.861 [2024-11-18 12:06:12.619012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.861 qpair failed and we were unable to recover it. 
00:37:46.861 [2024-11-18 12:06:12.619159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.861 [2024-11-18 12:06:12.619208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.861 qpair failed and we were unable to recover it. 00:37:46.861 [2024-11-18 12:06:12.619357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.861 [2024-11-18 12:06:12.619395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.861 qpair failed and we were unable to recover it. 00:37:46.861 [2024-11-18 12:06:12.619527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.861 [2024-11-18 12:06:12.619563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.861 qpair failed and we were unable to recover it. 00:37:46.861 [2024-11-18 12:06:12.619671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.861 [2024-11-18 12:06:12.619704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.861 qpair failed and we were unable to recover it. 00:37:46.861 [2024-11-18 12:06:12.619823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.861 [2024-11-18 12:06:12.619856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.861 qpair failed and we were unable to recover it. 
00:37:46.861 [2024-11-18 12:06:12.619993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.861 [2024-11-18 12:06:12.620027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.861 qpair failed and we were unable to recover it. 00:37:46.861 [2024-11-18 12:06:12.620129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.861 [2024-11-18 12:06:12.620162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.861 qpair failed and we were unable to recover it. 00:37:46.861 [2024-11-18 12:06:12.620274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.861 [2024-11-18 12:06:12.620307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.861 qpair failed and we were unable to recover it. 00:37:46.861 [2024-11-18 12:06:12.620433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.861 [2024-11-18 12:06:12.620467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.861 qpair failed and we were unable to recover it. 00:37:46.861 [2024-11-18 12:06:12.620611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.861 [2024-11-18 12:06:12.620660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.861 qpair failed and we were unable to recover it. 
00:37:46.861 [2024-11-18 12:06:12.620778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.861 [2024-11-18 12:06:12.620818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.861 qpair failed and we were unable to recover it. 00:37:46.861 [2024-11-18 12:06:12.620930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.861 [2024-11-18 12:06:12.620965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.861 qpair failed and we were unable to recover it. 00:37:46.861 [2024-11-18 12:06:12.621080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.861 [2024-11-18 12:06:12.621115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.861 qpair failed and we were unable to recover it. 00:37:46.861 [2024-11-18 12:06:12.621256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.861 [2024-11-18 12:06:12.621291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.861 qpair failed and we were unable to recover it. 00:37:46.861 [2024-11-18 12:06:12.621441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.861 [2024-11-18 12:06:12.621488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.861 qpair failed and we were unable to recover it. 
00:37:46.861 [2024-11-18 12:06:12.621658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.861 [2024-11-18 12:06:12.621694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.861 qpair failed and we were unable to recover it. 00:37:46.861 [2024-11-18 12:06:12.621830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.861 [2024-11-18 12:06:12.621863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.861 qpair failed and we were unable to recover it. 00:37:46.861 [2024-11-18 12:06:12.621997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.861 [2024-11-18 12:06:12.622031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.861 qpair failed and we were unable to recover it. 00:37:46.861 [2024-11-18 12:06:12.622131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.861 [2024-11-18 12:06:12.622164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.861 qpair failed and we were unable to recover it. 00:37:46.861 [2024-11-18 12:06:12.622270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.861 [2024-11-18 12:06:12.622304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.861 qpair failed and we were unable to recover it. 
00:37:46.862 [2024-11-18 12:06:12.622439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.862 [2024-11-18 12:06:12.622473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.862 qpair failed and we were unable to recover it. 00:37:46.862 [2024-11-18 12:06:12.622612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.862 [2024-11-18 12:06:12.622660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.862 qpair failed and we were unable to recover it. 00:37:46.862 [2024-11-18 12:06:12.622787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.862 [2024-11-18 12:06:12.622826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.862 qpair failed and we were unable to recover it. 00:37:46.862 [2024-11-18 12:06:12.623013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.862 [2024-11-18 12:06:12.623062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.862 qpair failed and we were unable to recover it. 00:37:46.862 [2024-11-18 12:06:12.623176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.862 [2024-11-18 12:06:12.623210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.862 qpair failed and we were unable to recover it. 
00:37:46.862 [2024-11-18 12:06:12.623325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.862 [2024-11-18 12:06:12.623359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.862 qpair failed and we were unable to recover it. 00:37:46.862 [2024-11-18 12:06:12.623488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.862 [2024-11-18 12:06:12.623531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.862 qpair failed and we were unable to recover it. 00:37:46.862 [2024-11-18 12:06:12.623702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.862 [2024-11-18 12:06:12.623737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.862 qpair failed and we were unable to recover it. 00:37:46.862 [2024-11-18 12:06:12.623883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.862 [2024-11-18 12:06:12.623916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.862 qpair failed and we were unable to recover it. 00:37:46.862 [2024-11-18 12:06:12.624029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.862 [2024-11-18 12:06:12.624064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.862 qpair failed and we were unable to recover it. 
00:37:46.862 [2024-11-18 12:06:12.624169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.862 [2024-11-18 12:06:12.624207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.862 qpair failed and we were unable to recover it. 00:37:46.862 [2024-11-18 12:06:12.624349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.862 [2024-11-18 12:06:12.624384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.862 qpair failed and we were unable to recover it. 00:37:46.862 [2024-11-18 12:06:12.624555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.862 [2024-11-18 12:06:12.624591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.862 qpair failed and we were unable to recover it. 00:37:46.862 [2024-11-18 12:06:12.624720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.862 [2024-11-18 12:06:12.624754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.862 qpair failed and we were unable to recover it. 00:37:46.862 [2024-11-18 12:06:12.624877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.862 [2024-11-18 12:06:12.624911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.862 qpair failed and we were unable to recover it. 
00:37:46.862 [2024-11-18 12:06:12.625014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.862 [2024-11-18 12:06:12.625048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.862 qpair failed and we were unable to recover it. 00:37:46.862 [2024-11-18 12:06:12.625188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.862 [2024-11-18 12:06:12.625233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.862 qpair failed and we were unable to recover it. 00:37:46.862 [2024-11-18 12:06:12.625344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.862 [2024-11-18 12:06:12.625378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.862 qpair failed and we were unable to recover it. 00:37:46.862 [2024-11-18 12:06:12.625514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.862 [2024-11-18 12:06:12.625556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.862 qpair failed and we were unable to recover it. 00:37:46.862 [2024-11-18 12:06:12.625697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.862 [2024-11-18 12:06:12.625738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.862 qpair failed and we were unable to recover it. 
00:37:46.862 [2024-11-18 12:06:12.625862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.862 [2024-11-18 12:06:12.625895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.862 qpair failed and we were unable to recover it. 00:37:46.862 [2024-11-18 12:06:12.626025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.862 [2024-11-18 12:06:12.626060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.862 qpair failed and we were unable to recover it. 00:37:46.862 [2024-11-18 12:06:12.626201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.862 [2024-11-18 12:06:12.626237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.862 qpair failed and we were unable to recover it. 00:37:46.862 [2024-11-18 12:06:12.626346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.862 [2024-11-18 12:06:12.626380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.862 qpair failed and we were unable to recover it. 00:37:46.862 [2024-11-18 12:06:12.626523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.862 [2024-11-18 12:06:12.626560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.862 qpair failed and we were unable to recover it. 
00:37:46.862 [2024-11-18 12:06:12.626671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.862 [2024-11-18 12:06:12.626706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.862 qpair failed and we were unable to recover it. 00:37:46.862 [2024-11-18 12:06:12.626889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.862 [2024-11-18 12:06:12.626937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.862 qpair failed and we were unable to recover it. 00:37:46.862 [2024-11-18 12:06:12.627080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.862 [2024-11-18 12:06:12.627115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.862 qpair failed and we were unable to recover it. 00:37:46.862 [2024-11-18 12:06:12.627253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.862 [2024-11-18 12:06:12.627288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.862 qpair failed and we were unable to recover it. 00:37:46.862 [2024-11-18 12:06:12.627428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.862 [2024-11-18 12:06:12.627462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.862 qpair failed and we were unable to recover it. 
00:37:46.862 [2024-11-18 12:06:12.627621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.862 [2024-11-18 12:06:12.627655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.862 qpair failed and we were unable to recover it. 00:37:46.862 [2024-11-18 12:06:12.627756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.862 [2024-11-18 12:06:12.627799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.862 qpair failed and we were unable to recover it. 00:37:46.862 [2024-11-18 12:06:12.627895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.862 [2024-11-18 12:06:12.627930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.862 qpair failed and we were unable to recover it. 00:37:46.862 [2024-11-18 12:06:12.628070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.862 [2024-11-18 12:06:12.628104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.862 qpair failed and we were unable to recover it. 00:37:46.862 [2024-11-18 12:06:12.628205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.862 [2024-11-18 12:06:12.628239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.862 qpair failed and we were unable to recover it. 
00:37:46.862 [2024-11-18 12:06:12.628391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.862 [2024-11-18 12:06:12.628440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.862 qpair failed and we were unable to recover it. 00:37:46.862 [2024-11-18 12:06:12.628599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.862 [2024-11-18 12:06:12.628637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.862 qpair failed and we were unable to recover it. 00:37:46.862 [2024-11-18 12:06:12.628753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.862 [2024-11-18 12:06:12.628793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.862 qpair failed and we were unable to recover it. 00:37:46.862 [2024-11-18 12:06:12.628943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.862 [2024-11-18 12:06:12.628977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.862 qpair failed and we were unable to recover it. 00:37:46.862 [2024-11-18 12:06:12.629088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.862 [2024-11-18 12:06:12.629122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.862 qpair failed and we were unable to recover it. 
00:37:46.862-00:37:46.866 [2024-11-18 12:06:12.629225 - 12:06:12.647813] (the same posix.c:1054:posix_sock_create "connect() failed, errno = 111" and nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock "sock connection error" pair repeated approximately 110 more times, cycling over tqpair handles 0x6150001f2f00, 0x6150001ffe80, 0x61500021ff00, and 0x615000210000, all targeting addr=10.0.0.2, port=4420; each attempt ended with "qpair failed and we were unable to recover it." errno 111 is ECONNREFUSED, i.e. the target was not listening on 10.0.0.2:4420.)
00:37:46.866 [2024-11-18 12:06:12.647921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.866 [2024-11-18 12:06:12.647957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.866 qpair failed and we were unable to recover it. 00:37:46.866 [2024-11-18 12:06:12.648095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.866 [2024-11-18 12:06:12.648129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.866 qpair failed and we were unable to recover it. 00:37:46.866 [2024-11-18 12:06:12.648296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.866 [2024-11-18 12:06:12.648330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.866 qpair failed and we were unable to recover it. 00:37:46.866 [2024-11-18 12:06:12.648466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.866 [2024-11-18 12:06:12.648531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.866 qpair failed and we were unable to recover it. 00:37:46.866 [2024-11-18 12:06:12.648690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.866 [2024-11-18 12:06:12.648738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.866 qpair failed and we were unable to recover it. 
00:37:46.866 [2024-11-18 12:06:12.648928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.866 [2024-11-18 12:06:12.648976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.866 qpair failed and we were unable to recover it. 00:37:46.866 [2024-11-18 12:06:12.649093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.866 [2024-11-18 12:06:12.649129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.866 qpair failed and we were unable to recover it. 00:37:46.866 [2024-11-18 12:06:12.649267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.866 [2024-11-18 12:06:12.649301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.866 qpair failed and we were unable to recover it. 00:37:46.866 [2024-11-18 12:06:12.649435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.866 [2024-11-18 12:06:12.649469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.866 qpair failed and we were unable to recover it. 00:37:46.866 [2024-11-18 12:06:12.649599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.866 [2024-11-18 12:06:12.649634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.866 qpair failed and we were unable to recover it. 
00:37:46.866 [2024-11-18 12:06:12.649792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.866 [2024-11-18 12:06:12.649841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.866 qpair failed and we were unable to recover it. 00:37:46.866 [2024-11-18 12:06:12.649985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.866 [2024-11-18 12:06:12.650023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.866 qpair failed and we were unable to recover it. 00:37:46.866 [2024-11-18 12:06:12.650131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.866 [2024-11-18 12:06:12.650165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.866 qpair failed and we were unable to recover it. 00:37:46.866 [2024-11-18 12:06:12.650296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.866 [2024-11-18 12:06:12.650331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.866 qpair failed and we were unable to recover it. 00:37:46.866 [2024-11-18 12:06:12.650466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.866 [2024-11-18 12:06:12.650518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.866 qpair failed and we were unable to recover it. 
00:37:46.866 [2024-11-18 12:06:12.650629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.866 [2024-11-18 12:06:12.650664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.866 qpair failed and we were unable to recover it. 00:37:46.866 [2024-11-18 12:06:12.650769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.866 [2024-11-18 12:06:12.650818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.866 qpair failed and we were unable to recover it. 00:37:46.866 [2024-11-18 12:06:12.650970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.866 [2024-11-18 12:06:12.651006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.866 qpair failed and we were unable to recover it. 00:37:46.866 [2024-11-18 12:06:12.651120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.866 [2024-11-18 12:06:12.651167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.866 qpair failed and we were unable to recover it. 00:37:46.866 [2024-11-18 12:06:12.651306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.866 [2024-11-18 12:06:12.651342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.866 qpair failed and we were unable to recover it. 
00:37:46.866 [2024-11-18 12:06:12.651471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.866 [2024-11-18 12:06:12.651537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.866 qpair failed and we were unable to recover it. 00:37:46.866 [2024-11-18 12:06:12.651664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.866 [2024-11-18 12:06:12.651702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.866 qpair failed and we were unable to recover it. 00:37:46.866 [2024-11-18 12:06:12.651833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.866 [2024-11-18 12:06:12.651869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.866 qpair failed and we were unable to recover it. 00:37:46.866 [2024-11-18 12:06:12.652011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.866 [2024-11-18 12:06:12.652046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.866 qpair failed and we were unable to recover it. 00:37:46.866 [2024-11-18 12:06:12.652166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.866 [2024-11-18 12:06:12.652201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.866 qpair failed and we were unable to recover it. 
00:37:46.866 [2024-11-18 12:06:12.652364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.866 [2024-11-18 12:06:12.652400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.866 qpair failed and we were unable to recover it. 00:37:46.866 [2024-11-18 12:06:12.652576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.866 [2024-11-18 12:06:12.652625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.866 qpair failed and we were unable to recover it. 00:37:46.866 [2024-11-18 12:06:12.652736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.866 [2024-11-18 12:06:12.652787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.866 qpair failed and we were unable to recover it. 00:37:46.866 [2024-11-18 12:06:12.652925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.866 [2024-11-18 12:06:12.652959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.866 qpair failed and we were unable to recover it. 00:37:46.866 [2024-11-18 12:06:12.653065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.866 [2024-11-18 12:06:12.653098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.866 qpair failed and we were unable to recover it. 
00:37:46.866 [2024-11-18 12:06:12.653216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.866 [2024-11-18 12:06:12.653252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.866 qpair failed and we were unable to recover it. 00:37:46.866 [2024-11-18 12:06:12.653363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.866 [2024-11-18 12:06:12.653397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.866 qpair failed and we were unable to recover it. 00:37:46.867 [2024-11-18 12:06:12.653511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.867 [2024-11-18 12:06:12.653545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.867 qpair failed and we were unable to recover it. 00:37:46.867 [2024-11-18 12:06:12.653700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.867 [2024-11-18 12:06:12.653747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.867 qpair failed and we were unable to recover it. 00:37:46.867 [2024-11-18 12:06:12.653914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.867 [2024-11-18 12:06:12.653950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.867 qpair failed and we were unable to recover it. 
00:37:46.867 [2024-11-18 12:06:12.654087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.867 [2024-11-18 12:06:12.654122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.867 qpair failed and we were unable to recover it. 00:37:46.867 [2024-11-18 12:06:12.654279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.867 [2024-11-18 12:06:12.654313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.867 qpair failed and we were unable to recover it. 00:37:46.867 [2024-11-18 12:06:12.654447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.867 [2024-11-18 12:06:12.654484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.867 qpair failed and we were unable to recover it. 00:37:46.867 [2024-11-18 12:06:12.654602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.867 [2024-11-18 12:06:12.654637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.867 qpair failed and we were unable to recover it. 00:37:46.867 [2024-11-18 12:06:12.654784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.867 [2024-11-18 12:06:12.654818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.867 qpair failed and we were unable to recover it. 
00:37:46.867 [2024-11-18 12:06:12.654958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.867 [2024-11-18 12:06:12.654992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.867 qpair failed and we were unable to recover it. 00:37:46.867 [2024-11-18 12:06:12.655097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.867 [2024-11-18 12:06:12.655131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.867 qpair failed and we were unable to recover it. 00:37:46.867 [2024-11-18 12:06:12.655298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.867 [2024-11-18 12:06:12.655333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.867 qpair failed and we were unable to recover it. 00:37:46.867 [2024-11-18 12:06:12.655502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.867 [2024-11-18 12:06:12.655551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.867 qpair failed and we were unable to recover it. 00:37:46.867 [2024-11-18 12:06:12.655696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.867 [2024-11-18 12:06:12.655733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.867 qpair failed and we were unable to recover it. 
00:37:46.867 [2024-11-18 12:06:12.655874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.867 [2024-11-18 12:06:12.655908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.867 qpair failed and we were unable to recover it. 00:37:46.867 [2024-11-18 12:06:12.656020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.867 [2024-11-18 12:06:12.656054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.867 qpair failed and we were unable to recover it. 00:37:46.867 [2024-11-18 12:06:12.656159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.867 [2024-11-18 12:06:12.656194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.867 qpair failed and we were unable to recover it. 00:37:46.867 [2024-11-18 12:06:12.656326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.867 [2024-11-18 12:06:12.656360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.867 qpair failed and we were unable to recover it. 00:37:46.867 [2024-11-18 12:06:12.656481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.867 [2024-11-18 12:06:12.656537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.867 qpair failed and we were unable to recover it. 
00:37:46.867 [2024-11-18 12:06:12.656673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.867 [2024-11-18 12:06:12.656721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.867 qpair failed and we were unable to recover it. 00:37:46.867 [2024-11-18 12:06:12.656848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.867 [2024-11-18 12:06:12.656882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.867 qpair failed and we were unable to recover it. 00:37:46.867 [2024-11-18 12:06:12.657045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.867 [2024-11-18 12:06:12.657079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.867 qpair failed and we were unable to recover it. 00:37:46.867 [2024-11-18 12:06:12.657189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.867 [2024-11-18 12:06:12.657224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.867 qpair failed and we were unable to recover it. 00:37:46.867 [2024-11-18 12:06:12.657353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.867 [2024-11-18 12:06:12.657387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.867 qpair failed and we were unable to recover it. 
00:37:46.867 [2024-11-18 12:06:12.657519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.867 [2024-11-18 12:06:12.657567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.867 qpair failed and we were unable to recover it. 00:37:46.867 [2024-11-18 12:06:12.657685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.867 [2024-11-18 12:06:12.657727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.867 qpair failed and we were unable to recover it. 00:37:46.867 [2024-11-18 12:06:12.657832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.867 [2024-11-18 12:06:12.657866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.867 qpair failed and we were unable to recover it. 00:37:46.867 [2024-11-18 12:06:12.658028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.867 [2024-11-18 12:06:12.658062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.867 qpair failed and we were unable to recover it. 00:37:46.867 [2024-11-18 12:06:12.658190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.867 [2024-11-18 12:06:12.658224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.867 qpair failed and we were unable to recover it. 
00:37:46.867 [2024-11-18 12:06:12.658391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.867 [2024-11-18 12:06:12.658430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.867 qpair failed and we were unable to recover it. 00:37:46.867 [2024-11-18 12:06:12.658549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.867 [2024-11-18 12:06:12.658585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.867 qpair failed and we were unable to recover it. 00:37:46.867 [2024-11-18 12:06:12.658692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.867 [2024-11-18 12:06:12.658727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.867 qpair failed and we were unable to recover it. 00:37:46.867 [2024-11-18 12:06:12.658859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.867 [2024-11-18 12:06:12.658893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.867 qpair failed and we were unable to recover it. 00:37:46.867 [2024-11-18 12:06:12.658998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.867 [2024-11-18 12:06:12.659032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.867 qpair failed and we were unable to recover it. 
00:37:46.867 [2024-11-18 12:06:12.659163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.867 [2024-11-18 12:06:12.659197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.867 qpair failed and we were unable to recover it. 00:37:46.867 [2024-11-18 12:06:12.659328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.867 [2024-11-18 12:06:12.659362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.867 qpair failed and we were unable to recover it. 00:37:46.867 [2024-11-18 12:06:12.659471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.867 [2024-11-18 12:06:12.659516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.867 qpair failed and we were unable to recover it. 00:37:46.867 [2024-11-18 12:06:12.659670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.867 [2024-11-18 12:06:12.659718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.867 qpair failed and we were unable to recover it. 00:37:46.867 [2024-11-18 12:06:12.659836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.868 [2024-11-18 12:06:12.659871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.868 qpair failed and we were unable to recover it. 
00:37:46.868 [2024-11-18 12:06:12.659992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.868 [2024-11-18 12:06:12.660027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.868 qpair failed and we were unable to recover it. 00:37:46.868 [2024-11-18 12:06:12.660187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.868 [2024-11-18 12:06:12.660220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.868 qpair failed and we were unable to recover it. 00:37:46.868 [2024-11-18 12:06:12.660334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.868 [2024-11-18 12:06:12.660367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.868 qpair failed and we were unable to recover it. 00:37:46.868 [2024-11-18 12:06:12.660475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.868 [2024-11-18 12:06:12.660517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.868 qpair failed and we were unable to recover it. 00:37:46.868 [2024-11-18 12:06:12.660639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.868 [2024-11-18 12:06:12.660674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.868 qpair failed and we were unable to recover it. 
00:37:46.868 [2024-11-18 12:06:12.660817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.868 [2024-11-18 12:06:12.660853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.868 qpair failed and we were unable to recover it. 00:37:46.868 [2024-11-18 12:06:12.660966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.868 [2024-11-18 12:06:12.661001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.868 qpair failed and we were unable to recover it. 00:37:46.868 [2024-11-18 12:06:12.661110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.868 [2024-11-18 12:06:12.661144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.868 qpair failed and we were unable to recover it. 00:37:46.868 [2024-11-18 12:06:12.661307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.868 [2024-11-18 12:06:12.661340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.868 qpair failed and we were unable to recover it. 00:37:46.868 [2024-11-18 12:06:12.661443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.868 [2024-11-18 12:06:12.661478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.868 qpair failed and we were unable to recover it. 
00:37:46.871 [2024-11-18 12:06:12.680197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.871 [2024-11-18 12:06:12.680232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.871 qpair failed and we were unable to recover it. 00:37:46.871 [2024-11-18 12:06:12.680361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.871 [2024-11-18 12:06:12.680395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.871 qpair failed and we were unable to recover it. 00:37:46.871 [2024-11-18 12:06:12.680533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.871 [2024-11-18 12:06:12.680581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.871 qpair failed and we were unable to recover it. 00:37:46.871 [2024-11-18 12:06:12.680699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.871 [2024-11-18 12:06:12.680734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.871 qpair failed and we were unable to recover it. 00:37:46.871 [2024-11-18 12:06:12.680866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.871 [2024-11-18 12:06:12.680900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.871 qpair failed and we were unable to recover it. 
00:37:46.871 [2024-11-18 12:06:12.681030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.871 [2024-11-18 12:06:12.681064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.871 qpair failed and we were unable to recover it. 00:37:46.871 [2024-11-18 12:06:12.681199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.871 [2024-11-18 12:06:12.681232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.871 qpair failed and we were unable to recover it. 00:37:46.871 [2024-11-18 12:06:12.681347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.871 [2024-11-18 12:06:12.681384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.871 qpair failed and we were unable to recover it. 00:37:46.871 [2024-11-18 12:06:12.681518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.871 [2024-11-18 12:06:12.681554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.871 qpair failed and we were unable to recover it. 00:37:46.871 [2024-11-18 12:06:12.681662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.871 [2024-11-18 12:06:12.681697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.871 qpair failed and we were unable to recover it. 
00:37:46.871 [2024-11-18 12:06:12.681847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.871 [2024-11-18 12:06:12.681882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.871 qpair failed and we were unable to recover it. 00:37:46.871 [2024-11-18 12:06:12.682045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.871 [2024-11-18 12:06:12.682079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.871 qpair failed and we were unable to recover it. 00:37:47.136 [2024-11-18 12:06:12.682189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.136 [2024-11-18 12:06:12.682223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.136 qpair failed and we were unable to recover it. 00:37:47.136 [2024-11-18 12:06:12.682336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.136 [2024-11-18 12:06:12.682371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.136 qpair failed and we were unable to recover it. 00:37:47.136 [2024-11-18 12:06:12.682520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.136 [2024-11-18 12:06:12.682565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.136 qpair failed and we were unable to recover it. 
00:37:47.136 [2024-11-18 12:06:12.682673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.136 [2024-11-18 12:06:12.682707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.136 qpair failed and we were unable to recover it. 00:37:47.136 [2024-11-18 12:06:12.682824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.136 [2024-11-18 12:06:12.682859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.136 qpair failed and we were unable to recover it. 00:37:47.136 [2024-11-18 12:06:12.682968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.136 [2024-11-18 12:06:12.683002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.136 qpair failed and we were unable to recover it. 00:37:47.136 [2024-11-18 12:06:12.683125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.136 [2024-11-18 12:06:12.683173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.136 qpair failed and we were unable to recover it. 00:37:47.136 [2024-11-18 12:06:12.683295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.136 [2024-11-18 12:06:12.683331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.136 qpair failed and we were unable to recover it. 
00:37:47.136 [2024-11-18 12:06:12.683509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.136 [2024-11-18 12:06:12.683545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.136 qpair failed and we were unable to recover it. 00:37:47.136 [2024-11-18 12:06:12.683653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.136 [2024-11-18 12:06:12.683687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.136 qpair failed and we were unable to recover it. 00:37:47.136 [2024-11-18 12:06:12.683834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.136 [2024-11-18 12:06:12.683867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.136 qpair failed and we were unable to recover it. 00:37:47.136 [2024-11-18 12:06:12.684003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.136 [2024-11-18 12:06:12.684037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.136 qpair failed and we were unable to recover it. 00:37:47.136 [2024-11-18 12:06:12.684136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.136 [2024-11-18 12:06:12.684170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.136 qpair failed and we were unable to recover it. 
00:37:47.136 [2024-11-18 12:06:12.684310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.136 [2024-11-18 12:06:12.684349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.136 qpair failed and we were unable to recover it. 00:37:47.136 [2024-11-18 12:06:12.684519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.136 [2024-11-18 12:06:12.684577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.136 qpair failed and we were unable to recover it. 00:37:47.136 [2024-11-18 12:06:12.684700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.136 [2024-11-18 12:06:12.684737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.136 qpair failed and we were unable to recover it. 00:37:47.136 [2024-11-18 12:06:12.684854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.136 [2024-11-18 12:06:12.684890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.136 qpair failed and we were unable to recover it. 00:37:47.136 [2024-11-18 12:06:12.685035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.136 [2024-11-18 12:06:12.685070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.136 qpair failed and we were unable to recover it. 
00:37:47.136 [2024-11-18 12:06:12.685240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.136 [2024-11-18 12:06:12.685274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.136 qpair failed and we were unable to recover it. 00:37:47.136 [2024-11-18 12:06:12.685412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.136 [2024-11-18 12:06:12.685446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.136 qpair failed and we were unable to recover it. 00:37:47.136 [2024-11-18 12:06:12.685618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.136 [2024-11-18 12:06:12.685667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.136 qpair failed and we were unable to recover it. 00:37:47.136 [2024-11-18 12:06:12.685789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.136 [2024-11-18 12:06:12.685834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.136 qpair failed and we were unable to recover it. 00:37:47.136 [2024-11-18 12:06:12.685940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.136 [2024-11-18 12:06:12.685974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.136 qpair failed and we were unable to recover it. 
00:37:47.136 [2024-11-18 12:06:12.686108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.136 [2024-11-18 12:06:12.686143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.136 qpair failed and we were unable to recover it. 00:37:47.136 [2024-11-18 12:06:12.686261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.136 [2024-11-18 12:06:12.686300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.136 qpair failed and we were unable to recover it. 00:37:47.136 [2024-11-18 12:06:12.686423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.136 [2024-11-18 12:06:12.686477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.136 qpair failed and we were unable to recover it. 00:37:47.136 [2024-11-18 12:06:12.686599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.136 [2024-11-18 12:06:12.686637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.136 qpair failed and we were unable to recover it. 00:37:47.136 [2024-11-18 12:06:12.686769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.136 [2024-11-18 12:06:12.686803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.137 qpair failed and we were unable to recover it. 
00:37:47.137 [2024-11-18 12:06:12.686945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.137 [2024-11-18 12:06:12.686995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.137 qpair failed and we were unable to recover it. 00:37:47.137 [2024-11-18 12:06:12.687110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.137 [2024-11-18 12:06:12.687145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.137 qpair failed and we were unable to recover it. 00:37:47.137 [2024-11-18 12:06:12.687308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.137 [2024-11-18 12:06:12.687343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.137 qpair failed and we were unable to recover it. 00:37:47.137 [2024-11-18 12:06:12.687447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.137 [2024-11-18 12:06:12.687484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.137 qpair failed and we were unable to recover it. 00:37:47.137 [2024-11-18 12:06:12.687627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.137 [2024-11-18 12:06:12.687661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.137 qpair failed and we were unable to recover it. 
00:37:47.137 [2024-11-18 12:06:12.687790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.137 [2024-11-18 12:06:12.687825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.137 qpair failed and we were unable to recover it. 00:37:47.137 [2024-11-18 12:06:12.687957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.137 [2024-11-18 12:06:12.687991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.137 qpair failed and we were unable to recover it. 00:37:47.137 [2024-11-18 12:06:12.688154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.137 [2024-11-18 12:06:12.688187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.137 qpair failed and we were unable to recover it. 00:37:47.137 [2024-11-18 12:06:12.688292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.137 [2024-11-18 12:06:12.688327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.137 qpair failed and we were unable to recover it. 00:37:47.137 [2024-11-18 12:06:12.688459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.137 [2024-11-18 12:06:12.688505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.137 qpair failed and we were unable to recover it. 
00:37:47.137 [2024-11-18 12:06:12.688608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.137 [2024-11-18 12:06:12.688643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.137 qpair failed and we were unable to recover it. 00:37:47.137 [2024-11-18 12:06:12.688820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.137 [2024-11-18 12:06:12.688868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.137 qpair failed and we were unable to recover it. 00:37:47.137 [2024-11-18 12:06:12.689012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.137 [2024-11-18 12:06:12.689048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.137 qpair failed and we were unable to recover it. 00:37:47.137 [2024-11-18 12:06:12.689208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.137 [2024-11-18 12:06:12.689256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.137 qpair failed and we were unable to recover it. 00:37:47.137 [2024-11-18 12:06:12.689395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.137 [2024-11-18 12:06:12.689430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.137 qpair failed and we were unable to recover it. 
00:37:47.137 [2024-11-18 12:06:12.689563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.137 [2024-11-18 12:06:12.689598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.137 qpair failed and we were unable to recover it. 00:37:47.137 [2024-11-18 12:06:12.689733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.137 [2024-11-18 12:06:12.689767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.137 qpair failed and we were unable to recover it. 00:37:47.137 [2024-11-18 12:06:12.689884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.137 [2024-11-18 12:06:12.689917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.137 qpair failed and we were unable to recover it. 00:37:47.137 [2024-11-18 12:06:12.690076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.137 [2024-11-18 12:06:12.690110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.137 qpair failed and we were unable to recover it. 00:37:47.137 [2024-11-18 12:06:12.690244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.137 [2024-11-18 12:06:12.690277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.137 qpair failed and we were unable to recover it. 
00:37:47.137 [2024-11-18 12:06:12.690414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.137 [2024-11-18 12:06:12.690450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.137 qpair failed and we were unable to recover it. 00:37:47.137 [2024-11-18 12:06:12.690599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.137 [2024-11-18 12:06:12.690637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.137 qpair failed and we were unable to recover it. 00:37:47.137 [2024-11-18 12:06:12.690751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.137 [2024-11-18 12:06:12.690786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.137 qpair failed and we were unable to recover it. 00:37:47.137 [2024-11-18 12:06:12.690884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.137 [2024-11-18 12:06:12.690919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.137 qpair failed and we were unable to recover it. 00:37:47.137 [2024-11-18 12:06:12.691055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.137 [2024-11-18 12:06:12.691090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.137 qpair failed and we were unable to recover it. 
00:37:47.137 [2024-11-18 12:06:12.691210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.137 [2024-11-18 12:06:12.691258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.137 qpair failed and we were unable to recover it. 00:37:47.137 [2024-11-18 12:06:12.691380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.137 [2024-11-18 12:06:12.691421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.137 qpair failed and we were unable to recover it. 00:37:47.137 [2024-11-18 12:06:12.691536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.137 [2024-11-18 12:06:12.691570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.137 qpair failed and we were unable to recover it. 00:37:47.137 [2024-11-18 12:06:12.691683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.137 [2024-11-18 12:06:12.691717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.137 qpair failed and we were unable to recover it. 00:37:47.137 [2024-11-18 12:06:12.691856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.137 [2024-11-18 12:06:12.691889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.137 qpair failed and we were unable to recover it. 
00:37:47.137 [2024-11-18 12:06:12.692018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.137 [2024-11-18 12:06:12.692052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.137 qpair failed and we were unable to recover it. 00:37:47.137 [2024-11-18 12:06:12.692180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.137 [2024-11-18 12:06:12.692214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.137 qpair failed and we were unable to recover it. 00:37:47.137 [2024-11-18 12:06:12.692348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.137 [2024-11-18 12:06:12.692381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.137 qpair failed and we were unable to recover it. 00:37:47.137 [2024-11-18 12:06:12.692524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.137 [2024-11-18 12:06:12.692561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.137 qpair failed and we were unable to recover it. 00:37:47.137 [2024-11-18 12:06:12.692698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.137 [2024-11-18 12:06:12.692734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.137 qpair failed and we were unable to recover it. 
00:37:47.137 [2024-11-18 12:06:12.692868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.137 [2024-11-18 12:06:12.692902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.137 qpair failed and we were unable to recover it. 00:37:47.137 [2024-11-18 12:06:12.693049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.137 [2024-11-18 12:06:12.693083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.137 qpair failed and we were unable to recover it. 00:37:47.138 [2024-11-18 12:06:12.693190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.138 [2024-11-18 12:06:12.693225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.138 qpair failed and we were unable to recover it. 00:37:47.138 [2024-11-18 12:06:12.693382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.138 [2024-11-18 12:06:12.693431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.138 qpair failed and we were unable to recover it. 00:37:47.138 [2024-11-18 12:06:12.693580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.138 [2024-11-18 12:06:12.693615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.138 qpair failed and we were unable to recover it. 
00:37:47.138 [2024-11-18 12:06:12.693755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.138 [2024-11-18 12:06:12.693789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.138 qpair failed and we were unable to recover it. 00:37:47.138 [2024-11-18 12:06:12.693958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.138 [2024-11-18 12:06:12.693992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.138 qpair failed and we were unable to recover it. 00:37:47.138 [2024-11-18 12:06:12.694124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.138 [2024-11-18 12:06:12.694158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.138 qpair failed and we were unable to recover it. 00:37:47.138 [2024-11-18 12:06:12.694325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.138 [2024-11-18 12:06:12.694361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.138 qpair failed and we were unable to recover it. 00:37:47.138 [2024-11-18 12:06:12.694501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.138 [2024-11-18 12:06:12.694537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.138 qpair failed and we were unable to recover it. 
00:37:47.138 [2024-11-18 12:06:12.694644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:47.138 [2024-11-18 12:06:12.694678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:47.138 qpair failed and we were unable to recover it.
00:37:47.138 [2024-11-18 12:06:12.694814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:47.138 [2024-11-18 12:06:12.694848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:47.138 qpair failed and we were unable to recover it.
00:37:47.138 [2024-11-18 12:06:12.694958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:47.138 [2024-11-18 12:06:12.694992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:47.138 qpair failed and we were unable to recover it.
00:37:47.138 [2024-11-18 12:06:12.695105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:47.138 [2024-11-18 12:06:12.695139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:47.138 qpair failed and we were unable to recover it.
00:37:47.138 [2024-11-18 12:06:12.695275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:47.138 [2024-11-18 12:06:12.695310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:47.138 qpair failed and we were unable to recover it.
00:37:47.138 [2024-11-18 12:06:12.695424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:47.138 [2024-11-18 12:06:12.695460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:47.138 qpair failed and we were unable to recover it.
00:37:47.138 [2024-11-18 12:06:12.695597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:47.138 [2024-11-18 12:06:12.695645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:47.138 qpair failed and we were unable to recover it.
00:37:47.138 [2024-11-18 12:06:12.695761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:47.138 [2024-11-18 12:06:12.695795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:47.138 qpair failed and we were unable to recover it.
00:37:47.138 [2024-11-18 12:06:12.695938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:47.138 [2024-11-18 12:06:12.695972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:47.138 qpair failed and we were unable to recover it.
00:37:47.138 [2024-11-18 12:06:12.696079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:47.138 [2024-11-18 12:06:12.696113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:47.138 qpair failed and we were unable to recover it.
00:37:47.138 [2024-11-18 12:06:12.696249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:47.138 [2024-11-18 12:06:12.696282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:47.138 qpair failed and we were unable to recover it.
00:37:47.138 [2024-11-18 12:06:12.696426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:47.138 [2024-11-18 12:06:12.696474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:47.138 qpair failed and we were unable to recover it.
00:37:47.138 [2024-11-18 12:06:12.696616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:47.138 [2024-11-18 12:06:12.696665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:47.138 qpair failed and we were unable to recover it.
00:37:47.138 [2024-11-18 12:06:12.696839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:47.138 [2024-11-18 12:06:12.696876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:47.138 qpair failed and we were unable to recover it.
00:37:47.138 [2024-11-18 12:06:12.697011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:47.138 [2024-11-18 12:06:12.697046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:47.138 qpair failed and we were unable to recover it.
00:37:47.138 [2024-11-18 12:06:12.697212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:47.138 [2024-11-18 12:06:12.697246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:47.138 qpair failed and we were unable to recover it.
00:37:47.138 [2024-11-18 12:06:12.697363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:47.138 [2024-11-18 12:06:12.697398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:47.138 qpair failed and we were unable to recover it.
00:37:47.138 [2024-11-18 12:06:12.697557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:47.138 [2024-11-18 12:06:12.697605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:47.138 qpair failed and we were unable to recover it.
00:37:47.138 [2024-11-18 12:06:12.697739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:47.138 [2024-11-18 12:06:12.697798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:47.138 qpair failed and we were unable to recover it.
00:37:47.138 [2024-11-18 12:06:12.697944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:47.138 [2024-11-18 12:06:12.697980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:47.138 qpair failed and we were unable to recover it.
00:37:47.138 [2024-11-18 12:06:12.698082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:47.138 [2024-11-18 12:06:12.698116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:47.138 qpair failed and we were unable to recover it.
00:37:47.138 [2024-11-18 12:06:12.698222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:47.138 [2024-11-18 12:06:12.698261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:47.138 qpair failed and we were unable to recover it.
00:37:47.138 [2024-11-18 12:06:12.698445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:47.138 [2024-11-18 12:06:12.698502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:47.138 qpair failed and we were unable to recover it.
00:37:47.138 [2024-11-18 12:06:12.698621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:47.138 [2024-11-18 12:06:12.698669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:47.138 qpair failed and we were unable to recover it.
00:37:47.138 [2024-11-18 12:06:12.698793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:47.138 [2024-11-18 12:06:12.698830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:47.138 qpair failed and we were unable to recover it.
00:37:47.138 [2024-11-18 12:06:12.698984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:47.138 [2024-11-18 12:06:12.699020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:47.138 qpair failed and we were unable to recover it.
00:37:47.138 [2024-11-18 12:06:12.699123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:47.138 [2024-11-18 12:06:12.699169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:47.138 qpair failed and we were unable to recover it.
00:37:47.138 [2024-11-18 12:06:12.699308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:47.138 [2024-11-18 12:06:12.699342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:47.138 qpair failed and we were unable to recover it.
00:37:47.138 [2024-11-18 12:06:12.699450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:47.138 [2024-11-18 12:06:12.699483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:47.138 qpair failed and we were unable to recover it.
00:37:47.138 [2024-11-18 12:06:12.699614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:47.139 [2024-11-18 12:06:12.699662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:47.139 qpair failed and we were unable to recover it.
00:37:47.139 [2024-11-18 12:06:12.699785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:47.139 [2024-11-18 12:06:12.699822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:47.139 qpair failed and we were unable to recover it.
00:37:47.139 [2024-11-18 12:06:12.699961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:47.139 [2024-11-18 12:06:12.699996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:47.139 qpair failed and we were unable to recover it.
00:37:47.139 [2024-11-18 12:06:12.700095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:47.139 [2024-11-18 12:06:12.700129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:47.139 qpair failed and we were unable to recover it.
00:37:47.139 [2024-11-18 12:06:12.700264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:47.139 [2024-11-18 12:06:12.700299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:47.139 qpair failed and we were unable to recover it.
00:37:47.139 [2024-11-18 12:06:12.700419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:47.139 [2024-11-18 12:06:12.700468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:47.139 qpair failed and we were unable to recover it.
00:37:47.139 [2024-11-18 12:06:12.700610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:47.139 [2024-11-18 12:06:12.700647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:47.139 qpair failed and we were unable to recover it.
00:37:47.139 [2024-11-18 12:06:12.700787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:47.139 [2024-11-18 12:06:12.700825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:47.139 qpair failed and we were unable to recover it.
00:37:47.139 [2024-11-18 12:06:12.700934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:47.139 [2024-11-18 12:06:12.700971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:47.139 qpair failed and we were unable to recover it.
00:37:47.139 [2024-11-18 12:06:12.701109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:47.139 [2024-11-18 12:06:12.701145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:47.139 qpair failed and we were unable to recover it.
00:37:47.139 [2024-11-18 12:06:12.701279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:47.139 [2024-11-18 12:06:12.701312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:47.139 qpair failed and we were unable to recover it.
00:37:47.139 [2024-11-18 12:06:12.701440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:47.139 [2024-11-18 12:06:12.701473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:47.139 qpair failed and we were unable to recover it.
00:37:47.139 [2024-11-18 12:06:12.701650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:47.139 [2024-11-18 12:06:12.701684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:47.139 qpair failed and we were unable to recover it.
00:37:47.139 [2024-11-18 12:06:12.701784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:47.139 [2024-11-18 12:06:12.701818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:47.139 qpair failed and we were unable to recover it.
00:37:47.139 [2024-11-18 12:06:12.701959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:47.139 [2024-11-18 12:06:12.701995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:47.139 qpair failed and we were unable to recover it.
00:37:47.139 [2024-11-18 12:06:12.702129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:47.139 [2024-11-18 12:06:12.702164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:47.139 qpair failed and we were unable to recover it.
00:37:47.139 [2024-11-18 12:06:12.702268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:47.139 [2024-11-18 12:06:12.702302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:47.139 qpair failed and we were unable to recover it.
00:37:47.139 [2024-11-18 12:06:12.702440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:47.139 [2024-11-18 12:06:12.702474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:47.139 qpair failed and we were unable to recover it.
00:37:47.139 [2024-11-18 12:06:12.702628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:47.139 [2024-11-18 12:06:12.702663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:47.139 qpair failed and we were unable to recover it.
00:37:47.139 [2024-11-18 12:06:12.702781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:47.139 [2024-11-18 12:06:12.702826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:47.139 qpair failed and we were unable to recover it.
00:37:47.139 [2024-11-18 12:06:12.702943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:47.139 [2024-11-18 12:06:12.702977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:47.139 qpair failed and we were unable to recover it.
00:37:47.139 [2024-11-18 12:06:12.703123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:47.139 [2024-11-18 12:06:12.703157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:47.139 qpair failed and we were unable to recover it.
00:37:47.139 [2024-11-18 12:06:12.703264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:47.139 [2024-11-18 12:06:12.703298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:47.139 qpair failed and we were unable to recover it.
00:37:47.139 [2024-11-18 12:06:12.703461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:47.139 [2024-11-18 12:06:12.703506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:47.139 qpair failed and we were unable to recover it.
00:37:47.139 [2024-11-18 12:06:12.703657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:47.139 [2024-11-18 12:06:12.703705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:47.139 qpair failed and we were unable to recover it.
00:37:47.139 [2024-11-18 12:06:12.703853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:47.139 [2024-11-18 12:06:12.703888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:47.139 qpair failed and we were unable to recover it.
00:37:47.139 [2024-11-18 12:06:12.704055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:47.139 [2024-11-18 12:06:12.704089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:47.139 qpair failed and we were unable to recover it.
00:37:47.139 [2024-11-18 12:06:12.704221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:47.139 [2024-11-18 12:06:12.704255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:47.139 qpair failed and we were unable to recover it.
00:37:47.139 [2024-11-18 12:06:12.704368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:47.139 [2024-11-18 12:06:12.704401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:47.139 qpair failed and we were unable to recover it.
00:37:47.139 [2024-11-18 12:06:12.704559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:47.139 [2024-11-18 12:06:12.704607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:47.139 qpair failed and we were unable to recover it.
00:37:47.139 [2024-11-18 12:06:12.704764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:47.139 [2024-11-18 12:06:12.704823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:47.139 qpair failed and we were unable to recover it.
00:37:47.139 [2024-11-18 12:06:12.704936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:47.139 [2024-11-18 12:06:12.704972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:47.139 qpair failed and we were unable to recover it.
00:37:47.139 [2024-11-18 12:06:12.705112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:47.139 [2024-11-18 12:06:12.705153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:47.139 qpair failed and we were unable to recover it.
00:37:47.139 [2024-11-18 12:06:12.705253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:47.139 [2024-11-18 12:06:12.705287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:47.139 qpair failed and we were unable to recover it.
00:37:47.139 [2024-11-18 12:06:12.705388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:47.139 [2024-11-18 12:06:12.705422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:47.139 qpair failed and we were unable to recover it.
00:37:47.139 [2024-11-18 12:06:12.705535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:47.139 [2024-11-18 12:06:12.705571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:47.139 qpair failed and we were unable to recover it.
00:37:47.139 [2024-11-18 12:06:12.705707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:47.139 [2024-11-18 12:06:12.705742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:47.139 qpair failed and we were unable to recover it.
00:37:47.139 [2024-11-18 12:06:12.705857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:47.139 [2024-11-18 12:06:12.705897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:47.139 qpair failed and we were unable to recover it.
00:37:47.140 [2024-11-18 12:06:12.706066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:47.140 [2024-11-18 12:06:12.706101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:47.140 qpair failed and we were unable to recover it.
00:37:47.140 [2024-11-18 12:06:12.706251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:47.140 [2024-11-18 12:06:12.706300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:47.140 qpair failed and we were unable to recover it.
00:37:47.140 [2024-11-18 12:06:12.706438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:47.140 [2024-11-18 12:06:12.706473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:47.140 qpair failed and we were unable to recover it.
00:37:47.140 [2024-11-18 12:06:12.706597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:47.140 [2024-11-18 12:06:12.706633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:47.140 qpair failed and we were unable to recover it.
00:37:47.140 [2024-11-18 12:06:12.706754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:47.140 [2024-11-18 12:06:12.706796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:47.140 qpair failed and we were unable to recover it.
00:37:47.140 [2024-11-18 12:06:12.706924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:47.140 [2024-11-18 12:06:12.706957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:47.140 qpair failed and we were unable to recover it.
00:37:47.140 [2024-11-18 12:06:12.707064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:47.140 [2024-11-18 12:06:12.707098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:47.140 qpair failed and we were unable to recover it.
00:37:47.140 [2024-11-18 12:06:12.707218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:47.140 [2024-11-18 12:06:12.707254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:47.140 qpair failed and we were unable to recover it.
00:37:47.140 [2024-11-18 12:06:12.707362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:47.140 [2024-11-18 12:06:12.707397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:47.140 qpair failed and we were unable to recover it.
00:37:47.140 [2024-11-18 12:06:12.707507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:47.140 [2024-11-18 12:06:12.707542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:47.140 qpair failed and we were unable to recover it.
00:37:47.140 [2024-11-18 12:06:12.707675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:47.140 [2024-11-18 12:06:12.707709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:47.140 qpair failed and we were unable to recover it.
00:37:47.140 [2024-11-18 12:06:12.707850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:47.140 [2024-11-18 12:06:12.707884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:47.140 qpair failed and we were unable to recover it.
00:37:47.140 [2024-11-18 12:06:12.708035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:47.140 [2024-11-18 12:06:12.708069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:47.140 qpair failed and we were unable to recover it.
00:37:47.140 [2024-11-18 12:06:12.708207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:47.140 [2024-11-18 12:06:12.708242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:47.140 qpair failed and we were unable to recover it.
00:37:47.140 [2024-11-18 12:06:12.708391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:47.140 [2024-11-18 12:06:12.708438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:47.140 qpair failed and we were unable to recover it.
00:37:47.140 [2024-11-18 12:06:12.708607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:47.140 [2024-11-18 12:06:12.708656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:47.140 qpair failed and we were unable to recover it.
00:37:47.140 [2024-11-18 12:06:12.708828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:47.140 [2024-11-18 12:06:12.708864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:47.140 qpair failed and we were unable to recover it.
00:37:47.140 [2024-11-18 12:06:12.708975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:47.140 [2024-11-18 12:06:12.709011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:47.140 qpair failed and we were unable to recover it.
00:37:47.140 [2024-11-18 12:06:12.709183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:47.140 [2024-11-18 12:06:12.709218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:47.140 qpair failed and we were unable to recover it.
00:37:47.140 [2024-11-18 12:06:12.709329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:47.140 [2024-11-18 12:06:12.709376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:47.140 qpair failed and we were unable to recover it.
00:37:47.140 [2024-11-18 12:06:12.709528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:47.140 [2024-11-18 12:06:12.709576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:47.140 qpair failed and we were unable to recover it.
00:37:47.140 [2024-11-18 12:06:12.709734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:47.140 [2024-11-18 12:06:12.709781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:47.140 qpair failed and we were unable to recover it.
00:37:47.140 [2024-11-18 12:06:12.709903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:47.140 [2024-11-18 12:06:12.709940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:47.140 qpair failed and we were unable to recover it.
00:37:47.140 [2024-11-18 12:06:12.710075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:47.140 [2024-11-18 12:06:12.710109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:47.140 qpair failed and we were unable to recover it.
00:37:47.140 [2024-11-18 12:06:12.710224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:47.140 [2024-11-18 12:06:12.710259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:47.140 qpair failed and we were unable to recover it.
00:37:47.140 [2024-11-18 12:06:12.710365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:47.140 [2024-11-18 12:06:12.710399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:47.140 qpair failed and we were unable to recover it.
00:37:47.140 [2024-11-18 12:06:12.710537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:47.140 [2024-11-18 12:06:12.710572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:47.140 qpair failed and we were unable to recover it.
00:37:47.140 [2024-11-18 12:06:12.710681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:47.140 [2024-11-18 12:06:12.710715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:47.140 qpair failed and we were unable to recover it.
00:37:47.140 [2024-11-18 12:06:12.710881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.140 [2024-11-18 12:06:12.710915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.140 qpair failed and we were unable to recover it. 00:37:47.140 [2024-11-18 12:06:12.711018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.140 [2024-11-18 12:06:12.711051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.140 qpair failed and we were unable to recover it. 00:37:47.140 [2024-11-18 12:06:12.711162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.140 [2024-11-18 12:06:12.711200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.140 qpair failed and we were unable to recover it. 00:37:47.140 [2024-11-18 12:06:12.711358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.140 [2024-11-18 12:06:12.711406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.140 qpair failed and we were unable to recover it. 00:37:47.141 [2024-11-18 12:06:12.711600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.141 [2024-11-18 12:06:12.711649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.141 qpair failed and we were unable to recover it. 
00:37:47.141 [2024-11-18 12:06:12.711758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.141 [2024-11-18 12:06:12.711795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.141 qpair failed and we were unable to recover it. 00:37:47.141 [2024-11-18 12:06:12.711961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.141 [2024-11-18 12:06:12.712002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.141 qpair failed and we were unable to recover it. 00:37:47.141 [2024-11-18 12:06:12.712119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.141 [2024-11-18 12:06:12.712153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.141 qpair failed and we were unable to recover it. 00:37:47.141 [2024-11-18 12:06:12.712291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.141 [2024-11-18 12:06:12.712325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.141 qpair failed and we were unable to recover it. 00:37:47.141 [2024-11-18 12:06:12.712482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.141 [2024-11-18 12:06:12.712541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.141 qpair failed and we were unable to recover it. 
00:37:47.141 [2024-11-18 12:06:12.712687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.141 [2024-11-18 12:06:12.712723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.141 qpair failed and we were unable to recover it. 00:37:47.141 [2024-11-18 12:06:12.712884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.141 [2024-11-18 12:06:12.712918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.141 qpair failed and we were unable to recover it. 00:37:47.141 [2024-11-18 12:06:12.713019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.141 [2024-11-18 12:06:12.713053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.141 qpair failed and we were unable to recover it. 00:37:47.141 [2024-11-18 12:06:12.713167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.141 [2024-11-18 12:06:12.713201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.141 qpair failed and we were unable to recover it. 00:37:47.141 [2024-11-18 12:06:12.713309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.141 [2024-11-18 12:06:12.713343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.141 qpair failed and we were unable to recover it. 
00:37:47.141 [2024-11-18 12:06:12.713451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.141 [2024-11-18 12:06:12.713486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.141 qpair failed and we were unable to recover it. 00:37:47.141 [2024-11-18 12:06:12.713605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.141 [2024-11-18 12:06:12.713640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.141 qpair failed and we were unable to recover it. 00:37:47.141 [2024-11-18 12:06:12.713794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.141 [2024-11-18 12:06:12.713842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.141 qpair failed and we were unable to recover it. 00:37:47.141 [2024-11-18 12:06:12.713989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.141 [2024-11-18 12:06:12.714025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.141 qpair failed and we were unable to recover it. 00:37:47.141 [2024-11-18 12:06:12.714128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.141 [2024-11-18 12:06:12.714162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.141 qpair failed and we were unable to recover it. 
00:37:47.141 [2024-11-18 12:06:12.714330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.141 [2024-11-18 12:06:12.714364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.141 qpair failed and we were unable to recover it. 00:37:47.141 [2024-11-18 12:06:12.714478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.141 [2024-11-18 12:06:12.714537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.141 qpair failed and we were unable to recover it. 00:37:47.141 [2024-11-18 12:06:12.714664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.141 [2024-11-18 12:06:12.714712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.141 qpair failed and we were unable to recover it. 00:37:47.141 [2024-11-18 12:06:12.714852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.141 [2024-11-18 12:06:12.714887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.141 qpair failed and we were unable to recover it. 00:37:47.141 [2024-11-18 12:06:12.715047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.141 [2024-11-18 12:06:12.715081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.141 qpair failed and we were unable to recover it. 
00:37:47.141 [2024-11-18 12:06:12.715192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.141 [2024-11-18 12:06:12.715227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.141 qpair failed and we were unable to recover it. 00:37:47.141 [2024-11-18 12:06:12.715360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.141 [2024-11-18 12:06:12.715394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.141 qpair failed and we were unable to recover it. 00:37:47.141 [2024-11-18 12:06:12.715527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.141 [2024-11-18 12:06:12.715562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.141 qpair failed and we were unable to recover it. 00:37:47.141 [2024-11-18 12:06:12.715667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.141 [2024-11-18 12:06:12.715702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.141 qpair failed and we were unable to recover it. 00:37:47.141 [2024-11-18 12:06:12.715836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.141 [2024-11-18 12:06:12.715869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.141 qpair failed and we were unable to recover it. 
00:37:47.141 [2024-11-18 12:06:12.716006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.141 [2024-11-18 12:06:12.716040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.141 qpair failed and we were unable to recover it. 00:37:47.141 [2024-11-18 12:06:12.716146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.141 [2024-11-18 12:06:12.716179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.141 qpair failed and we were unable to recover it. 00:37:47.141 [2024-11-18 12:06:12.716309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.141 [2024-11-18 12:06:12.716343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.141 qpair failed and we were unable to recover it. 00:37:47.141 [2024-11-18 12:06:12.716496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.141 [2024-11-18 12:06:12.716545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.141 qpair failed and we were unable to recover it. 00:37:47.141 [2024-11-18 12:06:12.716687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.141 [2024-11-18 12:06:12.716724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.141 qpair failed and we were unable to recover it. 
00:37:47.141 [2024-11-18 12:06:12.716868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.141 [2024-11-18 12:06:12.716903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.141 qpair failed and we were unable to recover it. 00:37:47.141 [2024-11-18 12:06:12.717011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.141 [2024-11-18 12:06:12.717045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.141 qpair failed and we were unable to recover it. 00:37:47.141 [2024-11-18 12:06:12.717177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.141 [2024-11-18 12:06:12.717211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.141 qpair failed and we were unable to recover it. 00:37:47.141 [2024-11-18 12:06:12.717319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.141 [2024-11-18 12:06:12.717354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.141 qpair failed and we were unable to recover it. 00:37:47.141 [2024-11-18 12:06:12.717498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.141 [2024-11-18 12:06:12.717533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.141 qpair failed and we were unable to recover it. 
00:37:47.141 [2024-11-18 12:06:12.717672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.141 [2024-11-18 12:06:12.717707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.141 qpair failed and we were unable to recover it. 00:37:47.141 [2024-11-18 12:06:12.717816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.142 [2024-11-18 12:06:12.717850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.142 qpair failed and we were unable to recover it. 00:37:47.142 [2024-11-18 12:06:12.717994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.142 [2024-11-18 12:06:12.718027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.142 qpair failed and we were unable to recover it. 00:37:47.142 [2024-11-18 12:06:12.718145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.142 [2024-11-18 12:06:12.718179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.142 qpair failed and we were unable to recover it. 00:37:47.142 [2024-11-18 12:06:12.718312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.142 [2024-11-18 12:06:12.718348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.142 qpair failed and we were unable to recover it. 
00:37:47.142 [2024-11-18 12:06:12.718451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.142 [2024-11-18 12:06:12.718485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.142 qpair failed and we were unable to recover it. 00:37:47.142 [2024-11-18 12:06:12.718612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.142 [2024-11-18 12:06:12.718646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.142 qpair failed and we were unable to recover it. 00:37:47.142 [2024-11-18 12:06:12.718760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.142 [2024-11-18 12:06:12.718793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.142 qpair failed and we were unable to recover it. 00:37:47.142 [2024-11-18 12:06:12.718928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.142 [2024-11-18 12:06:12.718962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.142 qpair failed and we were unable to recover it. 00:37:47.142 [2024-11-18 12:06:12.719098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.142 [2024-11-18 12:06:12.719133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.142 qpair failed and we were unable to recover it. 
00:37:47.142 [2024-11-18 12:06:12.719245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.142 [2024-11-18 12:06:12.719279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.142 qpair failed and we were unable to recover it. 00:37:47.142 [2024-11-18 12:06:12.719429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.142 [2024-11-18 12:06:12.719477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.142 qpair failed and we were unable to recover it. 00:37:47.142 [2024-11-18 12:06:12.719612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.142 [2024-11-18 12:06:12.719660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.142 qpair failed and we were unable to recover it. 00:37:47.142 [2024-11-18 12:06:12.719767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.142 [2024-11-18 12:06:12.719802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.142 qpair failed and we were unable to recover it. 00:37:47.142 [2024-11-18 12:06:12.719914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.142 [2024-11-18 12:06:12.719948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.142 qpair failed and we were unable to recover it. 
00:37:47.142 [2024-11-18 12:06:12.720060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.142 [2024-11-18 12:06:12.720094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.142 qpair failed and we were unable to recover it. 00:37:47.142 [2024-11-18 12:06:12.720198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.142 [2024-11-18 12:06:12.720231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.142 qpair failed and we were unable to recover it. 00:37:47.142 [2024-11-18 12:06:12.720336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.142 [2024-11-18 12:06:12.720370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.142 qpair failed and we were unable to recover it. 00:37:47.142 [2024-11-18 12:06:12.720484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.142 [2024-11-18 12:06:12.720526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.142 qpair failed and we were unable to recover it. 00:37:47.142 [2024-11-18 12:06:12.720628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.142 [2024-11-18 12:06:12.720661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.142 qpair failed and we were unable to recover it. 
00:37:47.142 [2024-11-18 12:06:12.720837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.142 [2024-11-18 12:06:12.720871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.142 qpair failed and we were unable to recover it. 00:37:47.142 [2024-11-18 12:06:12.721005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.142 [2024-11-18 12:06:12.721039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.142 qpair failed and we were unable to recover it. 00:37:47.142 [2024-11-18 12:06:12.721177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.142 [2024-11-18 12:06:12.721210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.142 qpair failed and we were unable to recover it. 00:37:47.142 [2024-11-18 12:06:12.721341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.142 [2024-11-18 12:06:12.721375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.142 qpair failed and we were unable to recover it. 00:37:47.142 [2024-11-18 12:06:12.721472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.142 [2024-11-18 12:06:12.721520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.142 qpair failed and we were unable to recover it. 
00:37:47.142 [2024-11-18 12:06:12.721648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.142 [2024-11-18 12:06:12.721682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.142 qpair failed and we were unable to recover it. 00:37:47.142 [2024-11-18 12:06:12.721816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.142 [2024-11-18 12:06:12.721854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.142 qpair failed and we were unable to recover it. 00:37:47.142 [2024-11-18 12:06:12.721999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.142 [2024-11-18 12:06:12.722033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.142 qpair failed and we were unable to recover it. 00:37:47.142 [2024-11-18 12:06:12.722165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.142 [2024-11-18 12:06:12.722198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.142 qpair failed and we were unable to recover it. 00:37:47.142 [2024-11-18 12:06:12.722305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.142 [2024-11-18 12:06:12.722339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.142 qpair failed and we were unable to recover it. 
00:37:47.142 [2024-11-18 12:06:12.722482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.142 [2024-11-18 12:06:12.722528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.142 qpair failed and we were unable to recover it. 00:37:47.142 [2024-11-18 12:06:12.722666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.142 [2024-11-18 12:06:12.722700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.142 qpair failed and we were unable to recover it. 00:37:47.142 [2024-11-18 12:06:12.722814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.142 [2024-11-18 12:06:12.722847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.142 qpair failed and we were unable to recover it. 00:37:47.142 [2024-11-18 12:06:12.722952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.142 [2024-11-18 12:06:12.722990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.142 qpair failed and we were unable to recover it. 00:37:47.142 [2024-11-18 12:06:12.723120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.142 [2024-11-18 12:06:12.723153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.142 qpair failed and we were unable to recover it. 
00:37:47.142 [2024-11-18 12:06:12.723287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.142 [2024-11-18 12:06:12.723321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.142 qpair failed and we were unable to recover it. 00:37:47.142 [2024-11-18 12:06:12.723433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.142 [2024-11-18 12:06:12.723467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.142 qpair failed and we were unable to recover it. 00:37:47.142 [2024-11-18 12:06:12.723631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.142 [2024-11-18 12:06:12.723680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.142 qpair failed and we were unable to recover it. 00:37:47.142 [2024-11-18 12:06:12.723806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.142 [2024-11-18 12:06:12.723842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.142 qpair failed and we were unable to recover it. 00:37:47.143 [2024-11-18 12:06:12.723956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.143 [2024-11-18 12:06:12.723989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.143 qpair failed and we were unable to recover it. 
00:37:47.143 [2024-11-18 12:06:12.724127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.143 [2024-11-18 12:06:12.724162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.143 qpair failed and we were unable to recover it. 00:37:47.143 [2024-11-18 12:06:12.724272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.143 [2024-11-18 12:06:12.724307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.143 qpair failed and we were unable to recover it. 00:37:47.143 [2024-11-18 12:06:12.724416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.143 [2024-11-18 12:06:12.724449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.143 qpair failed and we were unable to recover it. 00:37:47.143 [2024-11-18 12:06:12.724599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.143 [2024-11-18 12:06:12.724633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.143 qpair failed and we were unable to recover it. 00:37:47.143 [2024-11-18 12:06:12.724736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.143 [2024-11-18 12:06:12.724781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.143 qpair failed and we were unable to recover it. 
00:37:47.143 [2024-11-18 12:06:12.724923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.143 [2024-11-18 12:06:12.724957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.143 qpair failed and we were unable to recover it. 00:37:47.143 [2024-11-18 12:06:12.725094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.143 [2024-11-18 12:06:12.725128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.143 qpair failed and we were unable to recover it. 00:37:47.143 [2024-11-18 12:06:12.725298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.143 [2024-11-18 12:06:12.725332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.143 qpair failed and we were unable to recover it. 00:37:47.143 [2024-11-18 12:06:12.725443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.143 [2024-11-18 12:06:12.725485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.143 qpair failed and we were unable to recover it. 00:37:47.143 [2024-11-18 12:06:12.725632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.143 [2024-11-18 12:06:12.725665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.143 qpair failed and we were unable to recover it. 
00:37:47.143 [2024-11-18 12:06:12.725812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.143 [2024-11-18 12:06:12.725856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.143 qpair failed and we were unable to recover it. 00:37:47.143 [2024-11-18 12:06:12.725982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.143 [2024-11-18 12:06:12.726016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.143 qpair failed and we were unable to recover it. 00:37:47.143 [2024-11-18 12:06:12.726153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.143 [2024-11-18 12:06:12.726187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.143 qpair failed and we were unable to recover it. 00:37:47.143 [2024-11-18 12:06:12.726305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.143 [2024-11-18 12:06:12.726341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.143 qpair failed and we were unable to recover it. 00:37:47.143 [2024-11-18 12:06:12.726481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.143 [2024-11-18 12:06:12.726530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.143 qpair failed and we were unable to recover it. 
00:37:47.143 [2024-11-18 12:06:12.726630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.143 [2024-11-18 12:06:12.726664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.143 qpair failed and we were unable to recover it. 00:37:47.143 [2024-11-18 12:06:12.726777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.143 [2024-11-18 12:06:12.726811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.143 qpair failed and we were unable to recover it. 00:37:47.143 [2024-11-18 12:06:12.726942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.143 [2024-11-18 12:06:12.726976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.143 qpair failed and we were unable to recover it. 00:37:47.143 [2024-11-18 12:06:12.727080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.143 [2024-11-18 12:06:12.727114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.143 qpair failed and we were unable to recover it. 00:37:47.143 [2024-11-18 12:06:12.727216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.143 [2024-11-18 12:06:12.727249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.143 qpair failed and we were unable to recover it. 
00:37:47.143 [2024-11-18 12:06:12.727381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.143 [2024-11-18 12:06:12.727414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.143 qpair failed and we were unable to recover it. 00:37:47.143 [2024-11-18 12:06:12.727559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.143 [2024-11-18 12:06:12.727594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.143 qpair failed and we were unable to recover it. 00:37:47.143 [2024-11-18 12:06:12.727723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.143 [2024-11-18 12:06:12.727757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.143 qpair failed and we were unable to recover it. 00:37:47.143 [2024-11-18 12:06:12.727868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.143 [2024-11-18 12:06:12.727901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.143 qpair failed and we were unable to recover it. 00:37:47.143 [2024-11-18 12:06:12.728063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.143 [2024-11-18 12:06:12.728096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.143 qpair failed and we were unable to recover it. 
00:37:47.143 [2024-11-18 12:06:12.728235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.143 [2024-11-18 12:06:12.728268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.143 qpair failed and we were unable to recover it. 00:37:47.143 [2024-11-18 12:06:12.728378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.143 [2024-11-18 12:06:12.728411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.143 qpair failed and we were unable to recover it. 00:37:47.143 [2024-11-18 12:06:12.728528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.143 [2024-11-18 12:06:12.728562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.143 qpair failed and we were unable to recover it. 00:37:47.143 [2024-11-18 12:06:12.728672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.143 [2024-11-18 12:06:12.728705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.143 qpair failed and we were unable to recover it. 00:37:47.143 [2024-11-18 12:06:12.728840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.143 [2024-11-18 12:06:12.728873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.143 qpair failed and we were unable to recover it. 
00:37:47.143 [2024-11-18 12:06:12.729014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.143 [2024-11-18 12:06:12.729047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.143 qpair failed and we were unable to recover it. 00:37:47.143 [2024-11-18 12:06:12.729217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.143 [2024-11-18 12:06:12.729250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.143 qpair failed and we were unable to recover it. 00:37:47.143 [2024-11-18 12:06:12.729367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.143 [2024-11-18 12:06:12.729400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.143 qpair failed and we were unable to recover it. 00:37:47.143 [2024-11-18 12:06:12.729513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.143 [2024-11-18 12:06:12.729552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.143 qpair failed and we were unable to recover it. 00:37:47.143 [2024-11-18 12:06:12.729663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.143 [2024-11-18 12:06:12.729697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.143 qpair failed and we were unable to recover it. 00:37:47.143 [2024-11-18 12:06:12.729845] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:37:47.143 [2024-11-18 12:06:12.729885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.143 [2024-11-18 12:06:12.729894] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:47.144 [2024-11-18 12:06:12.729918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.144 [2024-11-18 12:06:12.729919] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:47.144 qpair failed and we were unable to recover it. 00:37:47.144 [2024-11-18 12:06:12.729945] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:47.144 [2024-11-18 12:06:12.729964] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:47.144 [2024-11-18 12:06:12.730082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.144 [2024-11-18 12:06:12.730126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.144 qpair failed and we were unable to recover it. 00:37:47.144 [2024-11-18 12:06:12.730227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.144 [2024-11-18 12:06:12.730262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.144 qpair failed and we were unable to recover it. 00:37:47.144 [2024-11-18 12:06:12.730400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.144 [2024-11-18 12:06:12.730434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.144 qpair failed and we were unable to recover it. 
00:37:47.144 [2024-11-18 12:06:12.730583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.144 [2024-11-18 12:06:12.730617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.144 qpair failed and we were unable to recover it. 00:37:47.144 [2024-11-18 12:06:12.730776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.144 [2024-11-18 12:06:12.730825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.144 qpair failed and we were unable to recover it. 00:37:47.144 [2024-11-18 12:06:12.730936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.144 [2024-11-18 12:06:12.730972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.144 qpair failed and we were unable to recover it. 00:37:47.144 [2024-11-18 12:06:12.731112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.144 [2024-11-18 12:06:12.731147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.144 qpair failed and we were unable to recover it. 00:37:47.144 [2024-11-18 12:06:12.731308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.144 [2024-11-18 12:06:12.731344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.144 qpair failed and we were unable to recover it. 
00:37:47.144 [2024-11-18 12:06:12.731471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.144 [2024-11-18 12:06:12.731512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.144 qpair failed and we were unable to recover it. 00:37:47.144 [2024-11-18 12:06:12.731672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.144 [2024-11-18 12:06:12.731720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.144 qpair failed and we were unable to recover it. 00:37:47.144 [2024-11-18 12:06:12.731899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.144 [2024-11-18 12:06:12.731934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.144 qpair failed and we were unable to recover it. 00:37:47.144 [2024-11-18 12:06:12.732072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.144 [2024-11-18 12:06:12.732106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.144 qpair failed and we were unable to recover it. 00:37:47.144 [2024-11-18 12:06:12.732215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.144 [2024-11-18 12:06:12.732249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.144 qpair failed and we were unable to recover it. 
00:37:47.144 [2024-11-18 12:06:12.732383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.144 [2024-11-18 12:06:12.732417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.144 qpair failed and we were unable to recover it. 00:37:47.144 [2024-11-18 12:06:12.732550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.144 [2024-11-18 12:06:12.732585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.144 qpair failed and we were unable to recover it. 00:37:47.144 [2024-11-18 12:06:12.732671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:37:47.144 [2024-11-18 12:06:12.732696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.144 [2024-11-18 12:06:12.732730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.144 qpair failed and we were unable to recover it. 00:37:47.144 [2024-11-18 12:06:12.732730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:37:47.144 [2024-11-18 12:06:12.732775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:37:47.144 [2024-11-18 12:06:12.732780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:37:47.144 [2024-11-18 12:06:12.732847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.144 [2024-11-18 12:06:12.732880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.144 qpair failed and we were unable to recover it. 
00:37:47.144 [2024-11-18 12:06:12.732997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.144 [2024-11-18 12:06:12.733034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.144 qpair failed and we were unable to recover it. 00:37:47.144 [2024-11-18 12:06:12.733158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.144 [2024-11-18 12:06:12.733193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.144 qpair failed and we were unable to recover it. 00:37:47.144 [2024-11-18 12:06:12.733307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.144 [2024-11-18 12:06:12.733341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.144 qpair failed and we were unable to recover it. 00:37:47.144 [2024-11-18 12:06:12.733465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.144 [2024-11-18 12:06:12.733523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.144 qpair failed and we were unable to recover it. 00:37:47.144 [2024-11-18 12:06:12.733671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.144 [2024-11-18 12:06:12.733707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.144 qpair failed and we were unable to recover it. 
00:37:47.144 [2024-11-18 12:06:12.733831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.144 [2024-11-18 12:06:12.733865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.144 qpair failed and we were unable to recover it. 00:37:47.144 [2024-11-18 12:06:12.733981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.144 [2024-11-18 12:06:12.734015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.144 qpair failed and we were unable to recover it. 00:37:47.144 [2024-11-18 12:06:12.734149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.144 [2024-11-18 12:06:12.734182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.144 qpair failed and we were unable to recover it. 00:37:47.144 [2024-11-18 12:06:12.734291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.144 [2024-11-18 12:06:12.734325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.144 qpair failed and we were unable to recover it. 00:37:47.144 [2024-11-18 12:06:12.734427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.144 [2024-11-18 12:06:12.734460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.144 qpair failed and we were unable to recover it. 
00:37:47.144 [2024-11-18 12:06:12.734592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.144 [2024-11-18 12:06:12.734640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.144 qpair failed and we were unable to recover it. 00:37:47.144 [2024-11-18 12:06:12.734764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.144 [2024-11-18 12:06:12.734813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.144 qpair failed and we were unable to recover it. 00:37:47.144 [2024-11-18 12:06:12.734930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.144 [2024-11-18 12:06:12.734967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.144 qpair failed and we were unable to recover it. 00:37:47.144 [2024-11-18 12:06:12.735078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.144 [2024-11-18 12:06:12.735113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.144 qpair failed and we were unable to recover it. 00:37:47.144 [2024-11-18 12:06:12.735255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.144 [2024-11-18 12:06:12.735290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.144 qpair failed and we were unable to recover it. 
00:37:47.144 [2024-11-18 12:06:12.735412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.144 [2024-11-18 12:06:12.735446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.144 qpair failed and we were unable to recover it. 00:37:47.144 [2024-11-18 12:06:12.735557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.144 [2024-11-18 12:06:12.735592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.144 qpair failed and we were unable to recover it. 00:37:47.144 A controller has encountered a failure and is being reset. 00:37:47.144 [2024-11-18 12:06:12.735732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.144 [2024-11-18 12:06:12.735769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.145 qpair failed and we were unable to recover it. 00:37:47.145 [2024-11-18 12:06:12.735885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.145 [2024-11-18 12:06:12.735919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.145 qpair failed and we were unable to recover it. 00:37:47.145 [2024-11-18 12:06:12.736026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.145 [2024-11-18 12:06:12.736060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.145 qpair failed and we were unable to recover it. 
00:37:47.145 [2024-11-18 12:06:12.736167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.145 [2024-11-18 12:06:12.736201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.145 qpair failed and we were unable to recover it. 00:37:47.145 [2024-11-18 12:06:12.736300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.145 [2024-11-18 12:06:12.736334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.145 qpair failed and we were unable to recover it. 00:37:47.145 [2024-11-18 12:06:12.736434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.145 [2024-11-18 12:06:12.736468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.145 qpair failed and we were unable to recover it. 00:37:47.145 [2024-11-18 12:06:12.736588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.145 [2024-11-18 12:06:12.736622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.145 qpair failed and we were unable to recover it. 00:37:47.145 [2024-11-18 12:06:12.736731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.145 [2024-11-18 12:06:12.736765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.145 qpair failed and we were unable to recover it. 
00:37:47.145 [2024-11-18 12:06:12.736871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.145 [2024-11-18 12:06:12.736904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.145 qpair failed and we were unable to recover it. 00:37:47.145 [2024-11-18 12:06:12.737012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.145 [2024-11-18 12:06:12.737046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.145 qpair failed and we were unable to recover it. 00:37:47.145 [2024-11-18 12:06:12.737161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.145 [2024-11-18 12:06:12.737195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.145 qpair failed and we were unable to recover it. 00:37:47.145 [2024-11-18 12:06:12.737305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.145 [2024-11-18 12:06:12.737342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.145 qpair failed and we were unable to recover it. 00:37:47.145 [2024-11-18 12:06:12.737457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.145 [2024-11-18 12:06:12.737504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.145 qpair failed and we were unable to recover it. 
00:37:47.145 [2024-11-18 12:06:12.737622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.145 [2024-11-18 12:06:12.737667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.145 qpair failed and we were unable to recover it. 00:37:47.145 [2024-11-18 12:06:12.737808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.145 [2024-11-18 12:06:12.737867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.145 qpair failed and we were unable to recover it. 00:37:47.145 [2024-11-18 12:06:12.738007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.145 [2024-11-18 12:06:12.738041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.145 qpair failed and we were unable to recover it. 00:37:47.145 [2024-11-18 12:06:12.738153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.145 [2024-11-18 12:06:12.738187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.145 qpair failed and we were unable to recover it. 00:37:47.145 [2024-11-18 12:06:12.738296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.145 [2024-11-18 12:06:12.738331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.145 qpair failed and we were unable to recover it. 
00:37:47.145 [2024-11-18 12:06:12.738468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.145 [2024-11-18 12:06:12.738516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.145 qpair failed and we were unable to recover it. 00:37:47.145 [2024-11-18 12:06:12.738629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.145 [2024-11-18 12:06:12.738663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.145 qpair failed and we were unable to recover it. 00:37:47.145 [2024-11-18 12:06:12.738806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.145 [2024-11-18 12:06:12.738840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.145 qpair failed and we were unable to recover it. 00:37:47.145 [2024-11-18 12:06:12.738970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.145 [2024-11-18 12:06:12.739004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.145 qpair failed and we were unable to recover it. 00:37:47.145 [2024-11-18 12:06:12.739107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.145 [2024-11-18 12:06:12.739141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.145 qpair failed and we were unable to recover it. 
00:37:47.145 [2024-11-18 12:06:12.739278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.145 [2024-11-18 12:06:12.739319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.145 qpair failed and we were unable to recover it. 00:37:47.145 [2024-11-18 12:06:12.739454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.145 [2024-11-18 12:06:12.739503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.145 qpair failed and we were unable to recover it. 00:37:47.145 [2024-11-18 12:06:12.739619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.145 [2024-11-18 12:06:12.739662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.145 qpair failed and we were unable to recover it. 00:37:47.145 [2024-11-18 12:06:12.739765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.145 [2024-11-18 12:06:12.739801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.145 qpair failed and we were unable to recover it. 00:37:47.145 [2024-11-18 12:06:12.739976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.145 [2024-11-18 12:06:12.740009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.145 qpair failed and we were unable to recover it. 
00:37:47.145 [2024-11-18 12:06:12.740117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.145 [2024-11-18 12:06:12.740150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.145 qpair failed and we were unable to recover it. 00:37:47.145 [2024-11-18 12:06:12.740250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.145 [2024-11-18 12:06:12.740283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.145 qpair failed and we were unable to recover it. 00:37:47.145 [2024-11-18 12:06:12.740386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.145 [2024-11-18 12:06:12.740419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.145 qpair failed and we were unable to recover it. 00:37:47.145 [2024-11-18 12:06:12.740558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.145 [2024-11-18 12:06:12.740607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.145 qpair failed and we were unable to recover it. 00:37:47.145 [2024-11-18 12:06:12.740717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.145 [2024-11-18 12:06:12.740753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.145 qpair failed and we were unable to recover it. 
00:37:47.145 [2024-11-18 12:06:12.740867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.145 [2024-11-18 12:06:12.740901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.146 qpair failed and we were unable to recover it. 00:37:47.146 [2024-11-18 12:06:12.741037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.146 [2024-11-18 12:06:12.741072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.146 qpair failed and we were unable to recover it. 00:37:47.146 [2024-11-18 12:06:12.741202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.146 [2024-11-18 12:06:12.741237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.146 qpair failed and we were unable to recover it. 00:37:47.146 [2024-11-18 12:06:12.741338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.146 [2024-11-18 12:06:12.741372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.146 qpair failed and we were unable to recover it. 00:37:47.146 [2024-11-18 12:06:12.741521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.146 [2024-11-18 12:06:12.741570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.146 qpair failed and we were unable to recover it. 
00:37:47.146 [2024-11-18 12:06:12.741733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.146 [2024-11-18 12:06:12.741791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.146 qpair failed and we were unable to recover it. 00:37:47.146 [2024-11-18 12:06:12.741960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.146 [2024-11-18 12:06:12.741997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.146 qpair failed and we were unable to recover it. 00:37:47.146 [2024-11-18 12:06:12.742111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.146 [2024-11-18 12:06:12.742146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.146 qpair failed and we were unable to recover it. 00:37:47.146 [2024-11-18 12:06:12.742255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.146 [2024-11-18 12:06:12.742290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.146 qpair failed and we were unable to recover it. 00:37:47.146 [2024-11-18 12:06:12.742420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.146 [2024-11-18 12:06:12.742454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.146 qpair failed and we were unable to recover it. 
00:37:47.146 [2024-11-18 12:06:12.742613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.146 [2024-11-18 12:06:12.742662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.146 qpair failed and we were unable to recover it. 00:37:47.146 [2024-11-18 12:06:12.742787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.146 [2024-11-18 12:06:12.742823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.146 qpair failed and we were unable to recover it. 00:37:47.146 [2024-11-18 12:06:12.742927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.146 [2024-11-18 12:06:12.742961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.146 qpair failed and we were unable to recover it. 00:37:47.146 [2024-11-18 12:06:12.743092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.146 [2024-11-18 12:06:12.743126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.146 qpair failed and we were unable to recover it. 00:37:47.146 [2024-11-18 12:06:12.743238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.146 [2024-11-18 12:06:12.743272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.146 qpair failed and we were unable to recover it. 
00:37:47.146 [2024-11-18 12:06:12.743402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.146 [2024-11-18 12:06:12.743436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.146 qpair failed and we were unable to recover it. 00:37:47.146 [2024-11-18 12:06:12.743570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.146 [2024-11-18 12:06:12.743606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.146 qpair failed and we were unable to recover it. 00:37:47.146 [2024-11-18 12:06:12.743707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.146 [2024-11-18 12:06:12.743741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.146 qpair failed and we were unable to recover it. 00:37:47.146 [2024-11-18 12:06:12.743858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.146 [2024-11-18 12:06:12.743892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.146 qpair failed and we were unable to recover it. 00:37:47.146 [2024-11-18 12:06:12.743997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.146 [2024-11-18 12:06:12.744032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.146 qpair failed and we were unable to recover it. 
00:37:47.146 [2024-11-18 12:06:12.744151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.146 [2024-11-18 12:06:12.744191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.146 qpair failed and we were unable to recover it. 00:37:47.146 [2024-11-18 12:06:12.744326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.146 [2024-11-18 12:06:12.744361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.146 qpair failed and we were unable to recover it. 00:37:47.146 [2024-11-18 12:06:12.744513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.146 [2024-11-18 12:06:12.744550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.146 qpair failed and we were unable to recover it. 00:37:47.146 [2024-11-18 12:06:12.744656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.146 [2024-11-18 12:06:12.744691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.146 qpair failed and we were unable to recover it. 00:37:47.146 [2024-11-18 12:06:12.744813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.146 [2024-11-18 12:06:12.744847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.146 qpair failed and we were unable to recover it. 
00:37:47.146 [2024-11-18 12:06:12.744960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.146 [2024-11-18 12:06:12.744994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.146 qpair failed and we were unable to recover it. 00:37:47.146 [2024-11-18 12:06:12.745098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.146 [2024-11-18 12:06:12.745132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.146 qpair failed and we were unable to recover it. 00:37:47.146 [2024-11-18 12:06:12.745231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.146 [2024-11-18 12:06:12.745264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.146 qpair failed and we were unable to recover it. 00:37:47.146 [2024-11-18 12:06:12.745380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.146 [2024-11-18 12:06:12.745428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.146 qpair failed and we were unable to recover it. 00:37:47.146 [2024-11-18 12:06:12.745553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.146 [2024-11-18 12:06:12.745590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.146 qpair failed and we were unable to recover it. 
00:37:47.146 [2024-11-18 12:06:12.745693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.146 [2024-11-18 12:06:12.745728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.146 qpair failed and we were unable to recover it. 00:37:47.146 [2024-11-18 12:06:12.745831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.146 [2024-11-18 12:06:12.745866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.146 qpair failed and we were unable to recover it. 00:37:47.146 [2024-11-18 12:06:12.745969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.146 [2024-11-18 12:06:12.746003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.146 qpair failed and we were unable to recover it. 00:37:47.146 [2024-11-18 12:06:12.746141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.146 [2024-11-18 12:06:12.746175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.146 qpair failed and we were unable to recover it. 00:37:47.146 [2024-11-18 12:06:12.746314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.146 [2024-11-18 12:06:12.746348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.146 qpair failed and we were unable to recover it. 
00:37:47.146 [2024-11-18 12:06:12.746456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.146 [2024-11-18 12:06:12.746499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.146 qpair failed and we were unable to recover it. 00:37:47.146 [2024-11-18 12:06:12.746621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.146 [2024-11-18 12:06:12.746656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.146 qpair failed and we were unable to recover it. 00:37:47.146 [2024-11-18 12:06:12.746816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.146 [2024-11-18 12:06:12.746866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.146 qpair failed and we were unable to recover it. 00:37:47.146 [2024-11-18 12:06:12.746985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.147 [2024-11-18 12:06:12.747022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.147 qpair failed and we were unable to recover it. 00:37:47.147 [2024-11-18 12:06:12.747135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.147 [2024-11-18 12:06:12.747170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.147 qpair failed and we were unable to recover it. 
00:37:47.147 [2024-11-18 12:06:12.747299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.147 [2024-11-18 12:06:12.747334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.147 qpair failed and we were unable to recover it. 00:37:47.147 [2024-11-18 12:06:12.747448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.147 [2024-11-18 12:06:12.747485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.147 qpair failed and we were unable to recover it. 00:37:47.147 [2024-11-18 12:06:12.747648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.147 [2024-11-18 12:06:12.747696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.147 qpair failed and we were unable to recover it. 00:37:47.147 [2024-11-18 12:06:12.747829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.147 [2024-11-18 12:06:12.747864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.147 qpair failed and we were unable to recover it. 00:37:47.147 [2024-11-18 12:06:12.747982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.147 [2024-11-18 12:06:12.748016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.147 qpair failed and we were unable to recover it. 
00:37:47.147 [2024-11-18 12:06:12.748137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.147 [2024-11-18 12:06:12.748171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.147 qpair failed and we were unable to recover it. 00:37:47.147 [2024-11-18 12:06:12.748276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.147 [2024-11-18 12:06:12.748310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.147 qpair failed and we were unable to recover it. 00:37:47.147 [2024-11-18 12:06:12.748447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.147 [2024-11-18 12:06:12.748485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.147 qpair failed and we were unable to recover it. 00:37:47.147 [2024-11-18 12:06:12.748606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.147 [2024-11-18 12:06:12.748640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.147 qpair failed and we were unable to recover it. 00:37:47.147 [2024-11-18 12:06:12.748748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.147 [2024-11-18 12:06:12.748789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.147 qpair failed and we were unable to recover it. 
00:37:47.147 [2024-11-18 12:06:12.748908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.147 [2024-11-18 12:06:12.748944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.147 qpair failed and we were unable to recover it. 00:37:47.147 [2024-11-18 12:06:12.749089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.147 [2024-11-18 12:06:12.749123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.147 qpair failed and we were unable to recover it. 00:37:47.147 [2024-11-18 12:06:12.749232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.147 [2024-11-18 12:06:12.749266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.147 qpair failed and we were unable to recover it. 00:37:47.147 [2024-11-18 12:06:12.749372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.147 [2024-11-18 12:06:12.749408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.147 qpair failed and we were unable to recover it. 00:37:47.147 [2024-11-18 12:06:12.749554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.147 [2024-11-18 12:06:12.749603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.147 qpair failed and we were unable to recover it. 
00:37:47.147 [2024-11-18 12:06:12.749822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.147 [2024-11-18 12:06:12.749888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:47.147 [2024-11-18 12:06:12.749918] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2780 is same with the state(6) to be set 00:37:47.147 [2024-11-18 12:06:12.749961] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2780 (9): Bad file descriptor 00:37:47.147 [2024-11-18 12:06:12.749990] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:37:47.147 [2024-11-18 12:06:12.750017] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:37:47.147 [2024-11-18 12:06:12.750044] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:37:47.147 Unable to reset the controller. 
00:37:47.714 12:06:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:47.714 12:06:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:37:47.714 12:06:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:47.714 12:06:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:47.714 12:06:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:47.714 12:06:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:47.714 12:06:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:47.714 12:06:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:47.714 12:06:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:47.714 Malloc0 00:37:47.714 12:06:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:47.714 12:06:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:37:47.714 12:06:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:47.714 12:06:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:47.714 [2024-11-18 
12:06:13.523524] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:47.714 12:06:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:47.714 12:06:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:47.714 12:06:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:47.714 12:06:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:47.714 12:06:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:47.714 12:06:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:47.714 12:06:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:47.714 12:06:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:47.714 12:06:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:47.714 12:06:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:47.714 12:06:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:47.714 12:06:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:47.714 [2024-11-18 
12:06:13.553633] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:47.714 12:06:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:47.714 12:06:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:47.714 12:06:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:47.714 12:06:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:47.714 12:06:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:47.714 12:06:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3142266 00:37:47.972 Controller properly reset. 00:37:53.237 Initializing NVMe Controllers 00:37:53.237 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:53.237 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:53.237 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:37:53.237 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:37:53.237 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:37:53.237 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:37:53.237 Initialization complete. Launching workers. 
00:37:53.237 Starting thread on core 1 00:37:53.237 Starting thread on core 2 00:37:53.237 Starting thread on core 3 00:37:53.237 Starting thread on core 0 00:37:53.237 12:06:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:37:53.237 00:37:53.237 real 0m11.600s 00:37:53.237 user 0m36.885s 00:37:53.237 sys 0m7.531s 00:37:53.237 12:06:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:53.237 12:06:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:53.237 ************************************ 00:37:53.237 END TEST nvmf_target_disconnect_tc2 00:37:53.237 ************************************ 00:37:53.237 12:06:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:37:53.237 12:06:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:37:53.237 12:06:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:37:53.237 12:06:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:53.237 12:06:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:37:53.237 12:06:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:53.237 12:06:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:37:53.237 12:06:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:53.237 12:06:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:53.237 rmmod nvme_tcp 00:37:53.237 rmmod nvme_fabrics 00:37:53.237 rmmod nvme_keyring 00:37:53.237 12:06:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:37:53.237 12:06:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:37:53.237 12:06:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:37:53.237 12:06:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 3142836 ']' 00:37:53.237 12:06:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 3142836 00:37:53.237 12:06:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 3142836 ']' 00:37:53.237 12:06:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 3142836 00:37:53.237 12:06:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:37:53.237 12:06:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:53.237 12:06:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3142836 00:37:53.237 12:06:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:37:53.237 12:06:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:37:53.237 12:06:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3142836' 00:37:53.237 killing process with pid 3142836 00:37:53.237 12:06:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 3142836 00:37:53.237 12:06:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 3142836 00:37:54.630 12:06:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:54.630 12:06:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:54.630 12:06:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:54.630 12:06:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:37:54.630 12:06:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:37:54.630 12:06:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:54.630 12:06:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:37:54.630 12:06:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:54.630 12:06:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:54.630 12:06:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:54.630 12:06:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:54.630 12:06:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:56.536 12:06:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:56.536 00:37:56.536 real 0m17.745s 00:37:56.536 user 1m5.170s 00:37:56.536 sys 0m10.315s 00:37:56.536 12:06:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:56.536 12:06:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:37:56.536 ************************************ 00:37:56.536 END TEST nvmf_target_disconnect 00:37:56.536 ************************************ 00:37:56.536 12:06:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:37:56.536 00:37:56.536 real 7m40.293s 00:37:56.536 user 19m53.713s 00:37:56.536 sys 1m33.470s 00:37:56.536 12:06:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:56.536 12:06:22 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:37:56.536 ************************************ 00:37:56.536 END TEST nvmf_host 00:37:56.536 ************************************ 00:37:56.536 12:06:22 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:37:56.536 12:06:22 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:37:56.536 12:06:22 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:37:56.536 12:06:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:56.536 12:06:22 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:56.536 12:06:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:56.536 ************************************ 00:37:56.536 START TEST nvmf_target_core_interrupt_mode 00:37:56.536 ************************************ 00:37:56.536 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:37:56.536 * Looking for test storage... 
00:37:56.536 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:37:56.536 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:56.536 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lcov --version 00:37:56.536 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:56.536 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:56.536 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:56.536 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:56.536 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:56.536 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:37:56.536 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:37:56.536 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:37:56.536 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:37:56.536 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:37:56.536 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:37:56.536 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:37:56.536 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:56.536 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:37:56.536 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:37:56.536 12:06:22 
nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:56.536 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:56.536 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:37:56.536 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:37:56.536 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:56.536 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:37:56.536 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:37:56.536 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:37:56.536 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:37:56.536 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:56.536 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:37:56.536 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:37:56.536 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:56.536 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:56.536 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:37:56.536 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:56.536 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:56.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:56.536 --rc 
genhtml_branch_coverage=1 00:37:56.536 --rc genhtml_function_coverage=1 00:37:56.536 --rc genhtml_legend=1 00:37:56.536 --rc geninfo_all_blocks=1 00:37:56.536 --rc geninfo_unexecuted_blocks=1 00:37:56.536 00:37:56.536 ' 00:37:56.536 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:56.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:56.536 --rc genhtml_branch_coverage=1 00:37:56.536 --rc genhtml_function_coverage=1 00:37:56.536 --rc genhtml_legend=1 00:37:56.536 --rc geninfo_all_blocks=1 00:37:56.536 --rc geninfo_unexecuted_blocks=1 00:37:56.536 00:37:56.536 ' 00:37:56.536 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:56.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:56.536 --rc genhtml_branch_coverage=1 00:37:56.536 --rc genhtml_function_coverage=1 00:37:56.536 --rc genhtml_legend=1 00:37:56.536 --rc geninfo_all_blocks=1 00:37:56.536 --rc geninfo_unexecuted_blocks=1 00:37:56.536 00:37:56.536 ' 00:37:56.536 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:56.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:56.536 --rc genhtml_branch_coverage=1 00:37:56.536 --rc genhtml_function_coverage=1 00:37:56.536 --rc genhtml_legend=1 00:37:56.536 --rc geninfo_all_blocks=1 00:37:56.536 --rc geninfo_unexecuted_blocks=1 00:37:56.536 00:37:56.536 ' 00:37:56.536 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:37:56.536 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:37:56.536 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:56.536 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:37:56.536 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:56.536 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:56.536 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:56.536 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:56.536 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:56.536 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:56.536 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:56.536 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:56.536 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:56.536 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:56.536 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:56.536 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:56.537 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:56.537 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:56.537 
12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:56.537 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:56.537 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:56.537 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:37:56.537 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:56.537 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:56.537 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:56.537 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:56.537 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:56.537 12:06:22 
nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:56.537 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:37:56.537 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:56.537 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:37:56.537 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:56.537 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:56.537 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:56.537 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:56.537 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:56.537 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:56.537 
12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:56.537 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:56.537 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:56.537 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:56.537 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:37:56.537 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:37:56.537 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:37:56.537 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:37:56.537 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:56.537 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:56.537 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:56.537 ************************************ 00:37:56.537 START TEST nvmf_abort 00:37:56.537 ************************************ 00:37:56.537 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:37:56.537 * Looking for test storage... 
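The `lt 1.15 2` / `cmp_versions` trace repeated above splits each version string on `.`, `-`, and `:` into arrays and compares them element-wise. A minimal re-implementation of that comparison, for illustration (it assumes numeric components and is not the exact `scripts/common.sh` source):

```shell
#!/usr/bin/env bash
# Sketch of the version comparison traced above (cmp_versions / lt):
# returns 0 (true) when $1 is strictly less than $2. Assumes numeric
# version components; illustrative, not the SPDK scripts/common.sh source.
lt_sketch() {
    local -a ver1 ver2
    local IFS=.-:                       # split on dots, dashes and colons, as in the trace
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v len=${#ver1[@]}
    (( ${#ver2[@]} > len )) && len=${#ver2[@]}
    for (( v = 0; v < len; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing components compare as 0
        (( a > b )) && return 1
        (( a < b )) && return 0
    done
    return 1                            # equal versions: not strictly less-than
}
```

With this, `lt_sketch 1.15 2` succeeds (1 < 2 at the first component), which matches the `return 0` seen in the trace before `lcov_rc_opt` is set.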
00:37:56.537 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:56.537 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:56.537 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:37:56.537 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:56.796 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:56.796 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:56.796 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:56.796 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:56.796 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:37:56.796 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:37:56.796 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:37:56.796 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:37:56.796 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:37:56.796 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:37:56.796 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:37:56.796 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:56.796 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
scripts/common.sh@344 -- # case "$op" in 00:37:56.796 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:37:56.796 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:56.797 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:56.797 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:37:56.797 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:37:56.797 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:56.797 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:37:56.797 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:37:56.797 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:37:56.797 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:37:56.797 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:56.797 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:37:56.797 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:37:56.797 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:56.797 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:56.797 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:37:56.797 12:06:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:56.797 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:56.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:56.797 --rc genhtml_branch_coverage=1 00:37:56.797 --rc genhtml_function_coverage=1 00:37:56.797 --rc genhtml_legend=1 00:37:56.797 --rc geninfo_all_blocks=1 00:37:56.797 --rc geninfo_unexecuted_blocks=1 00:37:56.797 00:37:56.797 ' 00:37:56.797 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:56.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:56.797 --rc genhtml_branch_coverage=1 00:37:56.797 --rc genhtml_function_coverage=1 00:37:56.797 --rc genhtml_legend=1 00:37:56.797 --rc geninfo_all_blocks=1 00:37:56.797 --rc geninfo_unexecuted_blocks=1 00:37:56.797 00:37:56.797 ' 00:37:56.797 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:56.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:56.797 --rc genhtml_branch_coverage=1 00:37:56.797 --rc genhtml_function_coverage=1 00:37:56.797 --rc genhtml_legend=1 00:37:56.797 --rc geninfo_all_blocks=1 00:37:56.797 --rc geninfo_unexecuted_blocks=1 00:37:56.797 00:37:56.797 ' 00:37:56.797 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:56.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:56.797 --rc genhtml_branch_coverage=1 00:37:56.797 --rc genhtml_function_coverage=1 00:37:56.797 --rc genhtml_legend=1 00:37:56.797 --rc geninfo_all_blocks=1 00:37:56.797 --rc geninfo_unexecuted_blocks=1 00:37:56.797 00:37:56.797 ' 00:37:56.797 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:56.797 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:37:56.797 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:56.797 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:56.797 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:56.797 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:56.797 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:56.797 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:56.797 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:56.797 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:56.797 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:56.797 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:56.797 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:56.797 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:56.797 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:56.797 12:06:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:56.797 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:56.797 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:56.797 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:56.797 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:37:56.797 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:56.797 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:56.797 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:56.797 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:56.797 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:56.797 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:56.797 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:37:56.797 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:56.797 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:37:56.797 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:56.797 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:56.797 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:56.797 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:56.797 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:56.797 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:56.797 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:56.797 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:56.797 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:56.797 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:56.797 12:06:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:56.797 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:37:56.797 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:37:56.797 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:56.797 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:56.797 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:56.797 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:56.797 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:56.797 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:56.797 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:56.797 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:56.797 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:56.797 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:56.797 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:37:56.797 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:58.701 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:37:58.701 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:37:58.701 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:58.701 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:58.701 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:58.701 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:58.701 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:58.701 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:37:58.701 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:58.701 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:37:58.701 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:37:58.701 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:37:58.701 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:37:58.701 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:37:58.701 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:37:58.701 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:58.701 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:58.701 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:58.701 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:58.701 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:58.701 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:58.701 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:58.701 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:58.701 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:58.701 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:58.701 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:58.701 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:58.701 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:58.701 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:58.701 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:58.701 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:58.701 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:58.701 12:06:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:58.701 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:58.701 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:37:58.701 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:58.701 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:58.701 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:58.701 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:58.701 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:58.701 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:58.701 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:58.701 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:37:58.701 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:37:58.701 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:58.702 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:58.702 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:58.702 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:58.702 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:58.702 
12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:58.702 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:58.702 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:58.702 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:58.702 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:58.702 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:58.702 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:58.702 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:58.702 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:58.702 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:58.702 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:37:58.702 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:58.702 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:58.702 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:58.702 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:58.702 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:37:58.702 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:58.702 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:58.702 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:58.702 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:58.702 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:58.702 Found net devices under 0000:0a:00.1: cvl_0_1 00:37:58.702 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:58.702 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:58.702 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:37:58.702 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:58.702 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:58.702 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:58.702 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:58.702 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:58.702 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:58.702 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:58.702 12:06:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:58.702 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:58.702 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:58.702 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:58.702 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:58.702 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:58.702 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:58.702 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:58.702 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:58.702 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:58.702 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:58.702 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:58.702 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:58.702 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:58.702 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:37:58.702 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:58.702 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:58.702 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:58.702 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:58.702 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:58.702 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.127 ms 00:37:58.702 00:37:58.702 --- 10.0.0.2 ping statistics --- 00:37:58.702 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:58.702 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:37:58.702 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:58.702 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:58.702 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.081 ms 00:37:58.702 00:37:58.702 --- 10.0.0.1 ping statistics --- 00:37:58.702 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:58.702 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:37:58.702 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:58.702 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:37:58.702 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:58.702 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:58.702 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:58.702 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:58.702 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:58.702 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:58.702 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:58.961 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:37:58.961 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:58.961 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:58.961 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:58.961 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=3145774 00:37:58.961 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:37:58.961 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 3145774 00:37:58.961 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 3145774 ']' 00:37:58.961 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:58.961 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:58.961 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:58.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:58.961 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:58.961 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:58.961 [2024-11-18 12:06:24.692619] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:58.961 [2024-11-18 12:06:24.695091] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:37:58.961 [2024-11-18 12:06:24.695195] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:58.961 [2024-11-18 12:06:24.843496] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:59.219 [2024-11-18 12:06:24.983990] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:59.219 [2024-11-18 12:06:24.984081] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:59.219 [2024-11-18 12:06:24.984110] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:59.219 [2024-11-18 12:06:24.984134] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:59.219 [2024-11-18 12:06:24.984157] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:59.219 [2024-11-18 12:06:24.986889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:59.219 [2024-11-18 12:06:24.986974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:59.219 [2024-11-18 12:06:24.986997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:59.477 [2024-11-18 12:06:25.361150] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:59.477 [2024-11-18 12:06:25.362332] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:59.477 [2024-11-18 12:06:25.363168] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:37:59.477 [2024-11-18 12:06:25.363541] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:38:00.045 12:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:00.045 12:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:38:00.045 12:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:00.045 12:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:00.045 12:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:00.045 12:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:00.045 12:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:38:00.045 12:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:00.045 12:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:00.045 [2024-11-18 12:06:25.696071] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:00.045 12:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:00.045 12:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:38:00.045 12:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:00.045 12:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
00:38:00.045 Malloc0 00:38:00.045 12:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:00.045 12:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:38:00.045 12:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:00.045 12:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:00.045 Delay0 00:38:00.045 12:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:00.045 12:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:38:00.045 12:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:00.045 12:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:00.045 12:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:00.045 12:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:38:00.045 12:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:00.045 12:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:00.045 12:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:00.045 12:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:38:00.045 12:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:00.045 12:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:00.045 [2024-11-18 12:06:25.832279] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:00.045 12:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:00.045 12:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:00.045 12:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:00.045 12:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:00.045 12:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:00.045 12:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:38:00.303 [2024-11-18 12:06:26.002785] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:38:02.830 Initializing NVMe Controllers 00:38:02.830 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:38:02.830 controller IO queue size 128 less than required 00:38:02.830 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:38:02.830 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:38:02.830 Initialization complete. Launching workers. 
00:38:02.830 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 21254 00:38:02.830 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 21311, failed to submit 66 00:38:02.830 success 21254, unsuccessful 57, failed 0 00:38:02.830 12:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:02.830 12:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:02.830 12:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:02.830 12:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:02.830 12:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:38:02.830 12:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:38:02.830 12:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:02.830 12:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:38:02.830 12:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:02.830 12:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:38:02.830 12:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:02.830 12:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:02.830 rmmod nvme_tcp 00:38:02.830 rmmod nvme_fabrics 00:38:02.830 rmmod nvme_keyring 00:38:02.830 12:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:02.830 12:06:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:38:02.830 12:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:38:02.830 12:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 3145774 ']' 00:38:02.830 12:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 3145774 00:38:02.830 12:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 3145774 ']' 00:38:02.830 12:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 3145774 00:38:02.830 12:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:38:02.830 12:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:02.830 12:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3145774 00:38:02.830 12:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:02.830 12:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:02.830 12:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3145774' 00:38:02.830 killing process with pid 3145774 00:38:02.830 12:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 3145774 00:38:02.830 12:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 3145774 00:38:03.764 12:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:03.764 12:06:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:03.764 12:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:03.765 12:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:38:03.765 12:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:38:03.765 12:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:03.765 12:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:38:03.765 12:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:03.765 12:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:03.765 12:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:03.765 12:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:03.765 12:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:05.671 12:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:05.671 00:38:05.671 real 0m9.140s 00:38:05.671 user 0m11.521s 00:38:05.671 sys 0m3.167s 00:38:05.671 12:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:05.671 12:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:05.671 ************************************ 00:38:05.671 END TEST nvmf_abort 00:38:05.671 ************************************ 00:38:05.671 12:06:31 
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:38:05.671 12:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:05.671 12:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:05.671 12:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:05.671 ************************************ 00:38:05.671 START TEST nvmf_ns_hotplug_stress 00:38:05.671 ************************************ 00:38:05.671 12:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:38:05.930 * Looking for test storage... 
00:38:05.930 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:05.930 12:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:05.930 12:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:38:05.930 12:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:05.930 12:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:05.930 12:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:05.930 12:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:05.930 12:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:05.930 12:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:38:05.930 12:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:38:05.930 12:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:38:05.930 12:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:38:05.930 12:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:38:05.930 12:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:38:05.930 12:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:38:05.930 12:06:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:05.930 12:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:38:05.930 12:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:38:05.930 12:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:05.930 12:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:05.930 12:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:38:05.930 12:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:38:05.930 12:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:05.930 12:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:38:05.930 12:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:38:05.931 12:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:38:05.931 12:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:38:05.931 12:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:05.931 12:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:38:05.931 12:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:38:05.931 12:06:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:05.931 12:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:05.931 12:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:38:05.931 12:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:05.931 12:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:05.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:05.931 --rc genhtml_branch_coverage=1 00:38:05.931 --rc genhtml_function_coverage=1 00:38:05.931 --rc genhtml_legend=1 00:38:05.931 --rc geninfo_all_blocks=1 00:38:05.931 --rc geninfo_unexecuted_blocks=1 00:38:05.931 00:38:05.931 ' 00:38:05.931 12:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:05.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:05.931 --rc genhtml_branch_coverage=1 00:38:05.931 --rc genhtml_function_coverage=1 00:38:05.931 --rc genhtml_legend=1 00:38:05.931 --rc geninfo_all_blocks=1 00:38:05.931 --rc geninfo_unexecuted_blocks=1 00:38:05.931 00:38:05.931 ' 00:38:05.931 12:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:05.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:05.931 --rc genhtml_branch_coverage=1 00:38:05.931 --rc genhtml_function_coverage=1 00:38:05.931 --rc genhtml_legend=1 00:38:05.931 --rc geninfo_all_blocks=1 00:38:05.931 --rc geninfo_unexecuted_blocks=1 00:38:05.931 00:38:05.931 ' 00:38:05.931 12:06:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:05.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:05.931 --rc genhtml_branch_coverage=1 00:38:05.931 --rc genhtml_function_coverage=1 00:38:05.931 --rc genhtml_legend=1 00:38:05.931 --rc geninfo_all_blocks=1 00:38:05.931 --rc geninfo_unexecuted_blocks=1 00:38:05.931 00:38:05.931 ' 00:38:05.931 12:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:05.931 12:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:38:05.931 12:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:05.931 12:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:05.931 12:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:05.931 12:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:05.931 12:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:05.931 12:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:05.931 12:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:05.931 12:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:05.931 12:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:05.931 12:06:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:05.931 12:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:38:05.931 12:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:38:05.931 12:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:05.931 12:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:05.931 12:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:05.931 12:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:05.931 12:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:05.931 12:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:38:05.931 12:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:05.931 12:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:05.931 12:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:05.931 12:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:05.931 12:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:05.931 12:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:05.931 
12:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:38:05.931 12:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:05.931 12:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:38:05.931 12:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:05.931 12:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:05.931 12:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:05.931 12:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:05.931 12:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:05.931 12:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:05.931 12:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:05.931 12:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
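[Annotation] The exported `PATH` above contains the same `/opt/go`, `/opt/protoc`, and `/opt/golangci` directories many times over, because `paths/export.sh` prepends them on every sourcing. This is harmless (lookup stops at the first match) but noisy; a hedged sketch of how such a PATH could be deduplicated while keeping first-seen order (`dedupe_path` is my own helper, not part of `paths/export.sh`):

```shell
#!/usr/bin/env bash
# Collapse duplicate PATH entries, preserving the order of first occurrence.
# dedupe_path is a hypothetical helper, not part of the SPDK scripts.
dedupe_path() {
    local entry out=''
    local -A seen=()
    local IFS=:
    for entry in $1; do
        [[ -n ${seen[$entry]} ]] && continue
        seen[$entry]=1
        out+=${out:+:}$entry
    done
    printf '%s\n' "$out"
}

dedupe_path "/opt/go/1.21.1/bin:/usr/bin:/opt/go/1.21.1/bin:/usr/bin:/sbin"
# -> /opt/go/1.21.1/bin:/usr/bin:/sbin
```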
nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:05.931 12:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:05.931 12:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:05.931 12:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:05.931 12:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:38:05.931 12:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:05.931 12:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:05.931 12:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:05.931 12:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:05.931 12:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:05.931 12:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:05.931 12:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:05.931 12:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:05.931 12:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:05.931 12:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:38:05.931 12:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:38:05.931 12:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:38:07.832 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:07.832 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:38:07.832 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:07.832 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:07.832 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:07.832 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:07.832 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:07.832 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:38:07.832 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:07.832 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:38:07.832 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:38:07.832 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:38:07.832 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:38:07.832 
12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:38:07.832 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:38:07.832 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:07.832 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:07.832 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:07.832 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:07.833 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:07.833 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:07.833 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:07.833 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:07.833 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:07.833 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:07.833 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:07.833 12:06:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:07.833 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:07.833 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:07.833 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:07.833 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:07.833 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:07.833 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:07.833 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:07.833 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:38:07.833 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:38:07.833 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:07.833 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:07.833 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:07.833 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:07.833 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:07.833 12:06:33 
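[Annotation] The `nvmf/common.sh@320-344` entries above bucket NICs by PCI vendor:device ID before the `Found 0000:0a:00.0 (0x8086 - 0x159b)` matches: Intel (0x8086) 0x1592/0x159b go into the `e810` array, Intel 0x37d2 into `x722`, and a list of Mellanox (0x15b3) IDs into `mlx`. A standalone sketch of that classification using the IDs visible in the trace (`nic_family` is my own name):

```shell
#!/usr/bin/env bash
# Map a vendor:device PCI ID pair to the NIC family the trace sorts it into.
# nic_family is a hypothetical helper; the IDs are taken from the
# nvmf/common.sh trace above.
nic_family() {
    case $1 in
        0x8086:0x1592|0x8086:0x159b) echo e810 ;;
        0x8086:0x37d2)               echo x722 ;;
        0x15b3:*)                    echo mlx ;;
        *)                           echo unknown ;;
    esac
}

nic_family 0x8086:0x159b   # -> e810, matching "Found 0000:0a:00.0 (0x8086 - 0x159b)"
```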
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:07.833 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:38:07.833 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:38:07.833 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:07.833 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:07.833 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:07.833 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:07.833 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:07.833 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:07.833 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:07.833 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:07.833 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:07.833 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:07.833 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:07.833 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:07.833 
12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:07.833 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:07.833 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:07.833 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:38:07.833 Found net devices under 0000:0a:00.0: cvl_0_0 00:38:07.833 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:07.833 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:07.833 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:07.833 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:07.833 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:07.833 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:07.833 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:07.833 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:07.833 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:38:07.833 Found net devices under 0000:0a:00.1: cvl_0_1 00:38:07.833 
12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:07.833 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:07.833 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:38:07.833 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:07.833 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:07.833 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:07.833 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:07.833 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:07.833 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:07.833 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:07.833 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:07.833 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:07.833 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:07.833 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:07.833 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:07.833 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:07.833 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:07.833 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:07.833 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:07.833 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:07.833 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:07.833 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:07.833 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:07.833 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:07.833 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:08.092 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:08.092 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:08.092 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:08.092 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:08.092 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:08.092 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.226 ms 00:38:08.092 00:38:08.092 --- 10.0.0.2 ping statistics --- 00:38:08.092 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:08.092 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:38:08.092 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:08.092 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:08.092 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:38:08.092 00:38:08.092 --- 10.0.0.1 ping statistics --- 00:38:08.092 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:08.092 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:38:08.092 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:08.092 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:38:08.092 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:08.092 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:08.092 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:08.092 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:08.092 12:06:33 
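[Annotation] The `nvmf/common.sh@287/@790` pair above shows the `ipts` wrapper in action: every iptables rule the test adds is re-issued with an `SPDK_NVMF:` comment containing the original arguments, so teardown can later find and delete exactly those rules. A sketch of that wrapper pattern, with `iptables` stubbed out by `echo` so it runs unprivileged (the real helper invokes iptables itself):

```shell
#!/usr/bin/env bash
# Sketch of the ipts wrapper pattern from the trace: each rule is tagged
# with an SPDK_NVMF comment built from its own arguments, so cleanup can
# match the rule later. iptables is stubbed with echo here to avoid
# needing root; the real nvmf/common.sh helper calls iptables directly.
iptables() { echo "iptables $*"; }

ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }

ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
```

The comment string in the trace (`SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT`) is exactly `"$*"` of the wrapped call, which is what makes the cleanup match unambiguous.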
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:08.092 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:08.092 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:08.092 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:38:08.092 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:08.092 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:08.092 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:38:08.092 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=3148254 00:38:08.092 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:38:08.092 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 3148254 00:38:08.092 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 3148254 ']' 00:38:08.092 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:08.092 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:08.092 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:08.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:08.092 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:08.092 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:38:08.092 [2024-11-18 12:06:33.879271] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:08.092 [2024-11-18 12:06:33.881909] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:38:08.092 [2024-11-18 12:06:33.882010] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:08.350 [2024-11-18 12:06:34.024314] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:38:08.350 [2024-11-18 12:06:34.145383] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:08.350 [2024-11-18 12:06:34.145447] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:08.350 [2024-11-18 12:06:34.145486] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:08.350 [2024-11-18 12:06:34.145513] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:08.351 [2024-11-18 12:06:34.145549] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:38:08.351 [2024-11-18 12:06:34.147942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:38:08.351 [2024-11-18 12:06:34.147980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:08.351 [2024-11-18 12:06:34.147990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:38:08.609 [2024-11-18 12:06:34.493982] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:08.867 [2024-11-18 12:06:34.495087] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:38:08.867 [2024-11-18 12:06:34.495922] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:08.867 [2024-11-18 12:06:34.496269] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:38:09.126 12:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:09.126 12:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:38:09.126 12:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:09.126 12:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:09.126 12:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:38:09.126 12:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:09.126 12:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 
00:38:09.126 12:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:38:09.385 [2024-11-18 12:06:35.217066] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:09.385 12:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:38:09.981 12:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:09.981 [2024-11-18 12:06:35.825587] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:09.981 12:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:10.548 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:38:10.806 Malloc0 00:38:10.806 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:38:11.064 Delay0 00:38:11.064 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:11.322 12:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:38:11.579 NULL1 00:38:11.579 12:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:38:11.837 12:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3148684 00:38:11.837 12:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3148684 00:38:11.837 12:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:38:11.837 12:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:13.210 Read completed with error (sct=0, sc=11) 00:38:13.210 12:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:13.210 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:13.210 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:13.467 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:38:13.467 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:13.467 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:13.467 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:13.467 12:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:38:13.467 12:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:38:13.724 true 00:38:13.724 12:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3148684 00:38:13.724 12:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:14.655 12:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:14.913 12:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:38:14.913 12:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:38:15.170 true 00:38:15.170 12:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3148684 00:38:15.170 12:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:38:15.428 12:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:15.685 12:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:38:15.685 12:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:38:15.943 true 00:38:15.943 12:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3148684 00:38:15.943 12:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:16.201 12:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:16.459 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:38:16.459 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:38:16.717 true 00:38:16.717 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3148684 00:38:16.717 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:17.649 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:17.906 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:38:17.906 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:38:18.164 true 00:38:18.164 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3148684 00:38:18.164 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:18.730 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:18.730 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:38:18.730 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:38:18.987 true 00:38:19.245 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3148684 00:38:19.245 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:19.503 12:06:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:19.760 12:06:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:38:19.760 12:06:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:38:20.018 true 00:38:20.018 12:06:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3148684 00:38:20.018 12:06:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:20.950 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:21.208 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:38:21.208 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:38:21.466 true 00:38:21.466 12:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3148684 00:38:21.466 12:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:21.724 12:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:21.982 12:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:38:21.982 12:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:38:22.240 true 00:38:22.240 12:06:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3148684 00:38:22.240 12:06:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:22.497 12:06:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:22.755 12:06:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:38:22.755 12:06:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:38:23.013 true 00:38:23.013 12:06:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3148684 00:38:23.013 12:06:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:23.945 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:23.945 12:06:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:23.945 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:23.945 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:24.203 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:38:24.203 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:38:24.461 true 00:38:24.461 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3148684 00:38:24.461 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:24.719 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:24.977 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:38:24.977 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:38:25.236 true 00:38:25.236 12:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3148684 00:38:25.236 12:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:26.170 12:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:26.170 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:26.427 12:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:38:26.427 12:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:38:26.685 true 00:38:26.685 12:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3148684 00:38:26.685 12:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:26.943 12:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:27.201 12:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:38:27.201 12:06:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:38:27.459 true 00:38:27.459 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3148684 00:38:27.459 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:28.392 12:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:28.392 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:28.650 12:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:38:28.650 12:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:38:28.908 true 00:38:28.908 12:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3148684 00:38:28.908 12:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:29.166 12:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:29.423 12:06:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:38:29.424 12:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:38:29.681 true 00:38:29.681 12:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3148684 00:38:29.681 12:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:29.939 12:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:30.197 12:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:38:30.197 12:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:38:30.455 true 00:38:30.455 12:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3148684 00:38:30.455 12:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:31.393 12:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
00:38:31.651 12:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:38:31.651 12:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:38:31.909 true 00:38:31.909 12:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3148684 00:38:31.909 12:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:32.167 12:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:32.426 12:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:38:32.426 12:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:38:32.992 true 00:38:32.992 12:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3148684 00:38:32.992 12:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:32.992 12:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:38:33.249 12:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:38:33.249 12:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:38:33.506 true 00:38:33.506 12:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3148684 00:38:33.506 12:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:34.880 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:34.880 12:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:34.880 12:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:38:34.880 12:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:38:35.138 true 00:38:35.138 12:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3148684 00:38:35.138 12:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:35.396 12:07:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:35.654 12:07:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:38:35.654 12:07:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:38:35.912 true 00:38:35.912 12:07:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3148684 00:38:35.912 12:07:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:36.170 12:07:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:36.428 12:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:38:36.428 12:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:38:36.686 true 00:38:36.686 12:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3148684 00:38:36.686 12:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:38.060 12:07:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:38.060 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:38.060 12:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:38:38.060 12:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:38:38.318 true 00:38:38.318 12:07:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3148684 00:38:38.318 12:07:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:38.576 12:07:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:38.834 12:07:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:38:38.834 12:07:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:38:39.092 true 00:38:39.092 12:07:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3148684 00:38:39.092 12:07:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:39.659 12:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:39.659 12:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:38:39.659 12:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:38:39.917 true 00:38:39.917 12:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3148684 00:38:39.917 12:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:40.853 12:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:41.111 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:41.112 12:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:38:41.370 12:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:38:41.370 true 00:38:41.628 12:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3148684 00:38:41.628 12:07:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:41.886 12:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:42.143 12:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:38:42.143 12:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:38:42.401 Initializing NVMe Controllers 00:38:42.401 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:38:42.401 Controller IO queue size 128, less than required. 00:38:42.401 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:38:42.401 Controller IO queue size 128, less than required. 00:38:42.401 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:38:42.401 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:38:42.401 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:38:42.401 Initialization complete. Launching workers. 
00:38:42.401 ======================================================== 00:38:42.401 Latency(us) 00:38:42.401 Device Information : IOPS MiB/s Average min max 00:38:42.401 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 534.85 0.26 99475.45 4192.26 1019536.87 00:38:42.401 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 6551.26 3.20 19540.15 3003.76 479401.85 00:38:42.401 ======================================================== 00:38:42.401 Total : 7086.11 3.46 25573.54 3003.76 1019536.87 00:38:42.401 00:38:42.401 true 00:38:42.401 12:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3148684 00:38:42.401 12:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:42.659 12:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:42.918 12:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:38:42.918 12:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:38:43.176 true 00:38:43.176 12:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3148684 00:38:43.176 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3148684) - No such process 00:38:43.176 12:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 
3148684 00:38:43.177 12:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:43.435 12:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:43.692 12:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:38:43.692 12:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:38:43.692 12:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:38:43.692 12:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:43.692 12:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:38:43.972 null0 00:38:43.972 12:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:43.972 12:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:43.972 12:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:38:44.241 null1 00:38:44.241 12:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:44.241 12:07:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:44.241 12:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:38:44.509 null2 00:38:44.509 12:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:44.509 12:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:44.509 12:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:38:44.768 null3 00:38:44.768 12:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:44.768 12:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:44.768 12:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:38:45.026 null4 00:38:45.027 12:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:45.027 12:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:45.027 12:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:38:45.285 null5 00:38:45.285 12:07:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:45.285 12:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:45.285 12:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:38:45.543 null6 00:38:45.543 12:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:45.543 12:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:45.543 12:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:38:45.802 null7 00:38:45.802 12:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:45.802 12:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:45.802 12:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:38:45.802 12:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:45.802 12:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
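The eight `bdev_null_create null0..null7 100 4096` calls above create one null bdev per worker thread (name, size in MiB, block size in bytes). A self-contained sketch of that step, with `rpc.py` stubbed by a hypothetical `rpc` function so the loop itself runs standalone (the real script invokes `spdk/scripts/rpc.py` directly):

```shell
#!/usr/bin/env bash
# Stub standing in for spdk/scripts/rpc.py; only echoes the call.
rpc() { echo "rpc.py $*"; }

nthreads=8
# One 100 MiB null bdev with 4096-byte blocks per worker, as in the trace.
for ((i = 0; i < nthreads; i++)); do
    rpc bdev_null_create "null$i" 100 4096
done
```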
00:38:45.802 12:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:38:45.802 12:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:45.802 12:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:38:45.802 12:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:45.802 12:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:45.802 12:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:45.802 12:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:45.802 12:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:38:45.802 12:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:38:45.802 12:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:45.802 12:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:38:45.802 12:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:45.802 12:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:45.802 12:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:45.802 12:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:45.802 12:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:38:45.802 12:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:38:45.802 12:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:45.802 12:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:38:45.802 12:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:45.802 12:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:45.802 12:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:45.802 12:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:45.802 12:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:38:45.802 12:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:38:45.802 12:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:45.802 12:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:38:45.802 12:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:45.802 12:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:45.802 12:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:45.802 12:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:45.802 12:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:38:45.802 12:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:38:45.802 12:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:45.802 12:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:38:45.802 12:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:45.802 12:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:45.802 12:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:45.802 12:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:45.802 12:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:38:45.802 12:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:38:45.802 12:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:45.802 12:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:38:45.802 12:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:45.802 12:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:45.802 12:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:45.802 12:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:45.802 12:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:38:45.802 12:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:38:45.803 12:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:45.803 12:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:38:45.803 12:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:45.803 12:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:45.803 12:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:45.803 12:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:45.803 12:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:38:45.803 12:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:38:45.803 12:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:45.803 12:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:38:45.803 12:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:45.803 12:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:45.803 12:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3152706 3152707 3152709 3152711 3152713 3152715 3152717 3152719 00:38:45.803 12:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:45.803 12:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:46.370 12:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:46.370 12:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:46.370 12:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 5 00:38:46.370 12:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:46.370 12:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:46.370 12:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:46.370 12:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:46.370 12:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:46.370 12:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:46.370 12:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:46.370 12:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:46.629 12:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:46.629 12:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:46.629 12:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:46.629 12:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:46.629 12:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:46.629 12:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:46.629 12:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:46.629 12:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:46.629 12:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:46.629 12:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:46.629 12:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:46.629 12:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:46.629 12:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i 
)) 00:38:46.629 12:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:46.629 12:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:46.629 12:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:46.629 12:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:46.629 12:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:46.629 12:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:46.629 12:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:46.629 12:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:46.887 12:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:46.887 12:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:46.887 12:07:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:46.887 12:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:46.887 12:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:46.887 12:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:46.887 12:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:46.887 12:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:47.145 12:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:47.145 12:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:47.145 12:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:47.145 12:07:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:47.145 12:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:47.145 12:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:47.145 12:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:47.145 12:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:47.145 12:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:47.145 12:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:47.145 12:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:47.145 12:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:47.145 12:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:47.145 12:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:47.145 12:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:47.145 12:07:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:47.145 12:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:47.145 12:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:47.145 12:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:47.145 12:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:47.145 12:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:47.145 12:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:47.145 12:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:47.145 12:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:47.404 12:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:47.404 12:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:47.404 12:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:47.404 12:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:47.404 12:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:47.404 12:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:47.404 12:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:47.404 12:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:47.662 12:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:47.662 12:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:47.662 12:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:38:47.662 12:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:47.662 12:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:47.662 12:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:38:47.662 12:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:47.662 12:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:47.662 12:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:38:47.662 12:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:47.662 12:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:47.662 12:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:38:47.662 12:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:47.662 12:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:47.662 12:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:38:47.662 12:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:47.662 12:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:47.662 12:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:47.662 12:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:38:47.662 12:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:47.662 12:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:38:47.662 12:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:47.662 12:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:47.662 12:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:38:47.919 12:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:38:47.919 12:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:38:47.919 12:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:38:47.919 12:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:38:47.919 12:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:38:47.919 12:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:38:47.919 12:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:38:47.919 12:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:38:48.176 12:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:48.177 12:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:48.177 12:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:38:48.177 12:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:48.177 12:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:48.177 12:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:38:48.177 12:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:48.177 12:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:48.177 12:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:48.177 12:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:38:48.177 12:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:48.177 12:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:38:48.435 12:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:48.435 12:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:48.435 12:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:38:48.435 12:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:48.435 12:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:48.435 12:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:38:48.435 12:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:48.435 12:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:48.435 12:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:38:48.435 12:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:48.435 12:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:48.435 12:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:38:48.693 12:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:38:48.693 12:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:38:48.693 12:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:38:48.693 12:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:38:48.693 12:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:38:48.693 12:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:38:48.693 12:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:38:48.693 12:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:38:48.951 12:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:48.951 12:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:48.951 12:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:38:48.951 12:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:48.951 12:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:48.951 12:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:38:48.951 12:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:48.951 12:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:48.951 12:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:38:48.951 12:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:48.951 12:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:48.951 12:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:38:48.951 12:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:48.951 12:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:48.951 12:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:38:48.951 12:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:48.951 12:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:48.951 12:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:38:48.951 12:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:48.952 12:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:48.952 12:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:38:48.952 12:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:48.952 12:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:48.952 12:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:38:49.210 12:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:38:49.210 12:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:38:49.210 12:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:38:49.210 12:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:38:49.210 12:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:38:49.210 12:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:38:49.210 12:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:38:49.210 12:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:38:49.469 12:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:49.469 12:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:49.469 12:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:38:49.469 12:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:49.469 12:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:49.469 12:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:38:49.469 12:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:49.469 12:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:49.469 12:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:38:49.469 12:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:49.469 12:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:49.469 12:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:38:49.469 12:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:49.469 12:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:49.469 12:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:38:49.469 12:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:49.469 12:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:49.469 12:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:38:49.469 12:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:49.469 12:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:49.469 12:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:38:49.469 12:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:49.469 12:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:49.469 12:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:38:49.727 12:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:38:49.728 12:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:38:49.728 12:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:38:49.728 12:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:38:49.728 12:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:38:49.728 12:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:38:49.728 12:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:38:49.728 12:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:38:49.986 12:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:49.986 12:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:49.986 12:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:38:49.986 12:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:49.986 12:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:49.986 12:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:38:49.986 12:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:49.986 12:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:49.986 12:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:38:49.986 12:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:49.986 12:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:49.986 12:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:38:49.986 12:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:49.986 12:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:49.986 12:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:38:49.986 12:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:49.986 12:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:49.986 12:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:38:49.986 12:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:49.986 12:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:49.986 12:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:38:50.246 12:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:50.246 12:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:50.246 12:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:38:50.503 12:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:38:50.503 12:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:38:50.503 12:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:38:50.503 12:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:38:50.503 12:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:38:50.503 12:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:38:50.503 12:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:38:50.503 12:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:38:50.762 12:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:50.762 12:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:50.762 12:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:38:50.762 12:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:50.762 12:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:50.762 12:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:38:50.762 12:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:50.762 12:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:50.762 12:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:38:50.762 12:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:50.762 12:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:50.762 12:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:50.762 12:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:38:50.762 12:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:50.762 12:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:38:50.762 12:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:50.762 12:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:50.762 12:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:50.762 12:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:38:50.762 12:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:50.762 12:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:38:50.762 12:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:50.762 12:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:50.762 12:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:38:51.020 12:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:38:51.020 12:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:38:51.020 12:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:38:51.020 12:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:38:51.020 12:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:38:51.020 12:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:38:51.020 12:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:38:51.020 12:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:38:51.278 12:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:51.278 12:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:51.278 12:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:38:51.278 12:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:51.278 12:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:51.278 12:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:38:51.278 12:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:51.279 12:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:51.279 12:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:38:51.279 12:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:51.279 12:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:51.279 12:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:38:51.279 12:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:51.279 12:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:51.279 12:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:38:51.279 12:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:51.279 12:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:51.279 12:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:51.279 12:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:51.279 12:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:38:51.279 12:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:38:51.279 12:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:51.279 12:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:51.279 12:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:38:51.537 12:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:38:51.537 12:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:38:51.537 12:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns
nqn.2016-06.io.spdk:cnode1 8 00:38:51.537 12:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:51.537 12:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:51.537 12:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:51.537 12:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:51.537 12:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:51.795 12:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:51.795 12:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:51.795 12:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:51.795 12:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:51.795 12:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:51.795 12:07:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:51.795 12:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:51.795 12:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:51.795 12:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:51.795 12:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:51.795 12:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:51.795 12:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:51.795 12:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:51.795 12:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:51.795 12:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:51.795 12:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:51.795 12:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:38:51.795 12:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:38:51.795 12:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:51.795 12:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@121 -- # sync 00:38:51.795 12:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:51.795 12:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:38:51.795 12:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:51.795 12:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:51.795 rmmod nvme_tcp 00:38:52.053 rmmod nvme_fabrics 00:38:52.053 rmmod nvme_keyring 00:38:52.053 12:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:52.053 12:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:38:52.053 12:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:38:52.053 12:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 3148254 ']' 00:38:52.053 12:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 3148254 00:38:52.053 12:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 3148254 ']' 00:38:52.053 12:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 3148254 00:38:52.053 12:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:38:52.053 12:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:52.053 12:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
3148254 00:38:52.053 12:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:52.053 12:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:52.053 12:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3148254' 00:38:52.053 killing process with pid 3148254 00:38:52.053 12:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 3148254 00:38:52.053 12:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 3148254 00:38:53.428 12:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:53.428 12:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:53.428 12:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:53.428 12:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:38:53.428 12:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:38:53.428 12:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:53.428 12:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:38:53.428 12:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:53.428 12:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:53.428 
12:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:53.428 12:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:53.428 12:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:55.331 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:55.331 00:38:55.331 real 0m49.402s 00:38:55.331 user 3m20.792s 00:38:55.331 sys 0m21.948s 00:38:55.331 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:55.331 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:38:55.331 ************************************ 00:38:55.331 END TEST nvmf_ns_hotplug_stress 00:38:55.331 ************************************ 00:38:55.331 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:38:55.331 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:55.331 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:55.331 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:55.331 ************************************ 00:38:55.331 START TEST nvmf_delete_subsystem 00:38:55.331 ************************************ 00:38:55.331 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:38:55.331 * Looking for test storage... 00:38:55.331 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:55.331 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:55.331 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:38:55.331 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:55.331 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:55.331 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:55.331 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:55.331 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:55.331 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:38:55.331 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:38:55.331 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:38:55.332 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:38:55.332 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:38:55.332 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:38:55.332 
12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:38:55.332 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:55.332 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:38:55.332 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:38:55.332 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:55.332 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:55.332 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:38:55.332 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:38:55.332 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:55.332 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:38:55.332 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:38:55.332 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:38:55.332 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:38:55.332 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:55.332 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:38:55.332 12:07:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:38:55.332 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:55.332 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:55.332 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:38:55.332 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:55.332 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:55.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:55.332 --rc genhtml_branch_coverage=1 00:38:55.332 --rc genhtml_function_coverage=1 00:38:55.332 --rc genhtml_legend=1 00:38:55.332 --rc geninfo_all_blocks=1 00:38:55.332 --rc geninfo_unexecuted_blocks=1 00:38:55.332 00:38:55.332 ' 00:38:55.332 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:55.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:55.332 --rc genhtml_branch_coverage=1 00:38:55.332 --rc genhtml_function_coverage=1 00:38:55.332 --rc genhtml_legend=1 00:38:55.332 --rc geninfo_all_blocks=1 00:38:55.332 --rc geninfo_unexecuted_blocks=1 00:38:55.332 00:38:55.332 ' 00:38:55.332 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:55.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:55.332 --rc genhtml_branch_coverage=1 00:38:55.332 --rc genhtml_function_coverage=1 00:38:55.332 --rc genhtml_legend=1 00:38:55.332 --rc geninfo_all_blocks=1 00:38:55.332 --rc 
geninfo_unexecuted_blocks=1 00:38:55.332 00:38:55.332 ' 00:38:55.332 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:55.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:55.332 --rc genhtml_branch_coverage=1 00:38:55.332 --rc genhtml_function_coverage=1 00:38:55.332 --rc genhtml_legend=1 00:38:55.332 --rc geninfo_all_blocks=1 00:38:55.332 --rc geninfo_unexecuted_blocks=1 00:38:55.332 00:38:55.332 ' 00:38:55.332 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:55.332 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:38:55.332 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:55.332 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:55.332 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:55.332 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:55.332 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:55.332 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:55.332 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:55.332 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:55.332 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:55.332 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:55.332 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:38:55.332 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:38:55.332 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:55.332 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:55.332 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:55.332 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:55.332 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:55.332 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:38:55.332 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:55.332 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:55.332 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:55.332 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:55.332 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:55.332 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:55.332 
12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:38:55.332 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:55.332 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:38:55.332 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:55.332 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:55.332 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:55.332 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:55.332 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:55.332 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:55.332 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:55.332 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:55.332 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:55.332 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:55.332 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:38:55.332 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:55.332 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:55.333 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:55.333 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:55.333 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:55.333 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:55.333 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:55.333 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:55.333 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:55.333 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:55.333 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:38:55.333 12:07:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:57.865 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:57.865 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:38:57.865 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:57.865 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:57.865 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:57.865 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:57.865 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:57.865 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:38:57.865 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:57.865 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:38:57.865 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:38:57.865 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:38:57.865 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:38:57.865 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:38:57.865 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@322 -- # local -ga mlx 00:38:57.865 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:57.865 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:57.865 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:57.865 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:57.865 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:57.865 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:57.865 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:57.865 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:57.865 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:57.865 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:57.865 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:57.865 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:57.865 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:57.865 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:57.865 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:57.865 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:57.865 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:57.865 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:57.866 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:57.866 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:38:57.866 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:38:57.866 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:57.866 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:57.866 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:57.866 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:57.866 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:57.866 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:57.866 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 
0000:0a:00.1 (0x8086 - 0x159b)' 00:38:57.866 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:38:57.866 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:57.866 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:57.866 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:57.866 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:57.866 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:57.866 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:57.866 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:57.866 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:57.866 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:57.866 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:57.866 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:57.866 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:57.866 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:57.866 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:57.866 12:07:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:57.866 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:38:57.866 Found net devices under 0000:0a:00.0: cvl_0_0 00:38:57.866 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:57.866 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:57.866 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:57.866 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:57.866 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:57.866 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:57.866 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:57.866 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:57.866 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:38:57.866 Found net devices under 0000:0a:00.1: cvl_0_1 00:38:57.866 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:57.866 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:57.866 12:07:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:38:57.866 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:57.866 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:57.866 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:57.866 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:57.866 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:57.866 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:57.866 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:57.866 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:57.866 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:57.866 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:57.866 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:57.866 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:57.866 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:57.866 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:57.866 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:57.866 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:57.866 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:57.866 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:57.866 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:57.866 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:57.866 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:57.866 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:57.866 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:57.866 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:57.866 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:57.866 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 
00:38:57.866 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:57.866 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.237 ms 00:38:57.866 00:38:57.866 --- 10.0.0.2 ping statistics --- 00:38:57.866 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:57.866 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:38:57.866 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:57.866 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:57.866 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:38:57.866 00:38:57.866 --- 10.0.0.1 ping statistics --- 00:38:57.866 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:57.866 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:38:57.866 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:57.866 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:38:57.866 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:57.866 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:57.866 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:57.866 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:57.866 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:57.866 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:57.866 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:57.866 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:38:57.866 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:57.866 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:57.866 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:57.866 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=3155707 00:38:57.866 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:38:57.866 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 3155707 00:38:57.866 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 3155707 ']' 00:38:57.866 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:57.866 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:57.866 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:57.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
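The namespace plumbing traced above (nvmf/common.sh@250-291) can be condensed into a short sketch. The interface names (cvl_0_0 / cvl_0_1), namespace name, addresses, and port are taken from the log; the `run` wrapper prints each command instead of executing it, since the real sequence needs root and the physical E810 NICs.

```shell
#!/bin/sh
# Dry-run sketch of the netns topology nvmf_tcp_init builds in the log above.
NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0           # target side: moved into the namespace, gets 10.0.0.2
INI_IF=cvl_0_1           # initiator side: stays in the root namespace, gets 10.0.0.1
run() { echo "$*"; }     # swap for 'sudo "$@"' to apply for real

run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
# open the NVMe/TCP port on the initiator-facing interface
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
# bidirectional reachability check, as in the log's two pings
run ping -c 1 10.0.0.2
run ip netns exec "$NS" ping -c 1 10.0.0.1
```

Isolating the target interface in its own namespace is what lets a single host act as both NVMe-oF target and initiator over real hardware: traffic between 10.0.0.1 and 10.0.0.2 must cross the wire rather than loop back through the kernel.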
00:38:57.867 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:57.867 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:57.867 [2024-11-18 12:07:23.399036] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:57.867 [2024-11-18 12:07:23.401609] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:38:57.867 [2024-11-18 12:07:23.401715] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:57.867 [2024-11-18 12:07:23.549428] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:38:57.867 [2024-11-18 12:07:23.681759] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:57.867 [2024-11-18 12:07:23.681860] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:57.867 [2024-11-18 12:07:23.681889] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:57.867 [2024-11-18 12:07:23.681911] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:57.867 [2024-11-18 12:07:23.681942] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:57.867 [2024-11-18 12:07:23.688545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:57.867 [2024-11-18 12:07:23.688551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:58.433 [2024-11-18 12:07:24.058065] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:38:58.433 [2024-11-18 12:07:24.058852] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:58.433 [2024-11-18 12:07:24.059200] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:38:58.691 12:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:58.691 12:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:38:58.691 12:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:58.691 12:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:58.691 12:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:58.691 12:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:58.691 12:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:58.691 12:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:58.691 12:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:58.691 [2024-11-18 12:07:24.417718] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:58.691 12:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:58.691 12:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # 
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:38:58.691 12:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:58.691 12:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:58.691 12:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:58.691 12:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:58.691 12:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:58.691 12:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:58.691 [2024-11-18 12:07:24.438011] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:58.691 12:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:58.691 12:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:38:58.691 12:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:58.691 12:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:58.691 NULL1 00:38:58.691 12:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:58.691 12:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 
1000000 -t 1000000 -w 1000000 -n 1000000 00:38:58.691 12:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:58.691 12:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:58.691 Delay0 00:38:58.691 12:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:58.691 12:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:58.691 12:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:58.691 12:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:58.691 12:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:58.691 12:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3155858 00:38:58.692 12:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:38:58.692 12:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:38:58.692 [2024-11-18 12:07:24.567104] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
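The RPC calls issued through `rpc_cmd` in delete_subsystem.sh@15-28 amount to the following sequence. Flags are copied from the log; `rpc.py` (SPDK's JSON-RPC client, assumed at its usual `scripts/rpc.py` location) is echoed rather than invoked so the sketch is reviewable without a running nvmf_tgt. The comments on sizes and latency units are reasonable readings of the arguments, not confirmed by the log itself.

```shell
#!/bin/sh
# Dry-run sketch of the delete_subsystem.sh RPC sequence.
NQN=nqn.2016-06.io.spdk:cnode1
rpc() { echo "rpc.py $*"; }   # swap for the real scripts/rpc.py to execute

rpc nvmf_create_transport -t tcp -o -u 8192
rpc nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10
rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
rpc bdev_null_create NULL1 1000 512          # ~1000 MB backing bdev, 512 B blocks
# wrap NULL1 in a delay bdev (latencies in microseconds) so I/O stays in flight
rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc nvmf_subsystem_add_ns "$NQN" Delay0
# spdk_nvme_perf then drives queued I/O at the listener while the test issues:
rpc nvmf_delete_subsystem "$NQN"
```

The delay bdev is the point of the test: with million-microsecond latencies and `-q 128` from perf, the subsystem is deleted while many commands are outstanding, which is exactly what produces the "Read/Write completed with error (sct=0, sc=8)" storm that follows in the log.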
00:39:00.590 12:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:00.590 12:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:00.590 12:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:01.156 Read completed with error (sct=0, sc=8) 00:39:01.156 Read completed with error (sct=0, sc=8) 00:39:01.156 starting I/O failed: -6 00:39:01.156 Read completed with error (sct=0, sc=8) 00:39:01.156 Read completed with error (sct=0, sc=8) 00:39:01.156 Read completed with error (sct=0, sc=8) 00:39:01.156 Read completed with error (sct=0, sc=8) 00:39:01.156 starting I/O failed: -6 00:39:01.156 Read completed with error (sct=0, sc=8) 00:39:01.156 Write completed with error (sct=0, sc=8) 00:39:01.156 Read completed with error (sct=0, sc=8) 00:39:01.156 Read completed with error (sct=0, sc=8) 00:39:01.156 starting I/O failed: -6 00:39:01.156 Read completed with error (sct=0, sc=8) 00:39:01.156 Write completed with error (sct=0, sc=8) 00:39:01.156 Read completed with error (sct=0, sc=8) 00:39:01.156 Read completed with error (sct=0, sc=8) 00:39:01.156 starting I/O failed: -6 00:39:01.156 Write completed with error (sct=0, sc=8) 00:39:01.156 Read completed with error (sct=0, sc=8) 00:39:01.156 Read completed with error (sct=0, sc=8) 00:39:01.156 Read completed with error (sct=0, sc=8) 00:39:01.156 starting I/O failed: -6 00:39:01.156 Write completed with error (sct=0, sc=8) 00:39:01.156 Write completed with error (sct=0, sc=8) 00:39:01.156 Read completed with error (sct=0, sc=8) 00:39:01.156 Read completed with error (sct=0, sc=8) 00:39:01.156 starting I/O failed: -6 00:39:01.156 Read completed with error (sct=0, sc=8) 00:39:01.156 Read completed with error (sct=0, sc=8) 00:39:01.156 Write completed with error (sct=0, sc=8) 
00:39:01.156 Write completed with error (sct=0, sc=8) 00:39:01.156 starting I/O failed: -6 00:39:01.156 Write completed with error (sct=0, sc=8) 00:39:01.156 Read completed with error (sct=0, sc=8) 00:39:01.156 Read completed with error (sct=0, sc=8) 00:39:01.156 Read completed with error (sct=0, sc=8) 00:39:01.156 starting I/O failed: -6 00:39:01.156 Read completed with error (sct=0, sc=8) 00:39:01.156 Write completed with error (sct=0, sc=8) 00:39:01.156 Read completed with error (sct=0, sc=8) 00:39:01.156 Read completed with error (sct=0, sc=8) 00:39:01.156 starting I/O failed: -6 00:39:01.156 Read completed with error (sct=0, sc=8) 00:39:01.156 Read completed with error (sct=0, sc=8) 00:39:01.156 Write completed with error (sct=0, sc=8) 00:39:01.156 Read completed with error (sct=0, sc=8) 00:39:01.156 starting I/O failed: -6 00:39:01.156 Read completed with error (sct=0, sc=8) 00:39:01.156 Read completed with error (sct=0, sc=8) 00:39:01.156 [2024-11-18 12:07:26.750140] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020100 is same with the state(6) to be set 00:39:01.156 Read completed with error (sct=0, sc=8) 00:39:01.156 Write completed with error (sct=0, sc=8) 00:39:01.156 Read completed with error (sct=0, sc=8) 00:39:01.156 Read completed with error (sct=0, sc=8) 00:39:01.156 Read completed with error (sct=0, sc=8) 00:39:01.156 Read completed with error (sct=0, sc=8) 00:39:01.156 Read completed with error (sct=0, sc=8) 00:39:01.156 Read completed with error (sct=0, sc=8) 00:39:01.156 Read completed with error (sct=0, sc=8) 00:39:01.156 Write completed with error (sct=0, sc=8) 00:39:01.156 Read completed with error (sct=0, sc=8) 00:39:01.156 Read completed with error (sct=0, sc=8) 00:39:01.156 Read completed with error (sct=0, sc=8) 00:39:01.156 Read completed with error (sct=0, sc=8) 00:39:01.156 Read completed with error (sct=0, sc=8) 00:39:01.156 Read completed with error (sct=0, sc=8) 00:39:01.156 Read completed 
with error (sct=0, sc=8) 00:39:01.156 Read completed with error (sct=0, sc=8) 00:39:01.156 Write completed with error (sct=0, sc=8) 00:39:01.156 Read completed with error (sct=0, sc=8) 00:39:01.156 Write completed with error (sct=0, sc=8) 00:39:01.156 Read completed with error (sct=0, sc=8) 00:39:01.156 Read completed with error (sct=0, sc=8) 00:39:01.156 Read completed with error (sct=0, sc=8) 00:39:01.156 Read completed with error (sct=0, sc=8) 00:39:01.156 Read completed with error (sct=0, sc=8) 00:39:01.156 Write completed with error (sct=0, sc=8) 00:39:01.156 Read completed with error (sct=0, sc=8) 00:39:01.156 Read completed with error (sct=0, sc=8) 00:39:01.156 Read completed with error (sct=0, sc=8) 00:39:01.156 Read completed with error (sct=0, sc=8) 00:39:01.156 Write completed with error (sct=0, sc=8) 00:39:01.156 Write completed with error (sct=0, sc=8) 00:39:01.156 Read completed with error (sct=0, sc=8) 00:39:01.156 Read completed with error (sct=0, sc=8) 00:39:01.156 Read completed with error (sct=0, sc=8) 00:39:01.156 Read completed with error (sct=0, sc=8) 00:39:01.156 Read completed with error (sct=0, sc=8) 00:39:01.156 Read completed with error (sct=0, sc=8) 00:39:01.156 Write completed with error (sct=0, sc=8) 00:39:01.156 Write completed with error (sct=0, sc=8) 00:39:01.156 Read completed with error (sct=0, sc=8) 00:39:01.156 Write completed with error (sct=0, sc=8) 00:39:01.156 Write completed with error (sct=0, sc=8) 00:39:01.156 Write completed with error (sct=0, sc=8) 00:39:01.156 Read completed with error (sct=0, sc=8) 00:39:01.156 Write completed with error (sct=0, sc=8) 00:39:01.156 Read completed with error (sct=0, sc=8) 00:39:01.156 Write completed with error (sct=0, sc=8) 00:39:01.156 Write completed with error (sct=0, sc=8) 00:39:01.156 Read completed with error (sct=0, sc=8) 00:39:01.156 Read completed with error (sct=0, sc=8) 00:39:01.156 Write completed with error (sct=0, sc=8) 00:39:01.157 Read completed with error (sct=0, sc=8) 
00:39:01.157 Read completed with error (sct=0, sc=8) 00:39:01.157 Read completed with error (sct=0, sc=8) 00:39:01.157 Write completed with error (sct=0, sc=8) 00:39:01.157 Read completed with error (sct=0, sc=8) 00:39:01.157 Read completed with error (sct=0, sc=8) 00:39:01.157 Read completed with error (sct=0, sc=8) 00:39:01.157 Read completed with error (sct=0, sc=8) 00:39:01.157 Read completed with error (sct=0, sc=8) 00:39:01.157 Read completed with error (sct=0, sc=8) 00:39:01.157 Write completed with error (sct=0, sc=8) 00:39:01.157 Read completed with error (sct=0, sc=8) 00:39:01.157 Write completed with error (sct=0, sc=8) 00:39:01.157 Read completed with error (sct=0, sc=8) 00:39:01.157 Read completed with error (sct=0, sc=8) 00:39:01.157 Read completed with error (sct=0, sc=8) 00:39:01.157 Write completed with error (sct=0, sc=8) 00:39:01.157 Read completed with error (sct=0, sc=8) 00:39:01.157 Read completed with error (sct=0, sc=8) 00:39:01.157 Read completed with error (sct=0, sc=8) 00:39:01.157 Read completed with error (sct=0, sc=8) 00:39:01.157 Write completed with error (sct=0, sc=8) 00:39:01.157 Read completed with error (sct=0, sc=8) 00:39:01.157 Read completed with error (sct=0, sc=8) 00:39:01.157 Write completed with error (sct=0, sc=8) 00:39:01.157 Read completed with error (sct=0, sc=8) 00:39:01.157 Read completed with error (sct=0, sc=8) 00:39:01.157 Read completed with error (sct=0, sc=8) 00:39:01.157 Read completed with error (sct=0, sc=8) 00:39:01.157 Read completed with error (sct=0, sc=8) 00:39:01.157 Read completed with error (sct=0, sc=8) 00:39:01.157 Read completed with error (sct=0, sc=8) 00:39:01.157 Write completed with error (sct=0, sc=8) 00:39:01.157 Read completed with error (sct=0, sc=8) 00:39:01.157 Read completed with error (sct=0, sc=8) 00:39:01.157 Write completed with error (sct=0, sc=8) 00:39:01.157 Write completed with error (sct=0, sc=8) 00:39:01.157 Read completed with error (sct=0, sc=8) 00:39:01.157 Read completed 
with error (sct=0, sc=8) 00:39:01.157 Write completed with error (sct=0, sc=8) 00:39:01.157 Write completed with error (sct=0, sc=8) 00:39:01.157 Read completed with error (sct=0, sc=8) 00:39:01.157 Write completed with error (sct=0, sc=8) 00:39:01.157 Read completed with error (sct=0, sc=8) 00:39:01.157 Write completed with error (sct=0, sc=8) 00:39:01.157 Read completed with error (sct=0, sc=8) 00:39:01.157 Read completed with error (sct=0, sc=8) 00:39:01.157 [2024-11-18 12:07:26.751664] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020600 is same with the state(6) to be set 00:39:01.157 Read completed with error (sct=0, sc=8) 00:39:01.157 Write completed with error (sct=0, sc=8) 00:39:01.157 Read completed with error (sct=0, sc=8) 00:39:01.157 starting I/O failed: -6 00:39:01.157 Read completed with error (sct=0, sc=8) 00:39:01.157 Read completed with error (sct=0, sc=8) 00:39:01.157 Read completed with error (sct=0, sc=8) 00:39:01.157 Read completed with error (sct=0, sc=8) 00:39:01.157 starting I/O failed: -6 00:39:01.157 Read completed with error (sct=0, sc=8) 00:39:01.157 Read completed with error (sct=0, sc=8) 00:39:01.157 Write completed with error (sct=0, sc=8) 00:39:01.157 Read completed with error (sct=0, sc=8) 00:39:01.157 starting I/O failed: -6 00:39:01.157 Write completed with error (sct=0, sc=8) 00:39:01.157 Read completed with error (sct=0, sc=8) 00:39:01.157 Read completed with error (sct=0, sc=8) 00:39:01.157 Write completed with error (sct=0, sc=8) 00:39:01.157 starting I/O failed: -6 00:39:01.157 Read completed with error (sct=0, sc=8) 00:39:01.157 Read completed with error (sct=0, sc=8) 00:39:01.157 Read completed with error (sct=0, sc=8) 00:39:01.157 Read completed with error (sct=0, sc=8) 00:39:01.157 starting I/O failed: -6 00:39:01.157 Read completed with error (sct=0, sc=8) 00:39:01.157 Read completed with error (sct=0, sc=8) 00:39:01.157 Read completed with error (sct=0, sc=8) 00:39:01.157 Read 
completed with error (sct=0, sc=8)
00:39:01.157 starting I/O failed: -6
00:39:01.157 Write completed with error (sct=0, sc=8)
00:39:01.157 Read completed with error (sct=0, sc=8)
[repeated "Read completed with error (sct=0, sc=8)" / "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" records trimmed]
00:39:01.157 [2024-11-18 12:07:26.752883] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016880 is same with the state(6) to be set
00:39:02.092 [2024-11-18 12:07:27.715112] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000015c00 is same with the state(6) to be set
[repeated "Read/Write completed with error (sct=0, sc=8)" records trimmed]
00:39:02.092 [2024-11-18 12:07:27.749969] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016380 is same with the state(6) to be set
[repeated "Read/Write completed with error (sct=0, sc=8)" records trimmed]
00:39:02.092 [2024-11-18 12:07:27.750729] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016b00 is same with the state(6) to be set
[repeated "Read/Write completed with error (sct=0, sc=8)" records trimmed]
00:39:02.092 [2024-11-18 12:07:27.751449] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016600 is same with the state(6) to be set
[repeated "Read/Write completed with error (sct=0, sc=8)" records trimmed]
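The status flooding this stretch of the log, `(sct=0, sc=8)`, maps in the NVMe base specification to status code type 0 (generic command status) with status code 0x08, "Command Aborted due to SQ Deletion" — exactly what a test that deletes the subsystem while I/O is in flight should provoke. A minimal illustrative decoder (a hypothetical helper for reading these logs, not an SPDK function; the mapping is a small subset of the spec's generic status codes):

```shell
#!/usr/bin/env bash
# Illustrative decoder for the (sct, sc) pairs in the log above.
# The case table is a small subset of the NVMe base-spec generic
# command status codes; it is NOT an SPDK helper.
decode_status() {
    local sct=$1 sc=$2
    if (( sct == 0 )); then      # status code type 0 = generic command status
        case $sc in
            0) echo "Successful Completion" ;;
            4) echo "Data Transfer Error" ;;
            7) echo "Command Abort Requested" ;;
            8) echo "Command Aborted due to SQ Deletion" ;;
            *) printf 'generic status 0x%02x\n' "$sc" ;;
        esac
    else
        printf 'sct=%d, sc=0x%02x\n' "$sct" "$sc"
    fi
}

decode_status 0 8   # -> Command Aborted due to SQ Deletion
```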
00:39:02.092 12:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:39:02.092 12:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:39:02.092 12:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3155858
00:39:02.092 12:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:39:02.092 Write completed with error (sct=0, sc=8)
00:39:02.092 [2024-11-18 12:07:27.756232] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020380 is same with the state(6) to be set
00:39:02.092 Initializing NVMe Controllers
00:39:02.092 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:39:02.092 Controller IO queue size 128, less than required.
00:39:02.093 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:39:02.093 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:39:02.093 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:39:02.093 Initialization complete. Launching workers.
00:39:02.093 ========================================================
00:39:02.093                                                                      Latency(us)
00:39:02.093 Device Information                                                 :     IOPS   MiB/s    Average        min        max
00:39:02.093 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  2:   194.75    0.10  945197.27    2128.66 1018448.01
00:39:02.093 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  3:   157.19    0.08  870117.54     838.45 1017246.50
00:39:02.093 ========================================================
00:39:02.093 Total                                                              :   351.94    0.17  911664.47     838.45 1018448.01
00:39:02.093
00:39:02.093 [2024-11-18 12:07:27.757928] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000015c00 (9): Bad file descriptor
00:39:02.093 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:39:02.660 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:39:02.660 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3155858
00:39:02.660 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3155858) - No such process
00:39:02.660 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3155858
00:39:02.660 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0
00:39:02.660 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3155858
00:39:02.660 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait
00:39:02.660 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:39:02.660 12:07:28
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:39:02.660 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:02.660 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 3155858 00:39:02.660 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:39:02.660 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:02.660 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:02.660 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:02.660 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:39:02.660 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:02.660 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:02.660 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:02.660 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:02.660 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:02.660 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@10 -- # set +x 00:39:02.660 [2024-11-18 12:07:28.273965] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:02.660 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:02.660 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:02.660 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:02.660 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:02.660 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:02.660 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3156264 00:39:02.660 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:39:02.660 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:39:02.660 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3156264 00:39:02.660 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:02.660 [2024-11-18 12:07:28.384236] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. 
This behavior is deprecated and will be removed in a future release. 00:39:02.918 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:39:02.918 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3156264 00:39:02.918 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:03.483 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:39:03.483 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3156264 00:39:03.483 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:04.047 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:39:04.047 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3156264 00:39:04.047 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:04.612 12:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:39:04.612 12:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3156264 00:39:04.612 12:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:05.218 12:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:39:05.218 12:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 
3156264 00:39:05.218 12:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:05.497 12:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:39:05.497 12:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3156264 00:39:05.497 12:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:05.754 Initializing NVMe Controllers 00:39:05.754 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:39:05.754 Controller IO queue size 128, less than required. 00:39:05.754 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:39:05.754 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:39:05.754 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:39:05.754 Initialization complete. Launching workers. 
00:39:05.754 ========================================================
00:39:05.754                                                                      Latency(us)
00:39:05.754 Device Information                                                 :     IOPS   MiB/s    Average        min        max
00:39:05.754 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  2:   128.00    0.06 1005641.93 1000313.86 1016596.59
00:39:05.754 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  3:   128.00    0.06 1008211.95 1000241.72 1046903.53
00:39:05.754 ========================================================
00:39:05.754 Total                                                              :   256.00    0.12 1006926.94 1000241.72 1046903.53
00:39:05.754
00:39:06.012 12:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:39:06.012 12:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3156264
00:39:06.012 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3156264) - No such process
00:39:06.012 12:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3156264
00:39:06.012 12:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:39:06.012 12:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:39:06.012 12:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:39:06.012 12:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:39:06.012 12:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:39:06.012 12:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:39:06.012 12:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem --
nvmf/common.sh@125 -- # for i in {1..20} 00:39:06.012 12:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:06.012 rmmod nvme_tcp 00:39:06.012 rmmod nvme_fabrics 00:39:06.012 rmmod nvme_keyring 00:39:06.012 12:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:06.012 12:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:39:06.012 12:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:39:06.012 12:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 3155707 ']' 00:39:06.012 12:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 3155707 00:39:06.012 12:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 3155707 ']' 00:39:06.012 12:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 3155707 00:39:06.012 12:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:39:06.012 12:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:06.012 12:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3155707 00:39:06.269 12:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:06.269 12:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:06.269 12:07:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3155707' 00:39:06.269 killing process with pid 3155707 00:39:06.269 12:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 3155707 00:39:06.269 12:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 3155707 00:39:07.203 12:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:07.203 12:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:07.203 12:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:07.203 12:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:39:07.203 12:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:07.203 12:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:39:07.203 12:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:39:07.203 12:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:07.203 12:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:07.203 12:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:07.203 12:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:07.203 12:07:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:09.105 12:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:09.105 00:39:09.105 real 0m13.985s 00:39:09.105 user 0m26.410s 00:39:09.105 sys 0m3.976s 00:39:09.105 12:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:09.105 12:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:09.105 ************************************ 00:39:09.105 END TEST nvmf_delete_subsystem 00:39:09.105 ************************************ 00:39:09.364 12:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:39:09.364 12:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:09.364 12:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:09.364 12:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:09.364 ************************************ 00:39:09.364 START TEST nvmf_host_management 00:39:09.364 ************************************ 00:39:09.364 12:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:39:09.364 * Looking for test storage... 
00:39:09.364 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:09.364 12:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:09.364 12:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:39:09.364 12:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:09.364 12:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:09.364 12:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:09.364 12:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:09.364 12:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:09.364 12:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:39:09.364 12:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:39:09.364 12:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:39:09.364 12:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:39:09.364 12:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:39:09.364 12:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:39:09.364 12:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:39:09.364 12:07:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:09.364 12:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:39:09.364 12:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:39:09.364 12:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:09.364 12:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:09.364 12:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:39:09.364 12:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:39:09.364 12:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:09.364 12:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:39:09.364 12:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:39:09.364 12:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:39:09.364 12:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:39:09.364 12:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:09.364 12:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:39:09.364 12:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:39:09.364 12:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:09.364 12:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:09.364 12:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:39:09.364 12:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:09.364 12:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:09.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:09.364 --rc genhtml_branch_coverage=1 00:39:09.364 --rc genhtml_function_coverage=1 00:39:09.364 --rc genhtml_legend=1 00:39:09.364 --rc geninfo_all_blocks=1 00:39:09.364 --rc geninfo_unexecuted_blocks=1 00:39:09.364 00:39:09.364 ' 00:39:09.364 12:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:09.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:09.365 --rc genhtml_branch_coverage=1 00:39:09.365 --rc genhtml_function_coverage=1 00:39:09.365 --rc genhtml_legend=1 00:39:09.365 --rc geninfo_all_blocks=1 00:39:09.365 --rc geninfo_unexecuted_blocks=1 00:39:09.365 00:39:09.365 ' 00:39:09.365 12:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:09.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:09.365 --rc genhtml_branch_coverage=1 00:39:09.365 --rc genhtml_function_coverage=1 00:39:09.365 --rc genhtml_legend=1 00:39:09.365 --rc geninfo_all_blocks=1 00:39:09.365 --rc geninfo_unexecuted_blocks=1 00:39:09.365 00:39:09.365 ' 00:39:09.365 12:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:09.365 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:09.365 --rc genhtml_branch_coverage=1 00:39:09.365 --rc genhtml_function_coverage=1 00:39:09.365 --rc genhtml_legend=1 00:39:09.365 --rc geninfo_all_blocks=1 00:39:09.365 --rc geninfo_unexecuted_blocks=1 00:39:09.365 00:39:09.365 ' 00:39:09.365 12:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:09.365 12:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:39:09.365 12:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:09.365 12:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:09.365 12:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:09.365 12:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:09.365 12:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:09.365 12:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:09.365 12:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:09.365 12:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:09.365 12:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:09.365 12:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:09.365 12:07:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:09.365 12:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:09.365 12:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:09.365 12:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:09.365 12:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:09.365 12:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:09.365 12:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:09.365 12:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:39:09.365 12:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:09.365 12:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:09.365 12:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:09.365 12:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:09.365 12:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:09.365 12:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:09.365 
12:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:39:09.365 12:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:09.365 12:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:39:09.365 12:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:09.365 12:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:09.365 12:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:09.365 12:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:09.365 12:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:09.365 12:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:09.365 12:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:09.365 12:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 
-n '' ']' 00:39:09.365 12:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:09.365 12:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:09.365 12:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:09.365 12:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:09.365 12:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:39:09.365 12:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:09.365 12:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:09.365 12:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:09.365 12:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:09.365 12:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:09.365 12:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:09.365 12:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:09.365 12:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:09.365 12:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:09.365 12:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:09.365 12:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:39:09.365 12:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:11.265 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:11.265 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:39:11.265 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:11.265 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:11.265 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:11.265 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:11.265 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:11.265 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:39:11.265 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:11.265 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:39:11.265 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:39:11.265 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:39:11.265 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:39:11.265 
12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:39:11.265 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:39:11.265 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:11.265 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:11.265 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:11.265 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:11.265 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:11.265 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:11.265 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:11.265 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:11.265 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:11.265 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:11.265 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:11.265 12:07:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:11.265 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:11.265 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:11.265 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:11.265 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:11.265 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:11.265 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:11.265 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:11.265 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:39:11.265 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:39:11.265 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:11.265 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:11.265 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:11.265 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:11.265 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:11.265 12:07:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:11.265 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:39:11.265 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:39:11.266 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:11.266 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:11.266 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:11.266 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:11.266 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:11.266 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:11.266 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:11.266 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:11.266 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:11.266 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:11.266 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:11.266 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:11.266 12:07:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:11.266 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:11.266 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:11.266 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:39:11.266 Found net devices under 0000:0a:00.0: cvl_0_0 00:39:11.266 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:11.266 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:11.266 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:11.266 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:11.266 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:11.266 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:11.266 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:11.266 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:11.266 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:39:11.266 Found net devices under 0000:0a:00.1: cvl_0_1 00:39:11.266 12:07:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:11.266 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:11.266 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:39:11.266 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:11.266 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:11.266 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:11.266 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:11.266 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:11.266 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:11.266 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:11.266 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:11.266 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:11.266 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:11.266 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:11.266 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:39:11.266 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:11.266 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:11.266 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:11.266 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:11.266 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:11.266 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:11.524 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:11.524 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:11.524 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:11.524 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:11.524 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:11.524 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:11.524 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:11.525 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:11.525 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:11.525 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.211 ms 00:39:11.525 00:39:11.525 --- 10.0.0.2 ping statistics --- 00:39:11.525 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:11.525 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:39:11.525 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:11.525 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:11.525 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.094 ms 00:39:11.525 00:39:11.525 --- 10.0.0.1 ping statistics --- 00:39:11.525 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:11.525 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:39:11.525 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:11.525 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:39:11.525 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:11.525 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:11.525 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:11.525 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:11.525 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
00:39:11.525 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:11.525 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:11.525 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:39:11.525 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:39:11.525 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:39:11.525 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:11.525 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:11.525 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:11.525 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=3158730 00:39:11.525 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:39:11.525 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 3158730 00:39:11.525 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3158730 ']' 00:39:11.525 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:11.525 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:39:11.525 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:11.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:11.525 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:11.525 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:11.525 [2024-11-18 12:07:37.387793] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:11.525 [2024-11-18 12:07:37.390189] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:39:11.525 [2024-11-18 12:07:37.390281] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:11.782 [2024-11-18 12:07:37.531420] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:11.782 [2024-11-18 12:07:37.663098] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:11.782 [2024-11-18 12:07:37.663179] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:11.782 [2024-11-18 12:07:37.663209] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:11.782 [2024-11-18 12:07:37.663230] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:11.782 [2024-11-18 12:07:37.663252] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:39:11.782 [2024-11-18 12:07:37.666036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:11.782 [2024-11-18 12:07:37.666144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:39:11.782 [2024-11-18 12:07:37.666183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:11.782 [2024-11-18 12:07:37.666194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:39:12.348 [2024-11-18 12:07:38.033803] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:12.348 [2024-11-18 12:07:38.043866] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:12.348 [2024-11-18 12:07:38.044140] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:39:12.348 [2024-11-18 12:07:38.044977] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:12.348 [2024-11-18 12:07:38.045340] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:39:12.607 12:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:12.607 12:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:39:12.607 12:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:12.607 12:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:12.607 12:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:12.607 12:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:12.607 12:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:12.607 12:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:12.607 12:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:12.607 [2024-11-18 12:07:38.403330] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:12.607 12:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:12.607 12:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:39:12.607 12:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:12.607 12:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:12.607 12:07:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:39:12.607 12:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:39:12.607 12:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:39:12.607 12:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:12.607 12:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:12.865 Malloc0 00:39:12.865 [2024-11-18 12:07:38.535527] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:12.865 12:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:12.865 12:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:39:12.865 12:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:12.865 12:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:12.865 12:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3158900 00:39:12.865 12:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3158900 /var/tmp/bdevperf.sock 00:39:12.865 12:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3158900 ']' 00:39:12.865 12:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:39:12.865 12:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:39:12.865 12:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:39:12.865 12:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:12.865 12:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:39:12.865 12:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:39:12.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:39:12.865 12:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:39:12.865 12:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:12.865 12:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:12.865 12:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:12.865 12:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:12.865 { 00:39:12.865 "params": { 00:39:12.865 "name": "Nvme$subsystem", 00:39:12.865 "trtype": "$TEST_TRANSPORT", 00:39:12.865 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:12.865 "adrfam": "ipv4", 00:39:12.865 "trsvcid": "$NVMF_PORT", 00:39:12.865 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:12.865 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:12.865 "hdgst": ${hdgst:-false}, 00:39:12.865 "ddgst": ${ddgst:-false} 00:39:12.865 }, 00:39:12.865 "method": "bdev_nvme_attach_controller" 00:39:12.865 } 00:39:12.865 EOF 00:39:12.865 )") 00:39:12.865 12:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:39:12.865 12:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
00:39:12.865 12:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:39:12.865 12:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:12.865 "params": { 00:39:12.865 "name": "Nvme0", 00:39:12.865 "trtype": "tcp", 00:39:12.865 "traddr": "10.0.0.2", 00:39:12.865 "adrfam": "ipv4", 00:39:12.865 "trsvcid": "4420", 00:39:12.865 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:12.865 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:12.865 "hdgst": false, 00:39:12.865 "ddgst": false 00:39:12.865 }, 00:39:12.865 "method": "bdev_nvme_attach_controller" 00:39:12.865 }' 00:39:12.865 [2024-11-18 12:07:38.657483] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:39:12.865 [2024-11-18 12:07:38.657636] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3158900 ] 00:39:13.122 [2024-11-18 12:07:38.797295] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:13.122 [2024-11-18 12:07:38.925895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:13.686 Running I/O for 10 seconds... 
00:39:13.945 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:13.945 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:39:13.945 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:39:13.945 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:13.945 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:13.945 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:13.945 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:39:13.945 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:39:13.945 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:39:13.945 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:39:13.945 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:39:13.945 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:39:13.945 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:39:13.945 12:07:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:39:13.945 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:39:13.945 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:39:13.945 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:13.945 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:13.945 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:13.945 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=387 00:39:13.945 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 387 -ge 100 ']' 00:39:13.945 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:39:13.945 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:39:13.945 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:39:13.945 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:39:13.945 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:13.945 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:13.945 
[2024-11-18 12:07:39.679285] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:13.945 [2024-11-18 12:07:39.679371] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:13.945 [2024-11-18 12:07:39.679395] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:13.945 [2024-11-18 12:07:39.679414] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:13.945 [2024-11-18 12:07:39.679431] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:13.945 [2024-11-18 12:07:39.679449] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:13.945 [2024-11-18 12:07:39.679467] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:13.945 [2024-11-18 12:07:39.679501] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:13.945 [2024-11-18 12:07:39.679522] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:13.945 [2024-11-18 12:07:39.679540] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:13.945 [2024-11-18 12:07:39.679558] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:13.945 [2024-11-18 12:07:39.679575] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the 
state(6) to be set 00:39:13.945 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:13.945 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:39:13.945 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:13.945 [2024-11-18 12:07:39.683925] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:39:13.945 [2024-11-18 12:07:39.684001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.945 [2024-11-18 12:07:39.684031] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:39:13.945 [2024-11-18 12:07:39.684054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.945 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:13.945 [2024-11-18 12:07:39.684075] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:39:13.945 [2024-11-18 12:07:39.684095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.945 [2024-11-18 12:07:39.684116] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:39:13.945 [2024-11-18 12:07:39.684136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:39:13.945 [2024-11-18 12:07:39.684155] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:39:13.945 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:13.945 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:39:13.945 [2024-11-18 12:07:39.693677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.945 [2024-11-18 12:07:39.693717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.945 [2024-11-18 12:07:39.693757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:57472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.945 [2024-11-18 12:07:39.693790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.945 [2024-11-18 12:07:39.693815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:57600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.946 [2024-11-18 12:07:39.693855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.946 [2024-11-18 12:07:39.693881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:57728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.946 [2024-11-18 12:07:39.693903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.946 [2024-11-18 12:07:39.693928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:57856 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.946 [2024-11-18 12:07:39.693950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.946 [2024-11-18 12:07:39.693973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:57984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.946 [2024-11-18 12:07:39.693994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.946 [2024-11-18 12:07:39.694017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:58112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.946 [2024-11-18 12:07:39.694045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.946 [2024-11-18 12:07:39.694069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:58240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.946 [2024-11-18 12:07:39.694090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.946 [2024-11-18 12:07:39.694114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:58368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.946 [2024-11-18 12:07:39.694135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.946 [2024-11-18 12:07:39.694158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:58496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.946 [2024-11-18 12:07:39.694180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.946 
[2024-11-18 12:07:39.694203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:58624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.946 [2024-11-18 12:07:39.694225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.946 [2024-11-18 12:07:39.694248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:58752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.946 [2024-11-18 12:07:39.694270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.946 [2024-11-18 12:07:39.694294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:58880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.946 [2024-11-18 12:07:39.694316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.946 [2024-11-18 12:07:39.694339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:59008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.946 [2024-11-18 12:07:39.694360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.946 [2024-11-18 12:07:39.694384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:59136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.946 [2024-11-18 12:07:39.694405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.946 [2024-11-18 12:07:39.694429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:59264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.946 [2024-11-18 12:07:39.694450] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.946 [2024-11-18 12:07:39.694484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:59392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.946 [2024-11-18 12:07:39.694515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.946 [2024-11-18 12:07:39.694540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:59520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.946 [2024-11-18 12:07:39.694562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.946 [2024-11-18 12:07:39.694585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:59648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.946 [2024-11-18 12:07:39.694607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.946 [2024-11-18 12:07:39.694635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:59776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.946 [2024-11-18 12:07:39.694658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.946 [2024-11-18 12:07:39.694681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:59904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.946 [2024-11-18 12:07:39.694703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.946 [2024-11-18 12:07:39.694726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:21 nsid:1 lba:60032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.946 [2024-11-18 12:07:39.694747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.946 [2024-11-18 12:07:39.694770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:60160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.946 [2024-11-18 12:07:39.694796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.946 [2024-11-18 12:07:39.694820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:60288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.946 [2024-11-18 12:07:39.694841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.946 [2024-11-18 12:07:39.694864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:60416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.946 [2024-11-18 12:07:39.694885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.946 [2024-11-18 12:07:39.694908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:60544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.946 [2024-11-18 12:07:39.694929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.946 [2024-11-18 12:07:39.694953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:60672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.946 [2024-11-18 12:07:39.694974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:39:13.946 [2024-11-18 12:07:39.694997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:60800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.946 [2024-11-18 12:07:39.695018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.946 [2024-11-18 12:07:39.695042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:60928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.946 [2024-11-18 12:07:39.695063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.946 [2024-11-18 12:07:39.695086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:61056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.946 [2024-11-18 12:07:39.695107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.946 [2024-11-18 12:07:39.695130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:61184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.946 [2024-11-18 12:07:39.695151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.946 [2024-11-18 12:07:39.695174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:61312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.946 [2024-11-18 12:07:39.695200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.946 [2024-11-18 12:07:39.695224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:61440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.946 [2024-11-18 
12:07:39.695245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.946 [2024-11-18 12:07:39.695269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:61568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.946 [2024-11-18 12:07:39.695290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.946 [2024-11-18 12:07:39.695314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:61696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.946 [2024-11-18 12:07:39.695335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.946 [2024-11-18 12:07:39.695358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:61824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.946 [2024-11-18 12:07:39.695380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.946 [2024-11-18 12:07:39.695404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:61952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.946 [2024-11-18 12:07:39.695425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.946 [2024-11-18 12:07:39.695448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:62080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.946 [2024-11-18 12:07:39.695469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.946 [2024-11-18 12:07:39.695504] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:62208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.946 [2024-11-18 12:07:39.695527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.946 [2024-11-18 12:07:39.695551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:62336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.946 [2024-11-18 12:07:39.695573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.946 [2024-11-18 12:07:39.695596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:62464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.947 [2024-11-18 12:07:39.695619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.947 [2024-11-18 12:07:39.695642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:62592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.947 [2024-11-18 12:07:39.695663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.947 [2024-11-18 12:07:39.695686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:62720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.947 [2024-11-18 12:07:39.695708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.947 [2024-11-18 12:07:39.695731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:62848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.947 [2024-11-18 12:07:39.695756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.947 [2024-11-18 12:07:39.695791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:62976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.947 [2024-11-18 12:07:39.695813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.947 [2024-11-18 12:07:39.695836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:63104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.947 [2024-11-18 12:07:39.695857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.947 [2024-11-18 12:07:39.695880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:63232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.947 [2024-11-18 12:07:39.695902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.947 [2024-11-18 12:07:39.695925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:63360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.947 [2024-11-18 12:07:39.695946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.947 [2024-11-18 12:07:39.695969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:63488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.947 [2024-11-18 12:07:39.695990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.947 [2024-11-18 12:07:39.696013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:63616 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.947 [2024-11-18 12:07:39.696034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.947 [2024-11-18 12:07:39.696057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:63744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.947 [2024-11-18 12:07:39.696079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.947 [2024-11-18 12:07:39.696103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:63872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.947 [2024-11-18 12:07:39.696124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.947 [2024-11-18 12:07:39.696147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:64000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.947 [2024-11-18 12:07:39.696168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.947 [2024-11-18 12:07:39.696192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:64128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.947 [2024-11-18 12:07:39.696213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.947 [2024-11-18 12:07:39.696236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:64256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.947 [2024-11-18 12:07:39.696257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.947 
[2024-11-18 12:07:39.696280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:64384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.947 [2024-11-18 12:07:39.696302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.947 [2024-11-18 12:07:39.696325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:64512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.947 [2024-11-18 12:07:39.696351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.947 [2024-11-18 12:07:39.696375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:64640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.947 [2024-11-18 12:07:39.696397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.947 [2024-11-18 12:07:39.696419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:64768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.947 [2024-11-18 12:07:39.696441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.947 [2024-11-18 12:07:39.696464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:64896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.947 [2024-11-18 12:07:39.696503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.947 [2024-11-18 12:07:39.696529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:65024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.947 [2024-11-18 12:07:39.696551] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.947 [2024-11-18 12:07:39.696574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:65152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.947 [2024-11-18 12:07:39.696596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.947 [2024-11-18 12:07:39.696619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:65280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.947 [2024-11-18 12:07:39.696640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.947 [2024-11-18 12:07:39.696663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:65408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.947 [2024-11-18 12:07:39.696684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.947 [2024-11-18 12:07:39.697028] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:39:13.947 [2024-11-18 12:07:39.698223] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:39:13.947 task offset: 57344 on job bdev=Nvme0n1 fails 00:39:13.947 00:39:13.947 Latency(us) 00:39:13.947 [2024-11-18T11:07:39.832Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:13.947 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:39:13.947 Job: Nvme0n1 ended in about 0.35 seconds with error 00:39:13.947 Verification LBA range: start 0x0 length 0x400 00:39:13.947 Nvme0n1 : 0.35 1293.54 80.85 184.79 0.00 41809.23 3907.89 41554.68 
00:39:13.947 [2024-11-18T11:07:39.832Z] =================================================================================================================== 00:39:13.947 [2024-11-18T11:07:39.832Z] Total : 1293.54 80.85 184.79 0.00 41809.23 3907.89 41554.68 00:39:13.947 [2024-11-18 12:07:39.703008] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:39:13.947 [2024-11-18 12:07:39.750245] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:39:14.880 12:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3158900 00:39:14.880 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3158900) - No such process 00:39:14.880 12:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:39:14.880 12:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:39:14.880 12:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:39:14.880 12:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:39:14.880 12:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:39:14.880 12:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:39:14.880 12:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:14.880 12:07:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:14.880 { 00:39:14.880 "params": { 00:39:14.880 "name": "Nvme$subsystem", 00:39:14.880 "trtype": "$TEST_TRANSPORT", 00:39:14.880 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:14.880 "adrfam": "ipv4", 00:39:14.880 "trsvcid": "$NVMF_PORT", 00:39:14.880 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:14.880 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:14.880 "hdgst": ${hdgst:-false}, 00:39:14.880 "ddgst": ${ddgst:-false} 00:39:14.880 }, 00:39:14.880 "method": "bdev_nvme_attach_controller" 00:39:14.880 } 00:39:14.880 EOF 00:39:14.880 )") 00:39:14.880 12:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:39:14.880 12:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:39:14.880 12:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:39:14.880 12:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:14.880 "params": { 00:39:14.880 "name": "Nvme0", 00:39:14.880 "trtype": "tcp", 00:39:14.880 "traddr": "10.0.0.2", 00:39:14.880 "adrfam": "ipv4", 00:39:14.880 "trsvcid": "4420", 00:39:14.880 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:14.880 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:14.880 "hdgst": false, 00:39:14.880 "ddgst": false 00:39:14.880 }, 00:39:14.880 "method": "bdev_nvme_attach_controller" 00:39:14.880 }' 00:39:15.138 [2024-11-18 12:07:40.777758] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:39:15.138 [2024-11-18 12:07:40.777902] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3159177 ] 00:39:15.138 [2024-11-18 12:07:40.912531] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:15.396 [2024-11-18 12:07:41.042981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:15.961 Running I/O for 1 seconds... 00:39:16.896 1280.00 IOPS, 80.00 MiB/s 00:39:16.896 Latency(us) 00:39:16.896 [2024-11-18T11:07:42.781Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:16.896 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:39:16.896 Verification LBA range: start 0x0 length 0x400 00:39:16.896 Nvme0n1 : 1.03 1306.87 81.68 0.00 0.00 48167.50 7767.23 43884.85 00:39:16.896 [2024-11-18T11:07:42.781Z] =================================================================================================================== 00:39:16.896 [2024-11-18T11:07:42.781Z] Total : 1306.87 81.68 0.00 0.00 48167.50 7767.23 43884.85 00:39:17.830 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:39:17.830 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:39:17.830 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:39:17.830 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:39:17.830 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
target/host_management.sh@40 -- # nvmftestfini 00:39:17.830 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:17.830 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:39:17.830 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:17.830 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:39:17.830 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:17.830 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:17.830 rmmod nvme_tcp 00:39:17.830 rmmod nvme_fabrics 00:39:17.830 rmmod nvme_keyring 00:39:17.830 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:17.830 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:39:17.830 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:39:17.830 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 3158730 ']' 00:39:17.830 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 3158730 00:39:17.830 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 3158730 ']' 00:39:17.830 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 3158730 00:39:17.830 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:39:17.830 12:07:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:17.830 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3158730 00:39:17.830 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:17.830 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:17.830 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3158730' 00:39:17.830 killing process with pid 3158730 00:39:17.830 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 3158730 00:39:17.830 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 3158730 00:39:19.205 [2024-11-18 12:07:44.796230] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:39:19.205 12:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:19.205 12:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:19.205 12:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:19.205 12:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:39:19.205 12:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:39:19.205 12:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:19.205 12:07:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:39:19.205 12:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:19.205 12:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:19.205 12:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:19.205 12:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:19.205 12:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:21.109 12:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:21.109 12:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:39:21.109 00:39:21.109 real 0m11.932s 00:39:21.109 user 0m25.847s 00:39:21.109 sys 0m4.542s 00:39:21.109 12:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:21.109 12:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:21.109 ************************************ 00:39:21.109 END TEST nvmf_host_management 00:39:21.109 ************************************ 00:39:21.109 12:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:39:21.109 12:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:21.109 
12:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:21.109 12:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:21.368 ************************************ 00:39:21.368 START TEST nvmf_lvol 00:39:21.368 ************************************ 00:39:21.368 12:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:39:21.368 * Looking for test storage... 00:39:21.368 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:21.368 12:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:21.368 12:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:39:21.368 12:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:21.368 12:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:21.368 12:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:21.368 12:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:21.368 12:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:21.368 12:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:39:21.368 12:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:39:21.368 12:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:39:21.368 12:07:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:39:21.368 12:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:39:21.368 12:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:39:21.368 12:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:39:21.368 12:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:21.368 12:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:39:21.368 12:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:39:21.369 12:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:21.369 12:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:21.369 12:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:39:21.369 12:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:39:21.369 12:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:21.369 12:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:39:21.369 12:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:39:21.369 12:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:39:21.369 12:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:39:21.369 12:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:21.369 12:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:39:21.369 12:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:39:21.369 12:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:21.369 12:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:21.369 12:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:39:21.369 12:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:21.369 12:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:21.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:21.369 --rc genhtml_branch_coverage=1 00:39:21.369 --rc 
genhtml_function_coverage=1 00:39:21.369 --rc genhtml_legend=1 00:39:21.369 --rc geninfo_all_blocks=1 00:39:21.369 --rc geninfo_unexecuted_blocks=1 00:39:21.369 00:39:21.369 ' 00:39:21.369 12:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:21.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:21.369 --rc genhtml_branch_coverage=1 00:39:21.369 --rc genhtml_function_coverage=1 00:39:21.369 --rc genhtml_legend=1 00:39:21.369 --rc geninfo_all_blocks=1 00:39:21.369 --rc geninfo_unexecuted_blocks=1 00:39:21.369 00:39:21.369 ' 00:39:21.369 12:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:21.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:21.369 --rc genhtml_branch_coverage=1 00:39:21.369 --rc genhtml_function_coverage=1 00:39:21.369 --rc genhtml_legend=1 00:39:21.369 --rc geninfo_all_blocks=1 00:39:21.369 --rc geninfo_unexecuted_blocks=1 00:39:21.369 00:39:21.369 ' 00:39:21.369 12:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:21.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:21.369 --rc genhtml_branch_coverage=1 00:39:21.369 --rc genhtml_function_coverage=1 00:39:21.369 --rc genhtml_legend=1 00:39:21.369 --rc geninfo_all_blocks=1 00:39:21.369 --rc geninfo_unexecuted_blocks=1 00:39:21.369 00:39:21.369 ' 00:39:21.369 12:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:21.369 12:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:39:21.369 12:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:21.369 12:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:39:21.369 12:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:21.369 12:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:21.369 12:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:21.369 12:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:21.369 12:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:21.369 12:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:21.369 12:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:21.369 12:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:21.369 12:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:21.369 12:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:21.369 12:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:21.369 12:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:21.369 12:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:21.369 12:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:21.369 12:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:21.369 12:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:39:21.369 12:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:21.369 12:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:21.369 12:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:21.369 12:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:21.369 12:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:21.369 12:07:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:21.369 12:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:39:21.369 12:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:21.369 12:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:39:21.369 12:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:21.369 12:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:21.369 12:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:21.369 12:07:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:21.369 12:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:21.369 12:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:21.369 12:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:21.369 12:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:21.369 12:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:21.369 12:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:21.369 12:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:21.369 12:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:21.369 12:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:39:21.369 12:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:39:21.369 12:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:21.369 12:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:39:21.369 12:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:21.369 12:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:21.369 12:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # 
prepare_net_devs 00:39:21.369 12:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:21.369 12:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:21.369 12:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:21.369 12:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:21.369 12:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:21.370 12:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:21.370 12:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:21.370 12:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:39:21.370 12:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:39:23.272 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:23.272 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:39:23.272 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:23.272 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:23.272 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:23.272 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:23.272 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:39:23.272 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:39:23.272 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:23.272 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:39:23.272 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:39:23.272 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:39:23.272 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:39:23.272 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:39:23.272 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:39:23.272 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:23.272 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:23.272 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:23.272 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:23.272 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:23.272 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:23.272 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:23.272 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:23.272 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:23.272 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:23.272 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:23.272 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:23.272 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:23.272 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:23.272 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:23.272 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:23.272 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:23.272 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:23.272 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:23.272 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:39:23.272 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:39:23.272 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:23.272 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:23.272 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- 
# [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:23.272 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:23.272 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:23.272 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:23.272 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:39:23.272 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:39:23.272 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:23.272 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:23.272 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:23.272 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:23.272 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:23.272 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:23.272 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:23.272 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:23.272 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:23.272 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:23.272 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:23.272 12:07:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:23.272 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:23.272 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:23.272 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:23.272 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:39:23.272 Found net devices under 0000:0a:00.0: cvl_0_0 00:39:23.272 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:23.272 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:23.272 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:23.272 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:23.272 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:23.272 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:23.272 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:23.272 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:23.272 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:39:23.272 Found net devices under 0000:0a:00.1: cvl_0_1 00:39:23.272 12:07:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:23.272 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:23.272 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:39:23.272 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:23.272 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:23.272 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:23.272 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:23.272 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:23.272 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:23.272 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:23.272 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:23.272 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:23.272 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:23.272 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:23.272 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:23.272 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:23.272 12:07:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:23.272 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:23.272 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:23.272 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:23.272 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:23.542 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:23.542 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:23.542 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:23.542 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:23.542 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:23.542 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:23.542 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:23.542 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:23.542 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:39:23.542 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.268 ms 00:39:23.542 00:39:23.542 --- 10.0.0.2 ping statistics --- 00:39:23.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:23.542 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:39:23.542 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:23.542 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:23.542 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:39:23.542 00:39:23.542 --- 10.0.0.1 ping statistics --- 00:39:23.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:23.542 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:39:23.542 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:23.542 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:39:23.542 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:23.542 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:23.542 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:23.542 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:23.542 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:23.542 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:23.542 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:23.542 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:39:23.542 
12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:23.542 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:23.542 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:39:23.542 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=3161519 00:39:23.542 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 3161519 00:39:23.542 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:39:23.542 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 3161519 ']' 00:39:23.542 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:23.542 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:23.542 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:23.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:23.542 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:23.542 12:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:39:23.542 [2024-11-18 12:07:49.402373] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:39:23.542 [2024-11-18 12:07:49.405185] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:39:23.542 [2024-11-18 12:07:49.405279] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:23.799 [2024-11-18 12:07:49.559481] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:39:24.057 [2024-11-18 12:07:49.703930] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:24.057 [2024-11-18 12:07:49.703995] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:24.057 [2024-11-18 12:07:49.704021] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:24.057 [2024-11-18 12:07:49.704043] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:24.057 [2024-11-18 12:07:49.704063] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:24.057 [2024-11-18 12:07:49.706577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:24.057 [2024-11-18 12:07:49.706621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:24.057 [2024-11-18 12:07:49.706629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:24.314 [2024-11-18 12:07:50.079087] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:24.315 [2024-11-18 12:07:50.080197] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:24.315 [2024-11-18 12:07:50.081021] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:39:24.315 [2024-11-18 12:07:50.081360] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:39:24.572 12:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:24.572 12:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:39:24.572 12:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:24.572 12:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:24.572 12:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:39:24.572 12:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:24.572 12:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:39:24.832 [2024-11-18 12:07:50.643762] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:24.832 12:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:25.398 12:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:39:25.398 12:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:25.656 12:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:39:25.656 12:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:39:25.914 12:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:39:26.172 12:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=113fbc31-58af-46fb-ae96-03c716e62498 00:39:26.172 12:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 113fbc31-58af-46fb-ae96-03c716e62498 lvol 20 00:39:26.430 12:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=71e04374-e364-45d1-a998-626bbcacf253 00:39:26.430 12:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:39:26.699 12:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 71e04374-e364-45d1-a998-626bbcacf253 00:39:26.956 12:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:27.213 [2024-11-18 12:07:53.051977] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:27.213 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:27.471 
12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3162071 00:39:27.471 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:39:27.471 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:39:28.845 12:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 71e04374-e364-45d1-a998-626bbcacf253 MY_SNAPSHOT 00:39:28.845 12:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=da6d9d39-17d0-4d70-aa93-0d15d9431124 00:39:28.845 12:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 71e04374-e364-45d1-a998-626bbcacf253 30 00:39:29.103 12:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone da6d9d39-17d0-4d70-aa93-0d15d9431124 MY_CLONE 00:39:29.669 12:07:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=d8e3d38d-94d4-44cd-9788-6a402a25c9ab 00:39:29.669 12:07:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate d8e3d38d-94d4-44cd-9788-6a402a25c9ab 00:39:30.234 12:07:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3162071 00:39:38.343 Initializing NVMe Controllers 00:39:38.343 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:39:38.343 
Controller IO queue size 128, less than required. 00:39:38.343 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:39:38.343 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:39:38.343 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:39:38.343 Initialization complete. Launching workers. 00:39:38.343 ======================================================== 00:39:38.343 Latency(us) 00:39:38.343 Device Information : IOPS MiB/s Average min max 00:39:38.343 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 8282.30 32.35 15458.94 480.16 179351.07 00:39:38.343 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 8047.00 31.43 15922.77 3353.20 211050.18 00:39:38.343 ======================================================== 00:39:38.343 Total : 16329.30 63.79 15687.51 480.16 211050.18 00:39:38.343 00:39:38.343 12:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:38.343 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 71e04374-e364-45d1-a998-626bbcacf253 00:39:38.601 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 113fbc31-58af-46fb-ae96-03c716e62498 00:39:39.254 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:39:39.254 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:39:39.254 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- 
# nvmftestfini 00:39:39.254 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:39.254 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:39:39.254 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:39.254 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:39:39.254 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:39.254 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:39.254 rmmod nvme_tcp 00:39:39.254 rmmod nvme_fabrics 00:39:39.254 rmmod nvme_keyring 00:39:39.254 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:39.254 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:39:39.254 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:39:39.254 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 3161519 ']' 00:39:39.254 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 3161519 00:39:39.254 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 3161519 ']' 00:39:39.254 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 3161519 00:39:39.254 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:39:39.254 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:39.254 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 3161519 00:39:39.254 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:39.254 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:39.254 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3161519' 00:39:39.254 killing process with pid 3161519 00:39:39.254 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 3161519 00:39:39.254 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 3161519 00:39:40.645 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:40.645 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:40.645 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:40.645 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:39:40.645 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:39:40.645 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:40.645 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:39:40.645 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:40.645 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:40.645 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:40.645 12:08:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:40.645 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:42.546 12:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:42.546 00:39:42.546 real 0m21.420s 00:39:42.546 user 0m58.674s 00:39:42.546 sys 0m7.685s 00:39:42.546 12:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:42.546 12:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:39:42.546 ************************************ 00:39:42.546 END TEST nvmf_lvol 00:39:42.546 ************************************ 00:39:42.805 12:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:39:42.805 12:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:42.805 12:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:42.805 12:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:42.805 ************************************ 00:39:42.805 START TEST nvmf_lvs_grow 00:39:42.805 ************************************ 00:39:42.805 12:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:39:42.805 * Looking for test storage... 
00:39:42.805 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:42.806 12:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:42.806 12:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:39:42.806 12:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:42.806 12:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:42.806 12:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:42.806 12:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:42.806 12:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:42.806 12:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:39:42.806 12:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:39:42.806 12:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:39:42.806 12:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:39:42.806 12:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:39:42.806 12:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:39:42.806 12:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:39:42.806 12:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:42.806 12:08:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:39:42.806 12:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:39:42.806 12:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:42.806 12:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:42.806 12:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:39:42.806 12:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:39:42.806 12:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:42.806 12:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:39:42.806 12:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:39:42.806 12:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:39:42.806 12:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:39:42.806 12:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:42.806 12:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:39:42.806 12:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:39:42.806 12:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:42.806 12:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:42.806 12:08:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:39:42.806 12:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:42.806 12:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:42.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:42.806 --rc genhtml_branch_coverage=1 00:39:42.806 --rc genhtml_function_coverage=1 00:39:42.806 --rc genhtml_legend=1 00:39:42.806 --rc geninfo_all_blocks=1 00:39:42.806 --rc geninfo_unexecuted_blocks=1 00:39:42.806 00:39:42.806 ' 00:39:42.806 12:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:42.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:42.806 --rc genhtml_branch_coverage=1 00:39:42.806 --rc genhtml_function_coverage=1 00:39:42.806 --rc genhtml_legend=1 00:39:42.806 --rc geninfo_all_blocks=1 00:39:42.806 --rc geninfo_unexecuted_blocks=1 00:39:42.806 00:39:42.806 ' 00:39:42.806 12:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:42.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:42.806 --rc genhtml_branch_coverage=1 00:39:42.806 --rc genhtml_function_coverage=1 00:39:42.806 --rc genhtml_legend=1 00:39:42.806 --rc geninfo_all_blocks=1 00:39:42.806 --rc geninfo_unexecuted_blocks=1 00:39:42.806 00:39:42.806 ' 00:39:42.806 12:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:42.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:42.806 --rc genhtml_branch_coverage=1 00:39:42.806 --rc genhtml_function_coverage=1 00:39:42.806 --rc genhtml_legend=1 00:39:42.806 --rc geninfo_all_blocks=1 00:39:42.806 --rc 
geninfo_unexecuted_blocks=1 00:39:42.806 00:39:42.806 ' 00:39:42.806 12:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:42.806 12:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:39:42.806 12:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:42.806 12:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:42.806 12:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:42.806 12:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:42.806 12:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:42.806 12:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:42.806 12:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:42.806 12:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:42.806 12:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:42.806 12:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:42.806 12:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:42.806 12:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:42.806 12:08:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:42.806 12:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:42.806 12:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:42.806 12:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:42.806 12:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:42.806 12:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:39:42.806 12:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:42.806 12:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:42.806 12:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:42.806 12:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:42.807 12:08:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:42.807 12:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:42.807 12:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:39:42.807 12:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:42.807 12:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:39:42.807 12:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:42.807 12:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:42.807 12:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:42.807 12:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:42.807 12:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:42.807 12:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:42.807 12:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:42.807 12:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:42.807 12:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:42.807 12:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:42.807 12:08:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:42.807 12:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:39:42.807 12:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:39:42.807 12:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:42.807 12:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:42.807 12:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:42.807 12:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:42.807 12:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:42.807 12:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:42.807 12:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:42.807 12:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:42.807 12:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:42.807 12:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:42.807 12:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:39:42.807 12:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:39:44.708 
12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:44.708 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:39:44.708 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:44.708 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:44.708 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:44.708 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:44.708 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:44.708 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:39:44.708 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:44.708 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:39:44.708 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:39:44.708 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:39:44.708 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:39:44.708 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:39:44.708 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:39:44.708 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:44.708 12:08:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:44.708 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:44.708 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:44.708 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:44.708 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:44.708 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:44.708 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:44.708 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:44.708 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:44.708 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:44.708 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:44.708 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:44.708 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:44.708 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:44.708 12:08:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:44.708 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:44.708 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:44.708 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:44.708 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:39:44.708 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:39:44.708 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:44.708 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:44.708 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:44.708 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:44.708 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:44.708 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:44.708 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:39:44.708 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:39:44.708 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:44.708 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:44.708 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:39:44.708 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:44.708 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:44.708 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:44.708 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:44.708 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:44.708 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:44.708 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:44.708 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:44.708 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:44.708 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:44.708 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:44.708 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:44.708 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:39:44.708 Found net devices under 0000:0a:00.0: cvl_0_0 00:39:44.708 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:44.708 12:08:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:44.708 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:44.708 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:44.708 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:44.708 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:44.708 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:44.708 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:44.708 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:39:44.708 Found net devices under 0000:0a:00.1: cvl_0_1 00:39:44.709 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:44.709 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:44.709 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:39:44.709 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:44.709 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:44.709 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:44.709 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:44.709 
12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:44.709 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:44.709 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:44.709 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:44.709 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:44.709 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:44.709 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:44.709 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:44.709 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:44.709 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:44.709 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:44.709 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:44.709 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:44.709 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:44.968 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:39:44.968 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:44.968 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:44.968 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:44.968 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:44.968 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:44.968 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:44.968 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:44.968 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:44.968 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:39:44.968 00:39:44.968 --- 10.0.0.2 ping statistics --- 00:39:44.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:44.968 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:39:44.968 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:44.968 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:44.968 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.094 ms 00:39:44.968 00:39:44.968 --- 10.0.0.1 ping statistics --- 00:39:44.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:44.968 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:39:44.968 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:44.968 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:39:44.968 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:44.968 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:44.968 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:44.968 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:44.968 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:44.968 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:44.968 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:44.968 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:39:44.968 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:44.968 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:44.968 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:39:44.968 12:08:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=3165457 00:39:44.968 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:39:44.968 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 3165457 00:39:44.968 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 3165457 ']' 00:39:44.968 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:44.968 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:44.968 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:44.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:44.968 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:44.968 12:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:39:44.968 [2024-11-18 12:08:10.801121] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:44.968 [2024-11-18 12:08:10.803667] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:39:44.968 [2024-11-18 12:08:10.803759] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:45.226 [2024-11-18 12:08:10.956804] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:45.226 [2024-11-18 12:08:11.077294] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:45.226 [2024-11-18 12:08:11.077377] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:45.226 [2024-11-18 12:08:11.077418] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:45.226 [2024-11-18 12:08:11.077436] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:45.226 [2024-11-18 12:08:11.077455] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:45.226 [2024-11-18 12:08:11.078926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:45.791 [2024-11-18 12:08:11.404128] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:45.791 [2024-11-18 12:08:11.404565] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:39:46.048 12:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:46.048 12:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:39:46.048 12:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:46.048 12:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:46.048 12:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:39:46.048 12:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:46.048 12:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:39:46.306 [2024-11-18 12:08:12.023989] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:46.306 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:39:46.306 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:46.306 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:46.306 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:39:46.306 ************************************ 00:39:46.306 START TEST lvs_grow_clean 00:39:46.306 ************************************ 00:39:46.306 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:39:46.306 12:08:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:39:46.306 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:39:46.306 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:39:46.306 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:39:46.306 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:39:46.306 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:39:46.306 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:39:46.306 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:39:46.306 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:39:46.564 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:39:46.564 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:39:46.822 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=4d0e6c4b-a502-4e84-9521-274ad2b88dd7 00:39:46.822 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4d0e6c4b-a502-4e84-9521-274ad2b88dd7 00:39:46.822 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:39:47.080 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:39:47.080 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:39:47.080 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 4d0e6c4b-a502-4e84-9521-274ad2b88dd7 lvol 150 00:39:47.338 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=1e2251ca-4399-491b-b510-ba5456957895 00:39:47.338 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:39:47.338 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:39:47.596 [2024-11-18 12:08:13.447792] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:39:47.596 [2024-11-18 12:08:13.447963] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:39:47.596 true 00:39:47.596 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:39:47.596 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4d0e6c4b-a502-4e84-9521-274ad2b88dd7 00:39:47.854 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:39:47.854 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:39:48.420 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1e2251ca-4399-491b-b510-ba5456957895 00:39:48.420 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:48.678 [2024-11-18 12:08:14.536186] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:48.678 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:49.244 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3166015 00:39:49.244 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:39:49.244 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:39:49.244 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3166015 /var/tmp/bdevperf.sock 00:39:49.244 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 3166015 ']' 00:39:49.244 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:39:49.244 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:49.244 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:39:49.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:39:49.244 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:49.244 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:39:49.244 [2024-11-18 12:08:14.908755] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:39:49.244 [2024-11-18 12:08:14.908914] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3166015 ] 00:39:49.244 [2024-11-18 12:08:15.048419] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:49.502 [2024-11-18 12:08:15.180159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:50.068 12:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:50.068 12:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:39:50.068 12:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:39:50.634 Nvme0n1 00:39:50.634 12:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:39:50.892 [ 00:39:50.892 { 00:39:50.892 "name": "Nvme0n1", 00:39:50.892 "aliases": [ 00:39:50.892 "1e2251ca-4399-491b-b510-ba5456957895" 00:39:50.892 ], 00:39:50.892 "product_name": "NVMe disk", 00:39:50.892 
"block_size": 4096, 00:39:50.892 "num_blocks": 38912, 00:39:50.892 "uuid": "1e2251ca-4399-491b-b510-ba5456957895", 00:39:50.892 "numa_id": 0, 00:39:50.892 "assigned_rate_limits": { 00:39:50.892 "rw_ios_per_sec": 0, 00:39:50.892 "rw_mbytes_per_sec": 0, 00:39:50.892 "r_mbytes_per_sec": 0, 00:39:50.892 "w_mbytes_per_sec": 0 00:39:50.892 }, 00:39:50.892 "claimed": false, 00:39:50.892 "zoned": false, 00:39:50.892 "supported_io_types": { 00:39:50.892 "read": true, 00:39:50.892 "write": true, 00:39:50.892 "unmap": true, 00:39:50.892 "flush": true, 00:39:50.892 "reset": true, 00:39:50.892 "nvme_admin": true, 00:39:50.892 "nvme_io": true, 00:39:50.892 "nvme_io_md": false, 00:39:50.892 "write_zeroes": true, 00:39:50.892 "zcopy": false, 00:39:50.892 "get_zone_info": false, 00:39:50.892 "zone_management": false, 00:39:50.892 "zone_append": false, 00:39:50.892 "compare": true, 00:39:50.892 "compare_and_write": true, 00:39:50.892 "abort": true, 00:39:50.892 "seek_hole": false, 00:39:50.892 "seek_data": false, 00:39:50.892 "copy": true, 00:39:50.892 "nvme_iov_md": false 00:39:50.892 }, 00:39:50.892 "memory_domains": [ 00:39:50.892 { 00:39:50.892 "dma_device_id": "system", 00:39:50.892 "dma_device_type": 1 00:39:50.892 } 00:39:50.892 ], 00:39:50.892 "driver_specific": { 00:39:50.892 "nvme": [ 00:39:50.892 { 00:39:50.892 "trid": { 00:39:50.892 "trtype": "TCP", 00:39:50.892 "adrfam": "IPv4", 00:39:50.892 "traddr": "10.0.0.2", 00:39:50.892 "trsvcid": "4420", 00:39:50.892 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:39:50.892 }, 00:39:50.892 "ctrlr_data": { 00:39:50.892 "cntlid": 1, 00:39:50.892 "vendor_id": "0x8086", 00:39:50.892 "model_number": "SPDK bdev Controller", 00:39:50.892 "serial_number": "SPDK0", 00:39:50.892 "firmware_revision": "25.01", 00:39:50.892 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:50.892 "oacs": { 00:39:50.892 "security": 0, 00:39:50.892 "format": 0, 00:39:50.892 "firmware": 0, 00:39:50.892 "ns_manage": 0 00:39:50.892 }, 00:39:50.892 "multi_ctrlr": true, 
00:39:50.892 "ana_reporting": false 00:39:50.892 }, 00:39:50.892 "vs": { 00:39:50.892 "nvme_version": "1.3" 00:39:50.892 }, 00:39:50.892 "ns_data": { 00:39:50.892 "id": 1, 00:39:50.892 "can_share": true 00:39:50.892 } 00:39:50.892 } 00:39:50.892 ], 00:39:50.892 "mp_policy": "active_passive" 00:39:50.892 } 00:39:50.892 } 00:39:50.892 ] 00:39:50.892 12:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3166165 00:39:50.892 12:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:39:50.892 12:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:39:50.892 Running I/O for 10 seconds... 00:39:52.265 Latency(us) 00:39:52.265 [2024-11-18T11:08:18.150Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:52.265 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:52.265 Nvme0n1 : 1.00 10414.00 40.68 0.00 0.00 0.00 0.00 0.00 00:39:52.265 [2024-11-18T11:08:18.150Z] =================================================================================================================== 00:39:52.265 [2024-11-18T11:08:18.150Z] Total : 10414.00 40.68 0.00 0.00 0.00 0.00 0.00 00:39:52.265 00:39:52.831 12:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 4d0e6c4b-a502-4e84-9521-274ad2b88dd7 00:39:53.089 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:53.089 Nvme0n1 : 2.00 10541.00 41.18 0.00 0.00 0.00 0.00 0.00 00:39:53.089 [2024-11-18T11:08:18.974Z] 
=================================================================================================================== 00:39:53.089 [2024-11-18T11:08:18.974Z] Total : 10541.00 41.18 0.00 0.00 0.00 0.00 0.00 00:39:53.089 00:39:53.089 true 00:39:53.089 12:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:39:53.089 12:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4d0e6c4b-a502-4e84-9521-274ad2b88dd7 00:39:53.347 12:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:39:53.347 12:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:39:53.347 12:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3166165 00:39:53.912 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:53.912 Nvme0n1 : 3.00 10625.67 41.51 0.00 0.00 0.00 0.00 0.00 00:39:53.912 [2024-11-18T11:08:19.797Z] =================================================================================================================== 00:39:53.912 [2024-11-18T11:08:19.797Z] Total : 10625.67 41.51 0.00 0.00 0.00 0.00 0.00 00:39:53.912 00:39:55.286 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:55.286 Nvme0n1 : 4.00 10636.25 41.55 0.00 0.00 0.00 0.00 0.00 00:39:55.286 [2024-11-18T11:08:21.171Z] =================================================================================================================== 00:39:55.286 [2024-11-18T11:08:21.171Z] Total : 10636.25 41.55 0.00 0.00 0.00 0.00 0.00 00:39:55.286 00:39:56.220 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
size: 4096) 00:39:56.220 Nvme0n1 : 5.00 10668.00 41.67 0.00 0.00 0.00 0.00 0.00 00:39:56.220 [2024-11-18T11:08:22.105Z] =================================================================================================================== 00:39:56.220 [2024-11-18T11:08:22.105Z] Total : 10668.00 41.67 0.00 0.00 0.00 0.00 0.00 00:39:56.220 00:39:57.155 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:57.155 Nvme0n1 : 6.00 10710.33 41.84 0.00 0.00 0.00 0.00 0.00 00:39:57.155 [2024-11-18T11:08:23.040Z] =================================================================================================================== 00:39:57.155 [2024-11-18T11:08:23.040Z] Total : 10710.33 41.84 0.00 0.00 0.00 0.00 0.00 00:39:57.155 00:39:58.089 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:58.089 Nvme0n1 : 7.00 10722.43 41.88 0.00 0.00 0.00 0.00 0.00 00:39:58.089 [2024-11-18T11:08:23.974Z] =================================================================================================================== 00:39:58.089 [2024-11-18T11:08:23.974Z] Total : 10722.43 41.88 0.00 0.00 0.00 0.00 0.00 00:39:58.089 00:39:59.023 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:59.023 Nvme0n1 : 8.00 10747.38 41.98 0.00 0.00 0.00 0.00 0.00 00:39:59.023 [2024-11-18T11:08:24.908Z] =================================================================================================================== 00:39:59.023 [2024-11-18T11:08:24.908Z] Total : 10747.38 41.98 0.00 0.00 0.00 0.00 0.00 00:39:59.023 00:39:59.957 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:59.957 Nvme0n1 : 9.00 10837.33 42.33 0.00 0.00 0.00 0.00 0.00 00:39:59.957 [2024-11-18T11:08:25.842Z] =================================================================================================================== 00:39:59.957 [2024-11-18T11:08:25.842Z] Total : 10837.33 42.33 0.00 0.00 0.00 0.00 0.00 00:39:59.957 
00:40:00.891 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:00.891 Nvme0n1 : 10.00 10883.90 42.52 0.00 0.00 0.00 0.00 0.00 00:40:00.891 [2024-11-18T11:08:26.776Z] =================================================================================================================== 00:40:00.891 [2024-11-18T11:08:26.776Z] Total : 10883.90 42.52 0.00 0.00 0.00 0.00 0.00 00:40:00.891 00:40:00.891 00:40:00.891 Latency(us) 00:40:00.891 [2024-11-18T11:08:26.776Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:00.891 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:00.891 Nvme0n1 : 10.01 10885.81 42.52 0.00 0.00 11751.65 10145.94 27185.30 00:40:00.891 [2024-11-18T11:08:26.776Z] =================================================================================================================== 00:40:00.891 [2024-11-18T11:08:26.776Z] Total : 10885.81 42.52 0.00 0.00 11751.65 10145.94 27185.30 00:40:00.891 { 00:40:00.891 "results": [ 00:40:00.891 { 00:40:00.891 "job": "Nvme0n1", 00:40:00.891 "core_mask": "0x2", 00:40:00.891 "workload": "randwrite", 00:40:00.891 "status": "finished", 00:40:00.891 "queue_depth": 128, 00:40:00.891 "io_size": 4096, 00:40:00.891 "runtime": 10.010005, 00:40:00.891 "iops": 10885.80874834728, 00:40:00.891 "mibps": 42.52269042323156, 00:40:00.891 "io_failed": 0, 00:40:00.891 "io_timeout": 0, 00:40:00.891 "avg_latency_us": 11751.648086892768, 00:40:00.891 "min_latency_us": 10145.943703703704, 00:40:00.891 "max_latency_us": 27185.303703703703 00:40:00.891 } 00:40:00.891 ], 00:40:00.891 "core_count": 1 00:40:00.891 } 00:40:00.891 12:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3166015 00:40:00.891 12:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 3166015 ']' 00:40:00.891 12:08:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 3166015 00:40:01.149 12:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:40:01.149 12:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:01.150 12:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3166015 00:40:01.150 12:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:40:01.150 12:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:40:01.150 12:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3166015' 00:40:01.150 killing process with pid 3166015 00:40:01.150 12:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 3166015 00:40:01.150 Received shutdown signal, test time was about 10.000000 seconds 00:40:01.150 00:40:01.150 Latency(us) 00:40:01.150 [2024-11-18T11:08:27.035Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:01.150 [2024-11-18T11:08:27.035Z] =================================================================================================================== 00:40:01.150 [2024-11-18T11:08:27.035Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:40:01.150 12:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 3166015 00:40:02.084 12:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:40:02.343 12:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:40:02.601 12:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4d0e6c4b-a502-4e84-9521-274ad2b88dd7 00:40:02.601 12:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:40:02.859 12:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:40:02.859 12:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:40:02.859 12:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:40:03.118 [2024-11-18 12:08:28.779943] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:40:03.118 12:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4d0e6c4b-a502-4e84-9521-274ad2b88dd7 00:40:03.118 12:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:40:03.118 12:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4d0e6c4b-a502-4e84-9521-274ad2b88dd7 00:40:03.118 12:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:03.118 12:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:03.118 12:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:03.118 12:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:03.118 12:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:03.118 12:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:03.118 12:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:03.118 12:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:40:03.118 12:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4d0e6c4b-a502-4e84-9521-274ad2b88dd7 00:40:03.376 request: 00:40:03.376 { 00:40:03.376 "uuid": "4d0e6c4b-a502-4e84-9521-274ad2b88dd7", 00:40:03.376 "method": 
"bdev_lvol_get_lvstores", 00:40:03.376 "req_id": 1 00:40:03.376 } 00:40:03.376 Got JSON-RPC error response 00:40:03.376 response: 00:40:03.376 { 00:40:03.376 "code": -19, 00:40:03.376 "message": "No such device" 00:40:03.376 } 00:40:03.376 12:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:40:03.376 12:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:40:03.376 12:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:40:03.376 12:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:40:03.376 12:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:40:03.633 aio_bdev 00:40:03.633 12:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 1e2251ca-4399-491b-b510-ba5456957895 00:40:03.633 12:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=1e2251ca-4399-491b-b510-ba5456957895 00:40:03.633 12:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:40:03.633 12:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:40:03.633 12:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:40:03.633 12:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:40:03.633 12:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:40:03.891 12:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 1e2251ca-4399-491b-b510-ba5456957895 -t 2000 00:40:04.149 [ 00:40:04.149 { 00:40:04.149 "name": "1e2251ca-4399-491b-b510-ba5456957895", 00:40:04.149 "aliases": [ 00:40:04.149 "lvs/lvol" 00:40:04.149 ], 00:40:04.149 "product_name": "Logical Volume", 00:40:04.149 "block_size": 4096, 00:40:04.149 "num_blocks": 38912, 00:40:04.149 "uuid": "1e2251ca-4399-491b-b510-ba5456957895", 00:40:04.149 "assigned_rate_limits": { 00:40:04.149 "rw_ios_per_sec": 0, 00:40:04.149 "rw_mbytes_per_sec": 0, 00:40:04.149 "r_mbytes_per_sec": 0, 00:40:04.149 "w_mbytes_per_sec": 0 00:40:04.149 }, 00:40:04.149 "claimed": false, 00:40:04.149 "zoned": false, 00:40:04.149 "supported_io_types": { 00:40:04.149 "read": true, 00:40:04.149 "write": true, 00:40:04.149 "unmap": true, 00:40:04.149 "flush": false, 00:40:04.149 "reset": true, 00:40:04.149 "nvme_admin": false, 00:40:04.149 "nvme_io": false, 00:40:04.149 "nvme_io_md": false, 00:40:04.149 "write_zeroes": true, 00:40:04.149 "zcopy": false, 00:40:04.149 "get_zone_info": false, 00:40:04.149 "zone_management": false, 00:40:04.149 "zone_append": false, 00:40:04.149 "compare": false, 00:40:04.149 "compare_and_write": false, 00:40:04.149 "abort": false, 00:40:04.149 "seek_hole": true, 00:40:04.149 "seek_data": true, 00:40:04.149 "copy": false, 00:40:04.149 "nvme_iov_md": false 00:40:04.149 }, 00:40:04.149 "driver_specific": { 00:40:04.149 "lvol": { 00:40:04.149 "lvol_store_uuid": "4d0e6c4b-a502-4e84-9521-274ad2b88dd7", 00:40:04.149 "base_bdev": "aio_bdev", 00:40:04.149 
"thin_provision": false, 00:40:04.149 "num_allocated_clusters": 38, 00:40:04.149 "snapshot": false, 00:40:04.149 "clone": false, 00:40:04.149 "esnap_clone": false 00:40:04.149 } 00:40:04.149 } 00:40:04.149 } 00:40:04.149 ] 00:40:04.149 12:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:40:04.149 12:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4d0e6c4b-a502-4e84-9521-274ad2b88dd7 00:40:04.149 12:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:40:04.407 12:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:40:04.408 12:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4d0e6c4b-a502-4e84-9521-274ad2b88dd7 00:40:04.408 12:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:40:04.666 12:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:40:04.666 12:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 1e2251ca-4399-491b-b510-ba5456957895 00:40:04.986 12:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4d0e6c4b-a502-4e84-9521-274ad2b88dd7 
00:40:05.244 12:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:40:05.503 12:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:40:05.503 00:40:05.503 real 0m19.317s 00:40:05.503 user 0m19.228s 00:40:05.503 sys 0m1.851s 00:40:05.503 12:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:05.503 12:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:40:05.503 ************************************ 00:40:05.503 END TEST lvs_grow_clean 00:40:05.503 ************************************ 00:40:05.761 12:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:40:05.761 12:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:40:05.761 12:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:05.762 12:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:40:05.762 ************************************ 00:40:05.762 START TEST lvs_grow_dirty 00:40:05.762 ************************************ 00:40:05.762 12:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:40:05.762 12:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:40:05.762 12:08:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:40:05.762 12:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:40:05.762 12:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:40:05.762 12:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:40:05.762 12:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:40:05.762 12:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:40:05.762 12:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:40:05.762 12:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:40:06.021 12:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:40:06.021 12:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:40:06.279 12:08:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=a0566b74-6fe5-4fcc-8077-9436f96236ea 00:40:06.279 12:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a0566b74-6fe5-4fcc-8077-9436f96236ea 00:40:06.279 12:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:40:06.537 12:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:40:06.537 12:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:40:06.537 12:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a0566b74-6fe5-4fcc-8077-9436f96236ea lvol 150 00:40:06.795 12:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=fd9dfd98-5018-4ba5-b27f-6399ca095372 00:40:06.795 12:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:40:06.795 12:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:40:07.053 [2024-11-18 12:08:32.895792] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:40:07.053 [2024-11-18 
12:08:32.895965] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:40:07.053 true 00:40:07.053 12:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:40:07.053 12:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a0566b74-6fe5-4fcc-8077-9436f96236ea 00:40:07.311 12:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:40:07.311 12:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:40:07.878 12:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 fd9dfd98-5018-4ba5-b27f-6399ca095372 00:40:07.878 12:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:08.136 [2024-11-18 12:08:33.988201] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:08.136 12:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:40:08.394 12:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3168306 00:40:08.394 12:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:40:08.394 12:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:40:08.394 12:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3168306 /var/tmp/bdevperf.sock 00:40:08.394 12:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3168306 ']' 00:40:08.394 12:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:40:08.394 12:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:08.394 12:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:40:08.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:40:08.394 12:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:08.394 12:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:40:08.652 [2024-11-18 12:08:34.355536] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:40:08.652 [2024-11-18 12:08:34.355670] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3168306 ] 00:40:08.652 [2024-11-18 12:08:34.496369] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:08.910 [2024-11-18 12:08:34.625622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:09.843 12:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:09.843 12:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:40:09.843 12:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:40:09.843 Nvme0n1 00:40:09.843 12:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:40:10.101 [ 00:40:10.101 { 00:40:10.101 "name": "Nvme0n1", 00:40:10.101 "aliases": [ 00:40:10.101 "fd9dfd98-5018-4ba5-b27f-6399ca095372" 00:40:10.101 ], 00:40:10.101 "product_name": "NVMe disk", 00:40:10.101 "block_size": 4096, 00:40:10.101 "num_blocks": 38912, 00:40:10.101 "uuid": "fd9dfd98-5018-4ba5-b27f-6399ca095372", 00:40:10.101 "numa_id": 0, 00:40:10.101 "assigned_rate_limits": { 00:40:10.101 "rw_ios_per_sec": 0, 00:40:10.101 "rw_mbytes_per_sec": 0, 00:40:10.101 "r_mbytes_per_sec": 0, 00:40:10.101 "w_mbytes_per_sec": 0 00:40:10.101 }, 00:40:10.101 "claimed": false, 00:40:10.101 "zoned": false, 
00:40:10.101 "supported_io_types": { 00:40:10.101 "read": true, 00:40:10.101 "write": true, 00:40:10.101 "unmap": true, 00:40:10.101 "flush": true, 00:40:10.101 "reset": true, 00:40:10.101 "nvme_admin": true, 00:40:10.101 "nvme_io": true, 00:40:10.101 "nvme_io_md": false, 00:40:10.101 "write_zeroes": true, 00:40:10.101 "zcopy": false, 00:40:10.101 "get_zone_info": false, 00:40:10.101 "zone_management": false, 00:40:10.101 "zone_append": false, 00:40:10.101 "compare": true, 00:40:10.101 "compare_and_write": true, 00:40:10.101 "abort": true, 00:40:10.101 "seek_hole": false, 00:40:10.101 "seek_data": false, 00:40:10.101 "copy": true, 00:40:10.101 "nvme_iov_md": false 00:40:10.101 }, 00:40:10.101 "memory_domains": [ 00:40:10.101 { 00:40:10.101 "dma_device_id": "system", 00:40:10.101 "dma_device_type": 1 00:40:10.101 } 00:40:10.101 ], 00:40:10.101 "driver_specific": { 00:40:10.101 "nvme": [ 00:40:10.101 { 00:40:10.101 "trid": { 00:40:10.101 "trtype": "TCP", 00:40:10.101 "adrfam": "IPv4", 00:40:10.101 "traddr": "10.0.0.2", 00:40:10.101 "trsvcid": "4420", 00:40:10.101 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:40:10.101 }, 00:40:10.101 "ctrlr_data": { 00:40:10.101 "cntlid": 1, 00:40:10.101 "vendor_id": "0x8086", 00:40:10.101 "model_number": "SPDK bdev Controller", 00:40:10.101 "serial_number": "SPDK0", 00:40:10.101 "firmware_revision": "25.01", 00:40:10.101 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:10.101 "oacs": { 00:40:10.101 "security": 0, 00:40:10.101 "format": 0, 00:40:10.101 "firmware": 0, 00:40:10.101 "ns_manage": 0 00:40:10.101 }, 00:40:10.101 "multi_ctrlr": true, 00:40:10.101 "ana_reporting": false 00:40:10.101 }, 00:40:10.101 "vs": { 00:40:10.101 "nvme_version": "1.3" 00:40:10.101 }, 00:40:10.101 "ns_data": { 00:40:10.101 "id": 1, 00:40:10.101 "can_share": true 00:40:10.101 } 00:40:10.101 } 00:40:10.101 ], 00:40:10.101 "mp_policy": "active_passive" 00:40:10.101 } 00:40:10.101 } 00:40:10.101 ] 00:40:10.101 12:08:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3168455 00:40:10.101 12:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:40:10.101 12:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:40:10.359 Running I/O for 10 seconds... 00:40:11.293 Latency(us) 00:40:11.293 [2024-11-18T11:08:37.178Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:11.293 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:11.294 Nvme0n1 : 1.00 10414.00 40.68 0.00 0.00 0.00 0.00 0.00 00:40:11.294 [2024-11-18T11:08:37.179Z] =================================================================================================================== 00:40:11.294 [2024-11-18T11:08:37.179Z] Total : 10414.00 40.68 0.00 0.00 0.00 0.00 0.00 00:40:11.294 00:40:12.227 12:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u a0566b74-6fe5-4fcc-8077-9436f96236ea 00:40:12.485 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:12.485 Nvme0n1 : 2.00 10541.00 41.18 0.00 0.00 0.00 0.00 0.00 00:40:12.485 [2024-11-18T11:08:38.370Z] =================================================================================================================== 00:40:12.485 [2024-11-18T11:08:38.370Z] Total : 10541.00 41.18 0.00 0.00 0.00 0.00 0.00 00:40:12.485 00:40:12.485 true 00:40:12.485 12:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u a0566b74-6fe5-4fcc-8077-9436f96236ea 00:40:12.485 12:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:40:12.743 12:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:40:12.743 12:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:40:12.743 12:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3168455 00:40:13.380 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:13.380 Nvme0n1 : 3.00 10583.33 41.34 0.00 0.00 0.00 0.00 0.00 00:40:13.380 [2024-11-18T11:08:39.265Z] =================================================================================================================== 00:40:13.380 [2024-11-18T11:08:39.265Z] Total : 10583.33 41.34 0.00 0.00 0.00 0.00 0.00 00:40:13.380 00:40:14.317 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:14.317 Nvme0n1 : 4.00 10708.25 41.83 0.00 0.00 0.00 0.00 0.00 00:40:14.317 [2024-11-18T11:08:40.202Z] =================================================================================================================== 00:40:14.317 [2024-11-18T11:08:40.202Z] Total : 10708.25 41.83 0.00 0.00 0.00 0.00 0.00 00:40:14.317 00:40:15.251 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:15.251 Nvme0n1 : 5.00 10700.20 41.80 0.00 0.00 0.00 0.00 0.00 00:40:15.251 [2024-11-18T11:08:41.136Z] =================================================================================================================== 00:40:15.251 [2024-11-18T11:08:41.136Z] Total : 10700.20 41.80 0.00 0.00 0.00 0.00 0.00 00:40:15.251 00:40:16.635 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:40:16.635 Nvme0n1 : 6.00 10716.00 41.86 0.00 0.00 0.00 0.00 0.00 00:40:16.635 [2024-11-18T11:08:42.520Z] =================================================================================================================== 00:40:16.635 [2024-11-18T11:08:42.520Z] Total : 10716.00 41.86 0.00 0.00 0.00 0.00 0.00 00:40:16.635 00:40:17.570 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:17.570 Nvme0n1 : 7.00 10745.43 41.97 0.00 0.00 0.00 0.00 0.00 00:40:17.571 [2024-11-18T11:08:43.456Z] =================================================================================================================== 00:40:17.571 [2024-11-18T11:08:43.456Z] Total : 10745.43 41.97 0.00 0.00 0.00 0.00 0.00 00:40:17.571 00:40:18.505 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:18.505 Nvme0n1 : 8.00 10753.75 42.01 0.00 0.00 0.00 0.00 0.00 00:40:18.505 [2024-11-18T11:08:44.390Z] =================================================================================================================== 00:40:18.505 [2024-11-18T11:08:44.390Z] Total : 10753.75 42.01 0.00 0.00 0.00 0.00 0.00 00:40:18.505 00:40:19.438 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:19.438 Nvme0n1 : 9.00 10772.44 42.08 0.00 0.00 0.00 0.00 0.00 00:40:19.438 [2024-11-18T11:08:45.323Z] =================================================================================================================== 00:40:19.438 [2024-11-18T11:08:45.323Z] Total : 10772.44 42.08 0.00 0.00 0.00 0.00 0.00 00:40:19.438 00:40:20.369 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:20.369 Nvme0n1 : 10.00 10787.40 42.14 0.00 0.00 0.00 0.00 0.00 00:40:20.369 [2024-11-18T11:08:46.254Z] =================================================================================================================== 00:40:20.369 [2024-11-18T11:08:46.254Z] Total : 10787.40 42.14 0.00 0.00 0.00 0.00 0.00 00:40:20.369 00:40:20.369 
00:40:20.369 Latency(us) 00:40:20.369 [2024-11-18T11:08:46.254Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:20.369 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:20.369 Nvme0n1 : 10.01 10790.72 42.15 0.00 0.00 11855.12 6140.97 27185.30 00:40:20.369 [2024-11-18T11:08:46.254Z] =================================================================================================================== 00:40:20.369 [2024-11-18T11:08:46.254Z] Total : 10790.72 42.15 0.00 0.00 11855.12 6140.97 27185.30 00:40:20.369 { 00:40:20.369 "results": [ 00:40:20.369 { 00:40:20.369 "job": "Nvme0n1", 00:40:20.369 "core_mask": "0x2", 00:40:20.369 "workload": "randwrite", 00:40:20.369 "status": "finished", 00:40:20.369 "queue_depth": 128, 00:40:20.369 "io_size": 4096, 00:40:20.369 "runtime": 10.008788, 00:40:20.369 "iops": 10790.71711779688, 00:40:20.369 "mibps": 42.15123874139406, 00:40:20.369 "io_failed": 0, 00:40:20.369 "io_timeout": 0, 00:40:20.369 "avg_latency_us": 11855.121273007975, 00:40:20.369 "min_latency_us": 6140.965925925926, 00:40:20.369 "max_latency_us": 27185.303703703703 00:40:20.369 } 00:40:20.369 ], 00:40:20.369 "core_count": 1 00:40:20.369 } 00:40:20.369 12:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3168306 00:40:20.369 12:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 3168306 ']' 00:40:20.369 12:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 3168306 00:40:20.369 12:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:40:20.369 12:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:20.369 12:08:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3168306 00:40:20.369 12:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:40:20.369 12:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:40:20.369 12:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3168306' 00:40:20.369 killing process with pid 3168306 00:40:20.369 12:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 3168306 00:40:20.369 Received shutdown signal, test time was about 10.000000 seconds 00:40:20.369 00:40:20.369 Latency(us) 00:40:20.369 [2024-11-18T11:08:46.254Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:20.369 [2024-11-18T11:08:46.254Z] =================================================================================================================== 00:40:20.369 [2024-11-18T11:08:46.254Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:40:20.369 12:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 3168306 00:40:21.303 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:40:21.561 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:40:21.819 12:08:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a0566b74-6fe5-4fcc-8077-9436f96236ea 00:40:21.819 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:40:22.078 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:40:22.078 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:40:22.078 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3165457 00:40:22.078 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3165457 00:40:22.336 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3165457 Killed "${NVMF_APP[@]}" "$@" 00:40:22.336 12:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:40:22.336 12:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:40:22.336 12:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:22.336 12:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:22.336 12:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:40:22.336 12:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=3169893 00:40:22.336 12:08:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:40:22.336 12:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 3169893 00:40:22.336 12:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3169893 ']' 00:40:22.336 12:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:22.336 12:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:22.336 12:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:22.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:22.336 12:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:22.336 12:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:40:22.336 [2024-11-18 12:08:48.104404] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:22.336 [2024-11-18 12:08:48.107067] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:40:22.336 [2024-11-18 12:08:48.107163] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:22.595 [2024-11-18 12:08:48.261273] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:22.595 [2024-11-18 12:08:48.394447] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:22.595 [2024-11-18 12:08:48.394550] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:22.595 [2024-11-18 12:08:48.394581] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:22.595 [2024-11-18 12:08:48.394603] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:22.595 [2024-11-18 12:08:48.394626] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:22.595 [2024-11-18 12:08:48.396257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:23.161 [2024-11-18 12:08:48.768781] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:23.161 [2024-11-18 12:08:48.769261] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:40:23.420 12:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:23.420 12:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:40:23.420 12:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:23.420 12:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:23.420 12:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:40:23.420 12:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:23.420 12:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:40:23.677 [2024-11-18 12:08:49.352185] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:40:23.677 [2024-11-18 12:08:49.352395] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:40:23.677 [2024-11-18 12:08:49.352468] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:40:23.677 12:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:40:23.677 12:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev fd9dfd98-5018-4ba5-b27f-6399ca095372 00:40:23.677 12:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local 
bdev_name=fd9dfd98-5018-4ba5-b27f-6399ca095372 00:40:23.677 12:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:40:23.677 12:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:40:23.677 12:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:40:23.677 12:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:40:23.677 12:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:40:23.934 12:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b fd9dfd98-5018-4ba5-b27f-6399ca095372 -t 2000 00:40:24.193 [ 00:40:24.193 { 00:40:24.193 "name": "fd9dfd98-5018-4ba5-b27f-6399ca095372", 00:40:24.193 "aliases": [ 00:40:24.193 "lvs/lvol" 00:40:24.193 ], 00:40:24.193 "product_name": "Logical Volume", 00:40:24.193 "block_size": 4096, 00:40:24.193 "num_blocks": 38912, 00:40:24.193 "uuid": "fd9dfd98-5018-4ba5-b27f-6399ca095372", 00:40:24.193 "assigned_rate_limits": { 00:40:24.193 "rw_ios_per_sec": 0, 00:40:24.193 "rw_mbytes_per_sec": 0, 00:40:24.193 "r_mbytes_per_sec": 0, 00:40:24.193 "w_mbytes_per_sec": 0 00:40:24.193 }, 00:40:24.193 "claimed": false, 00:40:24.193 "zoned": false, 00:40:24.193 "supported_io_types": { 00:40:24.193 "read": true, 00:40:24.193 "write": true, 00:40:24.193 "unmap": true, 00:40:24.193 "flush": false, 00:40:24.193 "reset": true, 00:40:24.193 "nvme_admin": false, 00:40:24.193 "nvme_io": false, 00:40:24.193 "nvme_io_md": false, 00:40:24.193 "write_zeroes": true, 
00:40:24.193 "zcopy": false, 00:40:24.193 "get_zone_info": false, 00:40:24.193 "zone_management": false, 00:40:24.193 "zone_append": false, 00:40:24.193 "compare": false, 00:40:24.193 "compare_and_write": false, 00:40:24.193 "abort": false, 00:40:24.193 "seek_hole": true, 00:40:24.193 "seek_data": true, 00:40:24.193 "copy": false, 00:40:24.193 "nvme_iov_md": false 00:40:24.193 }, 00:40:24.193 "driver_specific": { 00:40:24.193 "lvol": { 00:40:24.193 "lvol_store_uuid": "a0566b74-6fe5-4fcc-8077-9436f96236ea", 00:40:24.193 "base_bdev": "aio_bdev", 00:40:24.193 "thin_provision": false, 00:40:24.193 "num_allocated_clusters": 38, 00:40:24.193 "snapshot": false, 00:40:24.193 "clone": false, 00:40:24.193 "esnap_clone": false 00:40:24.193 } 00:40:24.193 } 00:40:24.193 } 00:40:24.193 ] 00:40:24.193 12:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:40:24.193 12:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a0566b74-6fe5-4fcc-8077-9436f96236ea 00:40:24.193 12:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:40:24.451 12:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:40:24.451 12:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a0566b74-6fe5-4fcc-8077-9436f96236ea 00:40:24.451 12:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:40:24.710 12:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:40:24.710 12:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:40:24.968 [2024-11-18 12:08:50.721310] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:40:24.968 12:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a0566b74-6fe5-4fcc-8077-9436f96236ea 00:40:24.968 12:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:40:24.968 12:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a0566b74-6fe5-4fcc-8077-9436f96236ea 00:40:24.968 12:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:24.968 12:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:24.968 12:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:24.968 12:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:24.968 12:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:24.968 12:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:24.968 12:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:24.968 12:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:40:24.968 12:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a0566b74-6fe5-4fcc-8077-9436f96236ea 00:40:25.227 request: 00:40:25.227 { 00:40:25.227 "uuid": "a0566b74-6fe5-4fcc-8077-9436f96236ea", 00:40:25.227 "method": "bdev_lvol_get_lvstores", 00:40:25.227 "req_id": 1 00:40:25.227 } 00:40:25.227 Got JSON-RPC error response 00:40:25.227 response: 00:40:25.227 { 00:40:25.227 "code": -19, 00:40:25.227 "message": "No such device" 00:40:25.227 } 00:40:25.227 12:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:40:25.227 12:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:40:25.227 12:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:40:25.227 12:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:40:25.227 12:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:40:25.485 aio_bdev 00:40:25.485 12:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev fd9dfd98-5018-4ba5-b27f-6399ca095372 00:40:25.485 12:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=fd9dfd98-5018-4ba5-b27f-6399ca095372 00:40:25.485 12:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:40:25.485 12:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:40:25.485 12:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:40:25.485 12:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:40:25.485 12:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:40:25.744 12:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b fd9dfd98-5018-4ba5-b27f-6399ca095372 -t 2000 00:40:26.002 [ 00:40:26.002 { 00:40:26.002 "name": "fd9dfd98-5018-4ba5-b27f-6399ca095372", 00:40:26.002 "aliases": [ 00:40:26.002 "lvs/lvol" 00:40:26.002 ], 00:40:26.002 "product_name": "Logical Volume", 00:40:26.002 "block_size": 4096, 00:40:26.002 "num_blocks": 38912, 00:40:26.002 "uuid": "fd9dfd98-5018-4ba5-b27f-6399ca095372", 00:40:26.002 "assigned_rate_limits": { 00:40:26.002 "rw_ios_per_sec": 0, 00:40:26.002 "rw_mbytes_per_sec": 0, 00:40:26.002 
"r_mbytes_per_sec": 0, 00:40:26.002 "w_mbytes_per_sec": 0 00:40:26.002 }, 00:40:26.002 "claimed": false, 00:40:26.002 "zoned": false, 00:40:26.002 "supported_io_types": { 00:40:26.002 "read": true, 00:40:26.002 "write": true, 00:40:26.002 "unmap": true, 00:40:26.002 "flush": false, 00:40:26.002 "reset": true, 00:40:26.002 "nvme_admin": false, 00:40:26.002 "nvme_io": false, 00:40:26.002 "nvme_io_md": false, 00:40:26.002 "write_zeroes": true, 00:40:26.002 "zcopy": false, 00:40:26.002 "get_zone_info": false, 00:40:26.002 "zone_management": false, 00:40:26.002 "zone_append": false, 00:40:26.002 "compare": false, 00:40:26.002 "compare_and_write": false, 00:40:26.002 "abort": false, 00:40:26.002 "seek_hole": true, 00:40:26.002 "seek_data": true, 00:40:26.002 "copy": false, 00:40:26.002 "nvme_iov_md": false 00:40:26.002 }, 00:40:26.002 "driver_specific": { 00:40:26.002 "lvol": { 00:40:26.002 "lvol_store_uuid": "a0566b74-6fe5-4fcc-8077-9436f96236ea", 00:40:26.002 "base_bdev": "aio_bdev", 00:40:26.002 "thin_provision": false, 00:40:26.002 "num_allocated_clusters": 38, 00:40:26.002 "snapshot": false, 00:40:26.002 "clone": false, 00:40:26.002 "esnap_clone": false 00:40:26.002 } 00:40:26.002 } 00:40:26.002 } 00:40:26.002 ] 00:40:26.002 12:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:40:26.002 12:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a0566b74-6fe5-4fcc-8077-9436f96236ea 00:40:26.002 12:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:40:26.569 12:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:40:26.569 12:08:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a0566b74-6fe5-4fcc-8077-9436f96236ea 00:40:26.569 12:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:40:26.827 12:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:40:26.827 12:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete fd9dfd98-5018-4ba5-b27f-6399ca095372 00:40:27.086 12:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a0566b74-6fe5-4fcc-8077-9436f96236ea 00:40:27.344 12:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:40:27.603 12:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:40:27.603 00:40:27.603 real 0m21.907s 00:40:27.603 user 0m38.552s 00:40:27.603 sys 0m4.972s 00:40:27.603 12:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:27.603 12:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:40:27.603 ************************************ 00:40:27.603 END TEST lvs_grow_dirty 00:40:27.603 ************************************ 
00:40:27.603 12:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:40:27.603 12:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:40:27.603 12:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:40:27.603 12:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:40:27.603 12:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:40:27.603 12:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:40:27.603 12:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:40:27.603 12:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:40:27.603 12:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:40:27.603 nvmf_trace.0 00:40:27.603 12:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:40:27.603 12:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:40:27.603 12:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:27.603 12:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:40:27.603 12:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:27.603 12:08:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:40:27.603 12:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:27.603 12:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:27.603 rmmod nvme_tcp 00:40:27.603 rmmod nvme_fabrics 00:40:27.603 rmmod nvme_keyring 00:40:27.603 12:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:27.603 12:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:40:27.603 12:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:40:27.603 12:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 3169893 ']' 00:40:27.603 12:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 3169893 00:40:27.603 12:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 3169893 ']' 00:40:27.603 12:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 3169893 00:40:27.603 12:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:40:27.603 12:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:27.603 12:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3169893 00:40:27.862 12:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:27.862 12:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:27.862 
12:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3169893' 00:40:27.862 killing process with pid 3169893 00:40:27.862 12:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 3169893 00:40:27.862 12:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 3169893 00:40:28.796 12:08:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:28.796 12:08:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:28.796 12:08:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:28.796 12:08:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:40:28.796 12:08:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:40:28.796 12:08:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:28.796 12:08:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:40:28.796 12:08:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:28.796 12:08:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:28.796 12:08:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:28.796 12:08:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:28.796 12:08:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:30.700 
12:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:30.987 00:40:30.987 real 0m48.126s 00:40:30.987 user 1m0.834s 00:40:30.987 sys 0m8.946s 00:40:30.987 12:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:30.987 12:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:40:30.987 ************************************ 00:40:30.987 END TEST nvmf_lvs_grow 00:40:30.987 ************************************ 00:40:30.987 12:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:40:30.987 12:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:30.987 12:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:30.987 12:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:30.987 ************************************ 00:40:30.987 START TEST nvmf_bdev_io_wait 00:40:30.987 ************************************ 00:40:30.987 12:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:40:30.987 * Looking for test storage... 
00:40:30.987 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:30.987 12:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:40:30.987 12:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:40:30.987 12:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:40:30.987 12:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:40:30.987 12:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:30.987 12:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:30.987 12:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:30.987 12:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:40:30.987 12:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:40:30.987 12:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:40:30.987 12:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:40:30.987 12:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:40:30.987 12:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:40:30.987 12:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:40:30.987 12:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:40:30.987 12:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:40:30.987 12:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:40:30.987 12:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:30.987 12:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:30.987 12:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:40:30.987 12:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:40:30.987 12:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:30.987 12:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:40:30.987 12:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:40:30.987 12:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:40:30.987 12:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:40:30.987 12:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:30.987 12:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:40:30.987 12:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:40:30.987 12:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:30.987 12:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:30.987 12:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:40:30.987 12:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:30.987 12:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:40:30.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:30.987 --rc genhtml_branch_coverage=1 00:40:30.987 --rc genhtml_function_coverage=1 00:40:30.987 --rc genhtml_legend=1 00:40:30.987 --rc geninfo_all_blocks=1 00:40:30.987 --rc geninfo_unexecuted_blocks=1 00:40:30.987 00:40:30.987 ' 00:40:30.987 12:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:40:30.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:30.987 --rc genhtml_branch_coverage=1 00:40:30.987 --rc genhtml_function_coverage=1 00:40:30.987 --rc genhtml_legend=1 00:40:30.987 --rc geninfo_all_blocks=1 00:40:30.987 --rc geninfo_unexecuted_blocks=1 00:40:30.987 00:40:30.987 ' 00:40:30.987 12:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:40:30.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:30.987 --rc genhtml_branch_coverage=1 00:40:30.987 --rc genhtml_function_coverage=1 00:40:30.987 --rc genhtml_legend=1 00:40:30.987 --rc geninfo_all_blocks=1 00:40:30.987 --rc geninfo_unexecuted_blocks=1 00:40:30.987 00:40:30.987 ' 00:40:30.987 12:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:40:30.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:30.987 --rc genhtml_branch_coverage=1 00:40:30.987 --rc genhtml_function_coverage=1 
00:40:30.987 --rc genhtml_legend=1 00:40:30.987 --rc geninfo_all_blocks=1 00:40:30.987 --rc geninfo_unexecuted_blocks=1 00:40:30.987 00:40:30.987 ' 00:40:30.987 12:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:30.988 12:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:40:30.988 12:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:30.988 12:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:30.988 12:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:30.988 12:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:30.988 12:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:30.988 12:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:30.988 12:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:30.988 12:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:30.988 12:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:30.988 12:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:30.988 12:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:40:30.988 12:08:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:40:30.988 12:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:30.988 12:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:30.988 12:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:30.988 12:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:30.988 12:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:30.988 12:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:40:30.988 12:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:30.988 12:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:30.988 12:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:30.988 12:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:30.988 12:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:30.988 12:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:30.988 12:08:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:40:30.988 12:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:30.988 12:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:40:30.988 12:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:30.988 12:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:30.988 12:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:30.988 12:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:30.988 12:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:30.988 12:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:30.988 12:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:30.988 12:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:30.988 12:08:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:30.988 12:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:30.988 12:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:30.988 12:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:30.988 12:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:40:30.988 12:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:30.988 12:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:30.988 12:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:30.988 12:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:30.988 12:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:30.988 12:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:30.988 12:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:30.988 12:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:30.988 12:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:30.988 12:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:30.988 12:08:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:40:30.988 12:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:32.911 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:32.911 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:40:32.911 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:32.911 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:32.911 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:32.911 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:32.911 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:32.911 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:40:32.911 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:32.911 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:40:32.911 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:40:32.911 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:40:32.911 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:40:32.911 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:40:32.911 12:08:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:40:32.911 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:32.911 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:32.911 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:32.911 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:32.911 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:33.170 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:33.170 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:33.170 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:33.170 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:33.170 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:33.170 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:33.170 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:33.170 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:33.170 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:33.170 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:33.170 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:33.170 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:33.170 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:33.170 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:33.170 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:40:33.170 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:40:33.170 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:33.170 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:33.170 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:33.170 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:33.170 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:33.170 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:33.170 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:40:33.170 Found 
0000:0a:00.1 (0x8086 - 0x159b) 00:40:33.170 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:33.170 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:33.170 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:33.170 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:33.171 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:33.171 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:33.171 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:33.171 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:33.171 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:33.171 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:33.171 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:33.171 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:33.171 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:33.171 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:33.171 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:33.171 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:40:33.171 Found net devices under 0000:0a:00.0: cvl_0_0 00:40:33.171 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:33.171 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:33.171 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:33.171 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:33.171 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:33.171 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:33.171 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:33.171 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:33.171 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:40:33.171 Found net devices under 0000:0a:00.1: cvl_0_1 00:40:33.171 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:33.171 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:33.171 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:40:33.171 12:08:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:33.171 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:33.171 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:33.171 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:33.171 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:33.171 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:33.171 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:33.171 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:33.171 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:33.171 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:33.171 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:33.171 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:33.171 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:33.171 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:33.171 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 00:40:33.171 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:33.171 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:33.171 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:33.171 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:33.171 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:33.171 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:33.171 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:33.171 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:33.171 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:33.171 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:33.171 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:33.171 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:40:33.171 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:40:33.171 00:40:33.171 --- 10.0.0.2 ping statistics --- 00:40:33.171 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:33.171 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:40:33.171 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:33.171 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:40:33.171 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.066 ms 00:40:33.171 00:40:33.171 --- 10.0.0.1 ping statistics --- 00:40:33.171 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:33.171 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:40:33.171 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:33.171 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:40:33.171 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:33.171 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:33.171 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:33.171 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:33.171 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:33.171 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:33.171 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:33.171 12:08:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:40:33.171 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:33.171 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:33.171 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:33.171 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=3172620 00:40:33.171 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:40:33.171 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 3172620 00:40:33.171 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 3172620 ']' 00:40:33.171 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:33.171 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:33.171 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:33.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:40:33.171 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:33.171 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:33.171 [2024-11-18 12:08:59.052269] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:33.171 [2024-11-18 12:08:59.054834] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:40:33.171 [2024-11-18 12:08:59.054944] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:33.435 [2024-11-18 12:08:59.199780] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:33.694 [2024-11-18 12:08:59.329323] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:33.694 [2024-11-18 12:08:59.329389] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:33.694 [2024-11-18 12:08:59.329412] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:33.694 [2024-11-18 12:08:59.329430] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:33.694 [2024-11-18 12:08:59.329449] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:40:33.694 [2024-11-18 12:08:59.332030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:33.694 [2024-11-18 12:08:59.332097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:40:33.694 [2024-11-18 12:08:59.332137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:33.694 [2024-11-18 12:08:59.332147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:40:33.694 [2024-11-18 12:08:59.332878] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:34.261 12:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:34.261 12:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:40:34.261 12:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:34.261 12:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:34.261 12:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:34.261 12:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:34.261 12:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:40:34.261 12:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:34.261 12:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:34.261 12:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:34.261 12:09:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:40:34.261 12:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:34.261 12:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:34.519 [2024-11-18 12:09:00.283337] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:40:34.519 [2024-11-18 12:09:00.284463] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:40:34.519 [2024-11-18 12:09:00.285683] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:40:34.519 [2024-11-18 12:09:00.286832] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:40:34.519 12:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:34.519 12:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:34.519 12:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:34.519 12:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:34.519 [2024-11-18 12:09:00.293176] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:34.519 12:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:34.519 12:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:40:34.519 12:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:34.519 12:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:34.519 Malloc0 00:40:34.519 12:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:34.519 12:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:40:34.519 12:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:34.519 12:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:34.778 12:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:34.778 12:09:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:34.778 12:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:34.778 12:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:34.778 12:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:34.778 12:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:34.778 12:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:34.778 12:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:34.778 [2024-11-18 12:09:00.425425] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:34.778 12:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:34.779 12:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3172869 00:40:34.779 12:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3172871 00:40:34.779 12:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:40:34.779 12:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:40:34.779 12:09:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:40:34.779 12:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:40:34.779 12:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:34.779 12:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:34.779 { 00:40:34.779 "params": { 00:40:34.779 "name": "Nvme$subsystem", 00:40:34.779 "trtype": "$TEST_TRANSPORT", 00:40:34.779 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:34.779 "adrfam": "ipv4", 00:40:34.779 "trsvcid": "$NVMF_PORT", 00:40:34.779 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:34.779 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:34.779 "hdgst": ${hdgst:-false}, 00:40:34.779 "ddgst": ${ddgst:-false} 00:40:34.779 }, 00:40:34.779 "method": "bdev_nvme_attach_controller" 00:40:34.779 } 00:40:34.779 EOF 00:40:34.779 )") 00:40:34.779 12:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:40:34.779 12:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:40:34.779 12:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3172873 00:40:34.779 12:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:40:34.779 12:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:40:34.779 12:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:34.779 12:09:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:34.779 { 00:40:34.779 "params": { 00:40:34.779 "name": "Nvme$subsystem", 00:40:34.779 "trtype": "$TEST_TRANSPORT", 00:40:34.779 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:34.779 "adrfam": "ipv4", 00:40:34.779 "trsvcid": "$NVMF_PORT", 00:40:34.779 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:34.779 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:34.779 "hdgst": ${hdgst:-false}, 00:40:34.779 "ddgst": ${ddgst:-false} 00:40:34.779 }, 00:40:34.779 "method": "bdev_nvme_attach_controller" 00:40:34.779 } 00:40:34.779 EOF 00:40:34.779 )") 00:40:34.779 12:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:40:34.779 12:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:40:34.779 12:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3172876 00:40:34.779 12:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:40:34.779 12:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:40:34.779 12:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:40:34.779 12:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:40:34.779 12:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:34.779 12:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:34.779 { 00:40:34.779 "params": { 00:40:34.779 "name": 
"Nvme$subsystem", 00:40:34.779 "trtype": "$TEST_TRANSPORT", 00:40:34.779 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:34.779 "adrfam": "ipv4", 00:40:34.779 "trsvcid": "$NVMF_PORT", 00:40:34.779 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:34.779 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:34.779 "hdgst": ${hdgst:-false}, 00:40:34.779 "ddgst": ${ddgst:-false} 00:40:34.779 }, 00:40:34.779 "method": "bdev_nvme_attach_controller" 00:40:34.779 } 00:40:34.779 EOF 00:40:34.779 )") 00:40:34.779 12:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:40:34.779 12:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:40:34.779 12:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:40:34.779 12:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:40:34.779 12:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:40:34.779 12:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:34.779 12:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:34.779 { 00:40:34.779 "params": { 00:40:34.779 "name": "Nvme$subsystem", 00:40:34.779 "trtype": "$TEST_TRANSPORT", 00:40:34.779 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:34.779 "adrfam": "ipv4", 00:40:34.779 "trsvcid": "$NVMF_PORT", 00:40:34.779 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:34.779 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:34.779 "hdgst": ${hdgst:-false}, 00:40:34.779 "ddgst": ${ddgst:-false} 00:40:34.779 }, 00:40:34.779 "method": 
"bdev_nvme_attach_controller" 00:40:34.779 } 00:40:34.779 EOF 00:40:34.779 )") 00:40:34.779 12:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:40:34.779 12:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3172869 00:40:34.779 12:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:40:34.779 12:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:40:34.779 12:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:40:34.779 12:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:40:34.779 12:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:40:34.779 12:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:34.779 "params": { 00:40:34.779 "name": "Nvme1", 00:40:34.779 "trtype": "tcp", 00:40:34.779 "traddr": "10.0.0.2", 00:40:34.779 "adrfam": "ipv4", 00:40:34.779 "trsvcid": "4420", 00:40:34.779 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:34.779 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:34.779 "hdgst": false, 00:40:34.779 "ddgst": false 00:40:34.779 }, 00:40:34.779 "method": "bdev_nvme_attach_controller" 00:40:34.779 }' 00:40:34.779 12:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:40:34.779 12:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
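The xtrace above shows nvmf/common.sh's gen_nvmf_target_json helper building one JSON fragment per subsystem (captured from a heredoc into a bash array), then comma-joining the fragments with IFS=',' and normalizing the result with `jq .`. A minimal self-contained sketch of that pattern follows; the variable names mirror the log, but the exact `subsystems` wrapper object is an assumption, and `jq` is left out so the sketch runs without it:

```shell
#!/usr/bin/env bash
# Sketch of the config-generation pattern visible in the xtrace above.
# The wrapper object around the fragments is an assumption; the real
# helper additionally pipes the result through `jq .` to normalize it.
set -euo pipefail

TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

gen_nvmf_target_json() {
  local subsystem config=()
  for subsystem in "${@:-1}"; do
    # One JSON fragment per subsystem, captured from a heredoc.
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
  done
  # Comma-join the fragments into one document.
  local IFS=,
  printf '{ "subsystems": [ { "subsystem": "bdev", "config": [ %s ] } ] }\n' "${config[*]}"
}

gen_nvmf_target_json 1
```

The `--json /dev/fd/63` seen on the bdevperf command lines suggests the generated document is fed to bdevperf via process substitution, i.e. something like `bdevperf --json <(gen_nvmf_target_json) ...`.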
00:40:34.779 12:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:34.779 "params": { 00:40:34.779 "name": "Nvme1", 00:40:34.779 "trtype": "tcp", 00:40:34.779 "traddr": "10.0.0.2", 00:40:34.779 "adrfam": "ipv4", 00:40:34.779 "trsvcid": "4420", 00:40:34.779 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:34.779 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:34.779 "hdgst": false, 00:40:34.779 "ddgst": false 00:40:34.779 }, 00:40:34.779 "method": "bdev_nvme_attach_controller" 00:40:34.779 }' 00:40:34.779 12:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:40:34.779 12:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:34.779 "params": { 00:40:34.779 "name": "Nvme1", 00:40:34.779 "trtype": "tcp", 00:40:34.779 "traddr": "10.0.0.2", 00:40:34.779 "adrfam": "ipv4", 00:40:34.779 "trsvcid": "4420", 00:40:34.779 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:34.779 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:34.779 "hdgst": false, 00:40:34.779 "ddgst": false 00:40:34.779 }, 00:40:34.779 "method": "bdev_nvme_attach_controller" 00:40:34.779 }' 00:40:34.780 12:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:40:34.780 12:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:34.780 "params": { 00:40:34.780 "name": "Nvme1", 00:40:34.780 "trtype": "tcp", 00:40:34.780 "traddr": "10.0.0.2", 00:40:34.780 "adrfam": "ipv4", 00:40:34.780 "trsvcid": "4420", 00:40:34.780 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:34.780 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:34.780 "hdgst": false, 00:40:34.780 "ddgst": false 00:40:34.780 }, 00:40:34.780 "method": "bdev_nvme_attach_controller" 00:40:34.780 }' 00:40:34.780 [2024-11-18 12:09:00.513706] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 
initialization... 00:40:34.780 [2024-11-18 12:09:00.513741] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:40:34.780 [2024-11-18 12:09:00.513882] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:40:34.780 [2024-11-18 12:09:00.513883] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:40:34.780 [2024-11-18 12:09:00.514845] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:40:34.780 [2024-11-18 12:09:00.514845] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:40:34.780 [2024-11-18 12:09:00.514990] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:40:34.780 [2024-11-18 12:09:00.514991] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:40:35.038 [2024-11-18 12:09:00.745418] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:35.038 [2024-11-18 12:09:00.816972] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:35.038 [2024-11-18 12:09:00.862852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:40:35.296 [2024-11-18 12:09:00.925607] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:35.296 [2024-11-18 12:09:00.936522]
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:40:35.296 [2024-11-18 12:09:01.030306] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:35.296 [2024-11-18 12:09:01.046226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:40:35.296 [2024-11-18 12:09:01.151581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:40:35.554 Running I/O for 1 seconds... 00:40:35.554 Running I/O for 1 seconds... 00:40:35.554 Running I/O for 1 seconds... 00:40:35.812 Running I/O for 1 seconds... 00:40:36.744 138360.00 IOPS, 540.47 MiB/s 00:40:36.744 Latency(us) 00:40:36.744 [2024-11-18T11:09:02.629Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:36.744 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:40:36.744 Nvme1n1 : 1.00 138064.67 539.32 0.00 0.00 922.38 406.57 2123.85 00:40:36.744 [2024-11-18T11:09:02.629Z] =================================================================================================================== 00:40:36.744 [2024-11-18T11:09:02.629Z] Total : 138064.67 539.32 0.00 0.00 922.38 406.57 2123.85 00:40:36.744 8857.00 IOPS, 34.60 MiB/s 00:40:36.744 Latency(us) 00:40:36.744 [2024-11-18T11:09:02.629Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:36.744 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:40:36.744 Nvme1n1 : 1.01 8924.78 34.86 0.00 0.00 14279.45 3094.76 19806.44 00:40:36.744 [2024-11-18T11:09:02.629Z] =================================================================================================================== 00:40:36.744 [2024-11-18T11:09:02.629Z] Total : 8924.78 34.86 0.00 0.00 14279.45 3094.76 19806.44 00:40:36.744 7013.00 IOPS, 27.39 MiB/s 00:40:36.744 Latency(us) 00:40:36.744 [2024-11-18T11:09:02.630Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:36.745 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 
00:40:36.745 Nvme1n1 : 1.01 7079.37 27.65 0.00 0.00 17983.55 7815.77 26602.76 00:40:36.745 [2024-11-18T11:09:02.630Z] =================================================================================================================== 00:40:36.745 [2024-11-18T11:09:02.630Z] Total : 7079.37 27.65 0.00 0.00 17983.55 7815.77 26602.76 00:40:37.002 7225.00 IOPS, 28.22 MiB/s 00:40:37.002 Latency(us) 00:40:37.002 [2024-11-18T11:09:02.887Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:37.002 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:40:37.002 Nvme1n1 : 1.01 7275.05 28.42 0.00 0.00 17494.63 6505.05 24466.77 00:40:37.002 [2024-11-18T11:09:02.887Z] =================================================================================================================== 00:40:37.002 [2024-11-18T11:09:02.887Z] Total : 7275.05 28.42 0.00 0.00 17494.63 6505.05 24466.77 00:40:37.260 12:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3172871 00:40:37.518 12:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3172873 00:40:37.518 12:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3172876 00:40:37.518 12:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:37.518 12:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:37.518 12:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:37.518 12:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:37.518 12:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - 
SIGINT SIGTERM EXIT 00:40:37.518 12:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:40:37.518 12:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:37.518 12:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:40:37.518 12:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:37.518 12:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:40:37.518 12:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:37.518 12:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:37.518 rmmod nvme_tcp 00:40:37.518 rmmod nvme_fabrics 00:40:37.518 rmmod nvme_keyring 00:40:37.518 12:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:37.518 12:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:40:37.518 12:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:40:37.518 12:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 3172620 ']' 00:40:37.518 12:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 3172620 00:40:37.518 12:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 3172620 ']' 00:40:37.518 12:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 3172620 00:40:37.518 12:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 
00:40:37.518 12:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:37.518 12:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3172620 00:40:37.776 12:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:37.776 12:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:37.776 12:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3172620' 00:40:37.776 killing process with pid 3172620 00:40:37.776 12:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 3172620 00:40:37.776 12:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 3172620 00:40:38.710 12:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:38.710 12:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:38.710 12:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:38.710 12:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:40:38.710 12:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:40:38.710 12:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:40:38.710 12:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:38.710 12:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:38.710 12:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:38.710 12:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:38.710 12:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:38.710 12:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:41.251 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:41.251 00:40:41.251 real 0m9.885s 00:40:41.251 user 0m21.549s 00:40:41.251 sys 0m5.152s 00:40:41.251 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:41.251 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:41.251 ************************************ 00:40:41.251 END TEST nvmf_bdev_io_wait 00:40:41.251 ************************************ 00:40:41.251 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:40:41.251 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:41.251 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:41.251 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:41.251 ************************************ 00:40:41.251 START TEST nvmf_queue_depth 00:40:41.251 ************************************ 
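The nvmf_bdev_io_wait test that just ended follows a common orchestration pattern: four bdevperf workers (write/read/flush/unmap on core masks 0x10/0x20/0x40/0x80) are launched in the background, their PIDs recorded (e.g. `UNMAP_PID=3172876`), and the driver `wait`s on each PID so a non-zero exit from any worker fails the test. A hedged sketch, with `sleep` standing in for the real bdevperf invocations:

```shell
#!/usr/bin/env bash
# Sketch of the background-jobs-plus-wait pattern from the log above.
# `sleep` stands in for the four bdevperf workload runs; under set -e,
# `wait` returning a worker's non-zero status aborts the script, which
# is how a failed workload fails the test.
set -euo pipefail

sleep 0.1 & WRITE_PID=$!
sleep 0.1 & READ_PID=$!
sleep 0.1 & FLUSH_PID=$!
sleep 0.1 & UNMAP_PID=$!

for pid in "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"; do
  wait "$pid"   # returns that worker's exit status
done
echo "all workers completed"
```

Recording `$!` immediately after each `&` matters: `$!` always holds the PID of the most recently started background job, so deferring the assignment past the next launch would lose the earlier PID.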
00:40:41.251 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:40:41.251 * Looking for test storage... 00:40:41.251 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:41.251 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:40:41.251 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:40:41.251 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:40:41.251 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:40:41.251 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:41.251 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:41.251 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:41.251 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:40:41.251 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:40:41.251 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:40:41.251 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:40:41.251 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:40:41.251 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
scripts/common.sh@340 -- # ver1_l=2 00:40:41.251 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:40:41.251 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:41.251 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:40:41.251 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:40:41.251 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:41.251 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:41.251 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:40:41.251 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:40:41.251 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:41.251 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:40:41.251 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:40:41.251 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:40:41.251 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:40:41.251 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:41.251 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:40:41.251 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
scripts/common.sh@366 -- # ver2[v]=2 00:40:41.251 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:41.251 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:41.251 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:40:41.251 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:41.251 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:40:41.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:41.251 --rc genhtml_branch_coverage=1 00:40:41.251 --rc genhtml_function_coverage=1 00:40:41.251 --rc genhtml_legend=1 00:40:41.251 --rc geninfo_all_blocks=1 00:40:41.251 --rc geninfo_unexecuted_blocks=1 00:40:41.251 00:40:41.251 ' 00:40:41.251 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:40:41.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:41.251 --rc genhtml_branch_coverage=1 00:40:41.251 --rc genhtml_function_coverage=1 00:40:41.251 --rc genhtml_legend=1 00:40:41.251 --rc geninfo_all_blocks=1 00:40:41.251 --rc geninfo_unexecuted_blocks=1 00:40:41.251 00:40:41.251 ' 00:40:41.251 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:40:41.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:41.251 --rc genhtml_branch_coverage=1 00:40:41.251 --rc genhtml_function_coverage=1 00:40:41.251 --rc genhtml_legend=1 00:40:41.251 --rc geninfo_all_blocks=1 00:40:41.251 --rc geninfo_unexecuted_blocks=1 00:40:41.251 00:40:41.251 ' 00:40:41.251 12:09:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:40:41.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:41.251 --rc genhtml_branch_coverage=1 00:40:41.251 --rc genhtml_function_coverage=1 00:40:41.251 --rc genhtml_legend=1 00:40:41.251 --rc geninfo_all_blocks=1 00:40:41.251 --rc geninfo_unexecuted_blocks=1 00:40:41.251 00:40:41.251 ' 00:40:41.251 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:41.251 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:40:41.251 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:41.251 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:41.252 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:41.252 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:41.252 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:41.252 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:41.252 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:41.252 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:41.252 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:41.252 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme 
gen-hostnqn 00:40:41.252 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:40:41.252 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:40:41.252 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:41.252 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:41.252 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:41.252 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:41.252 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:41.252 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:40:41.252 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:41.252 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:41.252 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:41.252 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:41.252 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:41.252 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:41.252 12:09:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:40:41.252 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:41.252 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:40:41.252 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:41.252 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:41.252 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:41.252 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:41.252 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:41.252 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:41.252 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:41.252 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:41.252 12:09:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:41.252 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:41.252 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:40:41.252 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:40:41.252 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:40:41.252 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:40:41.252 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:41.252 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:41.252 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:41.252 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:41.252 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:41.252 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:41.252 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:41.252 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:41.252 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:41.252 12:09:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:41.252 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:40:41.252 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:43.154 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:43.154 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:40:43.154 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:43.154 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:43.154 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:43.154 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:43.154 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:43.154 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:40:43.154 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:43.154 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:40:43.154 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:40:43.154 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:40:43.154 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:40:43.154 
12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:40:43.154 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:40:43.154 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:43.154 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:43.154 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:43.154 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:43.154 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:43.154 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:43.154 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:43.154 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:43.154 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:43.154 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:43.154 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:43.154 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:43.154 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:43.154 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:43.154 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:43.154 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:43.154 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:43.154 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:43.154 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:43.154 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:40:43.154 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:40:43.154 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:43.154 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:43.154 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:43.154 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:43.154 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:43.154 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:43.154 12:09:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:40:43.154 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:40:43.154 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:43.154 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:43.154 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:43.154 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:43.154 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:43.154 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:43.154 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:43.154 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:43.154 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:43.155 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:43.155 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:43.155 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:43.155 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:43.155 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 
)) 00:40:43.155 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:43.155 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:40:43.155 Found net devices under 0000:0a:00.0: cvl_0_0 00:40:43.155 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:43.155 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:43.155 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:43.155 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:43.155 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:43.155 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:43.155 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:43.155 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:43.155 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:40:43.155 Found net devices under 0000:0a:00.1: cvl_0_1 00:40:43.155 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:43.155 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:43.155 12:09:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:40:43.155 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:43.155 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:43.155 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:43.155 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:43.155 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:43.155 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:43.155 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:43.155 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:43.155 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:43.155 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:43.155 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:43.155 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:43.155 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:43.155 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:40:43.155 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:43.155 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:43.155 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:43.155 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:43.155 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:43.155 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:43.155 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:43.155 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:43.155 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:43.155 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:43.155 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:43.155 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:43.155 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:40:43.155 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:40:43.155 00:40:43.155 --- 10.0.0.2 ping statistics --- 00:40:43.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:43.155 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:40:43.155 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:43.155 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:40:43.155 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:40:43.155 00:40:43.155 --- 10.0.0.1 ping statistics --- 00:40:43.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:43.155 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:40:43.155 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:43.155 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:40:43.155 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:43.155 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:43.155 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:43.155 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:43.155 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:43.155 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:43.155 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:43.155 12:09:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:40:43.155 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:43.155 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:43.155 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:43.155 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=3175767 00:40:43.155 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:40:43.155 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 3175767 00:40:43.155 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3175767 ']' 00:40:43.155 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:43.155 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:43.155 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:43.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:40:43.155 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:43.155 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:43.155 [2024-11-18 12:09:08.873651] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:43.155 [2024-11-18 12:09:08.876354] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:40:43.155 [2024-11-18 12:09:08.876465] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:43.155 [2024-11-18 12:09:09.025275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:43.414 [2024-11-18 12:09:09.159881] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:43.414 [2024-11-18 12:09:09.159950] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:43.414 [2024-11-18 12:09:09.159979] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:43.414 [2024-11-18 12:09:09.160000] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:43.414 [2024-11-18 12:09:09.160022] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:43.414 [2024-11-18 12:09:09.161629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:43.672 [2024-11-18 12:09:09.519137] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:43.672 [2024-11-18 12:09:09.519602] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:40:44.239 12:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:44.239 12:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:40:44.239 12:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:44.239 12:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:44.239 12:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:44.239 12:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:44.239 12:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:44.239 12:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:44.239 12:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:44.239 [2024-11-18 12:09:09.874700] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:44.239 12:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:44.239 12:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:40:44.239 12:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:44.239 12:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:44.239 Malloc0 00:40:44.239 12:09:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:44.239 12:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:40:44.239 12:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:44.239 12:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:44.239 12:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:44.239 12:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:44.239 12:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:44.239 12:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:44.239 12:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:44.239 12:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:44.239 12:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:44.239 12:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:44.239 [2024-11-18 12:09:09.990863] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:44.239 12:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:44.239 
12:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3175961 00:40:44.239 12:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:40:44.239 12:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:40:44.239 12:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3175961 /var/tmp/bdevperf.sock 00:40:44.239 12:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3175961 ']' 00:40:44.239 12:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:40:44.239 12:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:44.239 12:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:40:44.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:40:44.239 12:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:44.239 12:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:44.239 [2024-11-18 12:09:10.078915] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:40:44.239 [2024-11-18 12:09:10.079058] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3175961 ] 00:40:44.497 [2024-11-18 12:09:10.226309] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:44.497 [2024-11-18 12:09:10.365396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:45.431 12:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:45.431 12:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:40:45.431 12:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:40:45.431 12:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:45.431 12:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:45.431 NVMe0n1 00:40:45.431 12:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:45.431 12:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:40:45.431 Running I/O for 10 seconds... 
00:40:47.744 5492.00 IOPS, 21.45 MiB/s [2024-11-18T11:09:14.562Z] 6014.50 IOPS, 23.49 MiB/s [2024-11-18T11:09:15.496Z] 6144.00 IOPS, 24.00 MiB/s [2024-11-18T11:09:16.430Z] 6134.75 IOPS, 23.96 MiB/s [2024-11-18T11:09:17.363Z] 6106.40 IOPS, 23.85 MiB/s [2024-11-18T11:09:18.297Z] 6105.83 IOPS, 23.85 MiB/s [2024-11-18T11:09:19.671Z] 6094.57 IOPS, 23.81 MiB/s [2024-11-18T11:09:20.603Z] 6082.00 IOPS, 23.76 MiB/s [2024-11-18T11:09:21.536Z] 6068.11 IOPS, 23.70 MiB/s [2024-11-18T11:09:21.536Z] 6063.50 IOPS, 23.69 MiB/s 00:40:55.651 Latency(us) 00:40:55.651 [2024-11-18T11:09:21.536Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:55.651 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:40:55.651 Verification LBA range: start 0x0 length 0x4000 00:40:55.651 NVMe0n1 : 10.09 6099.49 23.83 0.00 0.00 166936.83 10485.76 100197.26 00:40:55.651 [2024-11-18T11:09:21.536Z] =================================================================================================================== 00:40:55.651 [2024-11-18T11:09:21.536Z] Total : 6099.49 23.83 0.00 0.00 166936.83 10485.76 100197.26 00:40:55.651 { 00:40:55.651 "results": [ 00:40:55.651 { 00:40:55.651 "job": "NVMe0n1", 00:40:55.651 "core_mask": "0x1", 00:40:55.651 "workload": "verify", 00:40:55.651 "status": "finished", 00:40:55.651 "verify_range": { 00:40:55.651 "start": 0, 00:40:55.651 "length": 16384 00:40:55.651 }, 00:40:55.651 "queue_depth": 1024, 00:40:55.651 "io_size": 4096, 00:40:55.651 "runtime": 10.089369, 00:40:55.651 "iops": 6099.48947253292, 00:40:55.651 "mibps": 23.82613075208172, 00:40:55.651 "io_failed": 0, 00:40:55.652 "io_timeout": 0, 00:40:55.652 "avg_latency_us": 166936.83296447957, 00:40:55.652 "min_latency_us": 10485.76, 00:40:55.652 "max_latency_us": 100197.26222222223 00:40:55.652 } 00:40:55.652 ], 00:40:55.652 "core_count": 1 00:40:55.652 } 00:40:55.652 12:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # 
killprocess 3175961 00:40:55.652 12:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3175961 ']' 00:40:55.652 12:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3175961 00:40:55.652 12:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:40:55.652 12:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:55.652 12:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3175961 00:40:55.652 12:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:55.652 12:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:55.652 12:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3175961' 00:40:55.652 killing process with pid 3175961 00:40:55.652 12:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3175961 00:40:55.652 Received shutdown signal, test time was about 10.000000 seconds 00:40:55.652 00:40:55.652 Latency(us) 00:40:55.652 [2024-11-18T11:09:21.537Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:55.652 [2024-11-18T11:09:21.537Z] =================================================================================================================== 00:40:55.652 [2024-11-18T11:09:21.537Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:40:55.652 12:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3175961 00:40:56.585 12:09:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:40:56.585 12:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:40:56.585 12:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:56.585 12:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:40:56.585 12:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:56.585 12:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:40:56.585 12:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:56.585 12:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:56.585 rmmod nvme_tcp 00:40:56.585 rmmod nvme_fabrics 00:40:56.585 rmmod nvme_keyring 00:40:56.585 12:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:56.585 12:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:40:56.585 12:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:40:56.585 12:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 3175767 ']' 00:40:56.585 12:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 3175767 00:40:56.585 12:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3175767 ']' 00:40:56.585 12:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3175767 00:40:56.585 12:09:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:40:56.585 12:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:56.585 12:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3175767 00:40:56.585 12:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:40:56.585 12:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:40:56.585 12:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3175767' 00:40:56.585 killing process with pid 3175767 00:40:56.585 12:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3175767 00:40:56.585 12:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3175767 00:40:57.959 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:57.959 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:57.959 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:57.959 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:40:57.959 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:40:57.959 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:57.960 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 
00:40:57.960 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:57.960 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:57.960 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:57.960 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:57.960 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:59.923 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:59.923 00:40:59.923 real 0m19.205s 00:40:59.923 user 0m26.677s 00:40:59.924 sys 0m3.634s 00:40:59.924 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:59.924 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:59.924 ************************************ 00:40:59.924 END TEST nvmf_queue_depth 00:40:59.924 ************************************ 00:40:59.924 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:40:59.924 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:59.924 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:59.924 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:41:00.183 ************************************ 00:41:00.183 START 
TEST nvmf_target_multipath 00:41:00.183 ************************************ 00:41:00.183 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:41:00.183 * Looking for test storage... 00:41:00.183 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:00.183 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:41:00.183 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:41:00.183 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:41:00.183 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:41:00.183 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:00.183 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:00.183 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:00.183 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:41:00.183 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:41:00.183 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:41:00.183 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:41:00.183 12:09:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:41:00.183 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:41:00.183 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:41:00.183 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:00.183 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:41:00.183 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:41:00.183 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:00.183 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:00.183 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:41:00.183 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:41:00.183 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:00.183 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:41:00.183 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:41:00.183 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:41:00.183 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:41:00.183 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:00.183 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:41:00.183 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:41:00.183 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:00.183 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:00.183 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:41:00.183 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:00.183 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:41:00.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:00.183 --rc genhtml_branch_coverage=1 00:41:00.183 --rc genhtml_function_coverage=1 00:41:00.183 --rc genhtml_legend=1 00:41:00.183 --rc geninfo_all_blocks=1 00:41:00.183 --rc geninfo_unexecuted_blocks=1 00:41:00.183 00:41:00.183 ' 00:41:00.183 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:41:00.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:00.183 --rc genhtml_branch_coverage=1 00:41:00.183 --rc genhtml_function_coverage=1 00:41:00.183 --rc genhtml_legend=1 00:41:00.183 --rc geninfo_all_blocks=1 00:41:00.183 --rc geninfo_unexecuted_blocks=1 00:41:00.183 00:41:00.183 ' 00:41:00.183 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:41:00.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:00.183 --rc genhtml_branch_coverage=1 00:41:00.183 --rc genhtml_function_coverage=1 00:41:00.183 --rc genhtml_legend=1 00:41:00.183 --rc geninfo_all_blocks=1 00:41:00.183 --rc geninfo_unexecuted_blocks=1 00:41:00.183 00:41:00.183 ' 00:41:00.183 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:41:00.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:00.183 --rc genhtml_branch_coverage=1 00:41:00.183 --rc genhtml_function_coverage=1 00:41:00.183 --rc genhtml_legend=1 00:41:00.183 --rc geninfo_all_blocks=1 00:41:00.183 --rc geninfo_unexecuted_blocks=1 00:41:00.183 00:41:00.183 ' 00:41:00.183 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:00.183 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@7 -- # uname -s 00:41:00.183 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:00.183 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:00.183 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:00.183 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:00.183 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:00.183 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:00.183 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:00.183 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:00.183 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:00.183 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:00.183 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:41:00.183 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:41:00.183 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:00.183 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:00.183 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:00.183 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:00.183 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:00.183 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:41:00.183 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:00.183 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:00.183 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:00.183 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:00.183 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:00.183 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:00.183 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:41:00.184 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:00.184 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:41:00.184 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:00.184 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:00.184 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:00.184 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:00.184 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:00.184 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:41:00.184 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:41:00.184 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:00.184 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:00.184 12:09:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:00.184 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:41:00.184 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:41:00.184 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:41:00.184 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:00.184 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:41:00.184 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:00.184 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:00.184 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:00.184 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:00.184 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:00.184 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:00.184 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:00.184 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:00.184 12:09:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:41:00.184 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:41:00.184 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:41:00.184 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:41:02.085 12:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:02.085 12:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:41:02.085 12:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:02.085 12:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:02.085 12:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:02.085 12:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:02.085 12:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:02.085 12:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:41:02.085 12:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:02.085 12:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:41:02.085 12:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:41:02.085 12:09:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:41:02.085 12:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:41:02.085 12:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:41:02.085 12:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:41:02.085 12:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:02.085 12:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:02.085 12:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:02.085 12:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:02.085 12:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:02.085 12:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:02.085 12:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:02.085 12:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:02.085 12:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:02.085 12:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:02.085 12:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:02.085 12:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:02.085 12:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:02.085 12:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:02.085 12:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:02.085 12:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:02.086 12:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:02.086 12:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:02.086 12:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:02.086 12:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:41:02.086 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:41:02.086 12:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:02.086 12:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:02.086 12:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:02.086 12:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:02.086 12:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:02.086 12:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:02.086 12:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:41:02.086 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:41:02.086 12:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:02.086 12:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:02.086 12:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:02.086 12:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:02.086 12:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:02.086 12:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:02.086 12:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:02.086 12:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:02.086 12:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:02.086 12:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:02.086 12:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:41:02.086 12:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:02.086 12:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:02.086 12:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:02.086 12:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:02.086 12:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:41:02.086 Found net devices under 0000:0a:00.0: cvl_0_0 00:41:02.086 12:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:02.086 12:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:02.086 12:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:02.086 12:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:02.086 12:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:02.086 12:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:02.086 12:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:02.086 12:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:02.086 12:09:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:41:02.086 Found net devices under 0000:0a:00.1: cvl_0_1 00:41:02.086 12:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:02.086 12:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:41:02.086 12:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:41:02.086 12:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:41:02.086 12:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:41:02.086 12:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:41:02.086 12:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:02.086 12:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:02.086 12:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:02.086 12:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:02.086 12:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:02.086 12:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:02.086 12:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:02.086 12:09:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:02.086 12:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:02.086 12:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:02.086 12:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:02.086 12:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:02.086 12:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:02.086 12:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:02.086 12:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:02.345 12:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:02.345 12:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:02.345 12:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:02.345 12:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:02.345 12:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:02.345 12:09:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:02.346 12:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:02.346 12:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:02.346 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:02.346 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.336 ms 00:41:02.346 00:41:02.346 --- 10.0.0.2 ping statistics --- 00:41:02.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:02.346 rtt min/avg/max/mdev = 0.336/0.336/0.336/0.000 ms 00:41:02.346 12:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:02.346 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:02.346 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:41:02.346 00:41:02.346 --- 10.0.0.1 ping statistics --- 00:41:02.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:02.346 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:41:02.346 12:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:02.346 12:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:41:02.346 12:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:41:02.346 12:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:02.346 12:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:02.346 12:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:02.346 12:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:02.346 12:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:02.346 12:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:02.346 12:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:41:02.346 12:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:41:02.346 only one NIC for nvmf test 00:41:02.346 12:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:41:02.346 12:09:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:02.346 12:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:41:02.346 12:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:02.346 12:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:41:02.346 12:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:02.346 12:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:02.346 rmmod nvme_tcp 00:41:02.346 rmmod nvme_fabrics 00:41:02.346 rmmod nvme_keyring 00:41:02.346 12:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:02.346 12:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:41:02.346 12:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:41:02.346 12:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:41:02.346 12:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:41:02.346 12:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:02.346 12:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:02.346 12:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:41:02.346 12:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:41:02.346 12:09:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:02.346 12:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:41:02.346 12:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:02.346 12:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:02.346 12:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:02.346 12:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:02.346 12:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:04.879 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:04.879 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:41:04.879 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:41:04.879 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:04.879 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:41:04.879 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:04.879 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:41:04.879 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 
00:41:04.879 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:04.879 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:04.879 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:41:04.879 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:41:04.879 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:41:04.879 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:41:04.879 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:04.879 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:04.879 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:41:04.879 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:41:04.879 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:04.879 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:41:04.879 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:04.879 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:04.879 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:04.879 
12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:04.879 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:04.879 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:04.879 00:41:04.879 real 0m4.360s 00:41:04.879 user 0m0.875s 00:41:04.879 sys 0m1.477s 00:41:04.879 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:04.879 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:41:04.879 ************************************ 00:41:04.879 END TEST nvmf_target_multipath 00:41:04.879 ************************************ 00:41:04.879 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:41:04.879 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:41:04.879 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:04.879 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:41:04.879 ************************************ 00:41:04.879 START TEST nvmf_zcopy 00:41:04.879 ************************************ 00:41:04.879 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:41:04.879 * Looking for test storage... 
00:41:04.879 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:04.879 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:41:04.879 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:41:04.879 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:41:04.879 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:41:04.879 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:04.879 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:04.879 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:04.879 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:41:04.879 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:41:04.879 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:41:04.879 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:41:04.879 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:41:04.879 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:41:04.879 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:41:04.879 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:04.879 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
scripts/common.sh@344 -- # case "$op" in 00:41:04.879 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:41:04.879 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:04.879 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:41:04.879 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:41:04.879 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:41:04.879 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:04.879 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:41:04.879 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:41:04.879 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:41:04.879 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:41:04.879 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:04.879 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:41:04.879 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:41:04.879 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:04.879 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:04.879 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:41:04.879 12:09:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:04.879 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:41:04.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:04.879 --rc genhtml_branch_coverage=1 00:41:04.879 --rc genhtml_function_coverage=1 00:41:04.879 --rc genhtml_legend=1 00:41:04.879 --rc geninfo_all_blocks=1 00:41:04.879 --rc geninfo_unexecuted_blocks=1 00:41:04.879 00:41:04.879 ' 00:41:04.879 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:41:04.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:04.879 --rc genhtml_branch_coverage=1 00:41:04.879 --rc genhtml_function_coverage=1 00:41:04.879 --rc genhtml_legend=1 00:41:04.879 --rc geninfo_all_blocks=1 00:41:04.879 --rc geninfo_unexecuted_blocks=1 00:41:04.879 00:41:04.879 ' 00:41:04.879 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:41:04.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:04.879 --rc genhtml_branch_coverage=1 00:41:04.879 --rc genhtml_function_coverage=1 00:41:04.879 --rc genhtml_legend=1 00:41:04.879 --rc geninfo_all_blocks=1 00:41:04.879 --rc geninfo_unexecuted_blocks=1 00:41:04.879 00:41:04.879 ' 00:41:04.879 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:41:04.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:04.879 --rc genhtml_branch_coverage=1 00:41:04.879 --rc genhtml_function_coverage=1 00:41:04.879 --rc genhtml_legend=1 00:41:04.879 --rc geninfo_all_blocks=1 00:41:04.879 --rc geninfo_unexecuted_blocks=1 00:41:04.879 00:41:04.879 ' 00:41:04.879 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:04.879 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:41:04.879 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:04.879 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:04.879 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:04.880 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:04.880 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:04.880 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:04.880 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:04.880 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:04.880 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:04.880 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:04.880 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:41:04.880 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:41:04.880 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:04.880 12:09:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:04.880 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:04.880 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:04.880 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:04.880 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:41:04.880 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:04.880 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:04.880 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:04.880 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:04.880 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:04.880 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:04.880 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:41:04.880 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:04.880 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:41:04.880 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:04.880 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:04.880 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:04.880 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:04.880 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:04.880 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:41:04.880 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:41:04.880 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:04.880 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:04.880 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:04.880 12:09:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:41:04.880 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:04.880 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:04.880 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:04.880 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:04.880 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:04.880 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:04.880 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:04.880 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:04.880 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:41:04.880 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:41:04.880 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:41:04.880 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:06.782 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:06.782 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:41:06.782 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:06.782 
12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:06.782 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:06.782 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:06.782 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:06.782 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:41:06.782 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:06.782 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:41:06.782 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:41:06.782 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:41:06.782 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:41:06.782 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:41:06.782 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:41:06.782 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:06.782 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:06.782 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:06.782 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:06.782 12:09:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:06.782 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:06.782 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:06.782 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:06.782 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:06.782 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:06.782 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:06.782 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:06.782 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:06.782 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:06.782 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:06.782 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:06.783 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:06.783 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:06.783 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:41:06.783 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:41:06.783 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:41:06.783 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:06.783 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:06.783 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:06.783 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:06.783 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:06.783 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:06.783 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:41:06.783 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:41:06.783 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:06.783 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:06.783 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:06.783 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:06.783 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:06.783 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:06.783 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 
00:41:06.783 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:06.783 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:06.783 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:06.783 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:06.783 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:06.783 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:06.783 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:06.783 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:06.783 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:41:06.783 Found net devices under 0000:0a:00.0: cvl_0_0 00:41:06.783 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:06.783 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:06.783 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:06.783 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:06.783 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:06.783 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:41:06.783 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:06.783 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:06.783 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:41:06.783 Found net devices under 0000:0a:00.1: cvl_0_1 00:41:06.783 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:06.783 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:41:06.783 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:41:06.783 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:41:06.783 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:41:06.783 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:41:06.783 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:06.783 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:06.783 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:06.783 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:06.783 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:06.783 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:41:06.783 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:06.783 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:06.783 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:06.783 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:06.783 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:06.783 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:06.783 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:06.783 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:06.783 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:06.783 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:06.783 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:06.783 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:06.783 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:06.783 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:06.783 12:09:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:06.783 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:06.783 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:06.783 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:06.783 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:41:06.783 00:41:06.783 --- 10.0.0.2 ping statistics --- 00:41:06.783 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:06.783 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:41:06.783 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:06.783 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:06.783 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:41:06.783 00:41:06.783 --- 10.0.0.1 ping statistics --- 00:41:06.783 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:06.783 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:41:06.783 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:06.783 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:41:06.783 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:41:06.783 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:06.783 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:06.783 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:06.783 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:06.783 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:06.783 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:06.783 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:41:06.783 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:41:06.783 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:06.783 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:06.783 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # 
nvmfpid=3181398 00:41:06.783 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:41:06.783 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 3181398 00:41:06.783 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 3181398 ']' 00:41:06.783 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:06.783 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:06.783 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:06.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:06.783 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:06.783 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:06.783 [2024-11-18 12:09:32.569857] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:41:06.784 [2024-11-18 12:09:32.572368] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:41:06.784 [2024-11-18 12:09:32.572485] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:07.042 [2024-11-18 12:09:32.718729] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:07.042 [2024-11-18 12:09:32.836435] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:07.042 [2024-11-18 12:09:32.836533] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:07.042 [2024-11-18 12:09:32.836560] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:07.042 [2024-11-18 12:09:32.836579] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:07.042 [2024-11-18 12:09:32.836599] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:07.043 [2024-11-18 12:09:32.837999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:07.302 [2024-11-18 12:09:33.161952] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:41:07.302 [2024-11-18 12:09:33.162343] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:41:07.869 12:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:07.869 12:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:41:07.869 12:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:41:07.869 12:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:07.869 12:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:07.869 12:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:07.869 12:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:41:07.869 12:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:41:07.869 12:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:07.869 12:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:07.869 [2024-11-18 12:09:33.555052] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:07.869 12:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:07.869 12:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:41:07.869 12:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:07.869 12:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:07.869 
12:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:07.869 12:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:07.869 12:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:07.869 12:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:07.869 [2024-11-18 12:09:33.571270] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:07.869 12:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:07.869 12:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:41:07.869 12:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:07.869 12:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:07.869 12:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:07.869 12:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:41:07.869 12:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:07.869 12:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:07.869 malloc0 00:41:07.869 12:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:07.869 12:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:41:07.869 12:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:07.869 12:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:07.869 12:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:07.869 12:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:41:07.869 12:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:41:07.869 12:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:41:07.869 12:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:41:07.869 12:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:07.869 12:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:07.869 { 00:41:07.869 "params": { 00:41:07.869 "name": "Nvme$subsystem", 00:41:07.869 "trtype": "$TEST_TRANSPORT", 00:41:07.869 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:07.869 "adrfam": "ipv4", 00:41:07.869 "trsvcid": "$NVMF_PORT", 00:41:07.869 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:07.869 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:07.869 "hdgst": ${hdgst:-false}, 00:41:07.869 "ddgst": ${ddgst:-false} 00:41:07.869 }, 00:41:07.869 "method": "bdev_nvme_attach_controller" 00:41:07.869 } 00:41:07.869 EOF 00:41:07.869 )") 00:41:07.869 12:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:41:07.869 12:09:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:41:07.869 12:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:41:07.869 12:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:07.869 "params": { 00:41:07.869 "name": "Nvme1", 00:41:07.869 "trtype": "tcp", 00:41:07.869 "traddr": "10.0.0.2", 00:41:07.869 "adrfam": "ipv4", 00:41:07.869 "trsvcid": "4420", 00:41:07.869 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:07.869 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:07.869 "hdgst": false, 00:41:07.869 "ddgst": false 00:41:07.869 }, 00:41:07.869 "method": "bdev_nvme_attach_controller" 00:41:07.869 }' 00:41:07.869 [2024-11-18 12:09:33.732669] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:41:07.869 [2024-11-18 12:09:33.732816] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3181556 ] 00:41:08.128 [2024-11-18 12:09:33.890701] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:08.387 [2024-11-18 12:09:34.028927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:08.953 Running I/O for 10 seconds... 
00:41:10.822 3824.00 IOPS, 29.88 MiB/s [2024-11-18T11:09:38.080Z] 3806.50 IOPS, 29.74 MiB/s [2024-11-18T11:09:39.013Z] 3803.00 IOPS, 29.71 MiB/s [2024-11-18T11:09:39.947Z] 3792.75 IOPS, 29.63 MiB/s [2024-11-18T11:09:40.881Z] 3790.00 IOPS, 29.61 MiB/s [2024-11-18T11:09:41.816Z] 3818.83 IOPS, 29.83 MiB/s [2024-11-18T11:09:42.751Z] 3816.29 IOPS, 29.81 MiB/s [2024-11-18T11:09:44.126Z] 3809.50 IOPS, 29.76 MiB/s [2024-11-18T11:09:45.062Z] 3804.11 IOPS, 29.72 MiB/s [2024-11-18T11:09:45.062Z] 3806.40 IOPS, 29.74 MiB/s 00:41:19.177 Latency(us) 00:41:19.177 [2024-11-18T11:09:45.062Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:19.177 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:41:19.177 Verification LBA range: start 0x0 length 0x1000 00:41:19.177 Nvme1n1 : 10.03 3808.99 29.76 0.00 0.00 33515.87 5121.52 41554.68 00:41:19.177 [2024-11-18T11:09:45.062Z] =================================================================================================================== 00:41:19.177 [2024-11-18T11:09:45.062Z] Total : 3808.99 29.76 0.00 0.00 33515.87 5121.52 41554.68 00:41:19.744 12:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3182861 00:41:19.744 12:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:41:19.744 12:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:19.744 12:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:41:19.744 12:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:41:19.744 12:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:41:19.744 12:09:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:41:19.744 12:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:19.744 12:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:19.744 { 00:41:19.744 "params": { 00:41:19.744 "name": "Nvme$subsystem", 00:41:19.744 "trtype": "$TEST_TRANSPORT", 00:41:19.744 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:19.744 "adrfam": "ipv4", 00:41:19.744 "trsvcid": "$NVMF_PORT", 00:41:19.744 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:19.744 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:19.744 "hdgst": ${hdgst:-false}, 00:41:19.744 "ddgst": ${ddgst:-false} 00:41:19.744 }, 00:41:19.744 "method": "bdev_nvme_attach_controller" 00:41:19.744 } 00:41:19.744 EOF 00:41:19.744 )") 00:41:19.744 [2024-11-18 12:09:45.566952] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.744 [2024-11-18 12:09:45.567003] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.744 12:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:41:19.744 12:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:41:19.744 12:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:41:19.744 12:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:19.744 "params": { 00:41:19.744 "name": "Nvme1", 00:41:19.744 "trtype": "tcp", 00:41:19.744 "traddr": "10.0.0.2", 00:41:19.744 "adrfam": "ipv4", 00:41:19.744 "trsvcid": "4420", 00:41:19.744 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:19.744 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:19.744 "hdgst": false, 00:41:19.744 "ddgst": false 00:41:19.744 }, 00:41:19.744 "method": "bdev_nvme_attach_controller" 00:41:19.744 }' 00:41:19.744 [2024-11-18 12:09:45.574881] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.744 [2024-11-18 12:09:45.574911] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.744 [2024-11-18 12:09:45.582848] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.744 [2024-11-18 12:09:45.582875] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.744 [2024-11-18 12:09:45.590866] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.744 [2024-11-18 12:09:45.590894] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.744 [2024-11-18 12:09:45.598889] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.744 [2024-11-18 12:09:45.598916] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.744 [2024-11-18 12:09:45.606829] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.744 [2024-11-18 12:09:45.606869] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.744 [2024-11-18 12:09:45.614868] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:41:19.744 [2024-11-18 12:09:45.614895] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.744 [2024-11-18 12:09:45.622856] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.744 [2024-11-18 12:09:45.622884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.003 [2024-11-18 12:09:45.630817] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.003 [2024-11-18 12:09:45.630860] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.003 [2024-11-18 12:09:45.638856] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.003 [2024-11-18 12:09:45.638882] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.003 [2024-11-18 12:09:45.646843] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.003 [2024-11-18 12:09:45.646869] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.003 [2024-11-18 12:09:45.649334] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:41:20.003 [2024-11-18 12:09:45.649464] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3182861 ] 00:41:20.003 [2024-11-18 12:09:45.654859] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.003 [2024-11-18 12:09:45.654887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.003 [2024-11-18 12:09:45.662860] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.003 [2024-11-18 12:09:45.662894] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.003 [2024-11-18 12:09:45.670818] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.003 [2024-11-18 12:09:45.670859] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.003 [2024-11-18 12:09:45.678869] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.003 [2024-11-18 12:09:45.678897] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.003 [2024-11-18 12:09:45.686852] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.003 [2024-11-18 12:09:45.686879] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.003 [2024-11-18 12:09:45.694854] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.003 [2024-11-18 12:09:45.694881] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.003 [2024-11-18 12:09:45.702853] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.003 [2024-11-18 12:09:45.702879] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:41:20.003 [2024-11-18 12:09:45.710809] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.003 [2024-11-18 12:09:45.710837] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
[the preceding pair of *ERROR* records repeats at roughly 8 ms intervals from 12:09:45.718 through 12:09:46.526; only the distinct records from that interval are kept below]
00:41:20.003 [2024-11-18 12:09:45.787418] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 
00:41:20.263 [2024-11-18 12:09:45.923095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 
00:41:20.782 Running I/O for 5 seconds... 
[the same pair of *ERROR* records continues at roughly 15 ms intervals from 12:09:46.546 through 12:09:47.540 while the I/O workload runs]
00:41:21.817 8256.00 IOPS, 64.50 MiB/s [2024-11-18T11:09:47.702Z] [2024-11-18 12:09:47.540389] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:41:21.817 [2024-11-18 12:09:47.540427] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.817 [2024-11-18 12:09:47.555505] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.817 [2024-11-18 12:09:47.555555] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.817 [2024-11-18 12:09:47.569995] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.817 [2024-11-18 12:09:47.570034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.817 [2024-11-18 12:09:47.584641] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.817 [2024-11-18 12:09:47.584673] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.817 [2024-11-18 12:09:47.599570] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.817 [2024-11-18 12:09:47.599603] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.817 [2024-11-18 12:09:47.614813] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.817 [2024-11-18 12:09:47.614866] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.817 [2024-11-18 12:09:47.630403] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.817 [2024-11-18 12:09:47.630442] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.817 [2024-11-18 12:09:47.645545] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.817 [2024-11-18 12:09:47.645578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.817 [2024-11-18 12:09:47.660982] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.817 
[2024-11-18 12:09:47.661021] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.817 [2024-11-18 12:09:47.676352] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.817 [2024-11-18 12:09:47.676392] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.817 [2024-11-18 12:09:47.690802] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.817 [2024-11-18 12:09:47.690852] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.075 [2024-11-18 12:09:47.705950] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.075 [2024-11-18 12:09:47.705983] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.075 [2024-11-18 12:09:47.721066] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.075 [2024-11-18 12:09:47.721106] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.075 [2024-11-18 12:09:47.736253] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.075 [2024-11-18 12:09:47.736292] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.075 [2024-11-18 12:09:47.751744] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.075 [2024-11-18 12:09:47.751776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.075 [2024-11-18 12:09:47.767061] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.075 [2024-11-18 12:09:47.767099] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.075 [2024-11-18 12:09:47.782005] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.075 [2024-11-18 12:09:47.782044] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.075 [2024-11-18 12:09:47.797456] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.075 [2024-11-18 12:09:47.797504] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.076 [2024-11-18 12:09:47.812565] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.076 [2024-11-18 12:09:47.812598] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.076 [2024-11-18 12:09:47.827422] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.076 [2024-11-18 12:09:47.827460] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.076 [2024-11-18 12:09:47.843034] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.076 [2024-11-18 12:09:47.843074] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.076 [2024-11-18 12:09:47.858568] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.076 [2024-11-18 12:09:47.858603] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.076 [2024-11-18 12:09:47.873672] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.076 [2024-11-18 12:09:47.873707] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.076 [2024-11-18 12:09:47.889068] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.076 [2024-11-18 12:09:47.889106] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.076 [2024-11-18 12:09:47.904182] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.076 [2024-11-18 12:09:47.904221] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:41:22.076 [2024-11-18 12:09:47.919268] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.076 [2024-11-18 12:09:47.919307] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.076 [2024-11-18 12:09:47.934943] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.076 [2024-11-18 12:09:47.934983] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.076 [2024-11-18 12:09:47.950526] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.076 [2024-11-18 12:09:47.950576] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.334 [2024-11-18 12:09:47.965399] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.334 [2024-11-18 12:09:47.965440] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.334 [2024-11-18 12:09:47.980617] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.334 [2024-11-18 12:09:47.980650] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.334 [2024-11-18 12:09:47.995596] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.334 [2024-11-18 12:09:47.995631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.334 [2024-11-18 12:09:48.010911] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.334 [2024-11-18 12:09:48.010949] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.334 [2024-11-18 12:09:48.026343] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.334 [2024-11-18 12:09:48.026383] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.334 [2024-11-18 12:09:48.040912] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.334 [2024-11-18 12:09:48.040951] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.334 [2024-11-18 12:09:48.055538] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.334 [2024-11-18 12:09:48.055587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.334 [2024-11-18 12:09:48.070660] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.334 [2024-11-18 12:09:48.070695] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.334 [2024-11-18 12:09:48.085675] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.334 [2024-11-18 12:09:48.085708] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.334 [2024-11-18 12:09:48.100389] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.334 [2024-11-18 12:09:48.100428] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.334 [2024-11-18 12:09:48.115978] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.334 [2024-11-18 12:09:48.116018] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.334 [2024-11-18 12:09:48.131627] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.334 [2024-11-18 12:09:48.131660] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.334 [2024-11-18 12:09:48.146871] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.334 [2024-11-18 12:09:48.146912] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.334 [2024-11-18 12:09:48.162882] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:41:22.334 [2024-11-18 12:09:48.162922] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.334 [2024-11-18 12:09:48.178424] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.334 [2024-11-18 12:09:48.178462] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.334 [2024-11-18 12:09:48.194440] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.334 [2024-11-18 12:09:48.194499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.334 [2024-11-18 12:09:48.210350] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.334 [2024-11-18 12:09:48.210390] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.592 [2024-11-18 12:09:48.225707] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.593 [2024-11-18 12:09:48.225740] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.593 [2024-11-18 12:09:48.240974] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.593 [2024-11-18 12:09:48.241014] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.593 [2024-11-18 12:09:48.256432] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.593 [2024-11-18 12:09:48.256470] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.593 [2024-11-18 12:09:48.272599] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.593 [2024-11-18 12:09:48.272632] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.593 [2024-11-18 12:09:48.288614] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.593 
[2024-11-18 12:09:48.288647] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.593 [2024-11-18 12:09:48.302848] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.593 [2024-11-18 12:09:48.302888] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.593 [2024-11-18 12:09:48.319153] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.593 [2024-11-18 12:09:48.319202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.593 [2024-11-18 12:09:48.334613] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.593 [2024-11-18 12:09:48.334648] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.593 [2024-11-18 12:09:48.350285] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.593 [2024-11-18 12:09:48.350324] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.593 [2024-11-18 12:09:48.365475] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.593 [2024-11-18 12:09:48.365539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.593 [2024-11-18 12:09:48.381335] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.593 [2024-11-18 12:09:48.381374] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.593 [2024-11-18 12:09:48.396745] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.593 [2024-11-18 12:09:48.396792] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.593 [2024-11-18 12:09:48.412101] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.593 [2024-11-18 12:09:48.412140] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.593 [2024-11-18 12:09:48.427423] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.593 [2024-11-18 12:09:48.427462] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.593 [2024-11-18 12:09:48.442438] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.593 [2024-11-18 12:09:48.442477] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.593 [2024-11-18 12:09:48.457137] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.593 [2024-11-18 12:09:48.457175] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.593 [2024-11-18 12:09:48.473794] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.593 [2024-11-18 12:09:48.473827] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.851 [2024-11-18 12:09:48.489054] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.851 [2024-11-18 12:09:48.489093] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.851 [2024-11-18 12:09:48.504126] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.851 [2024-11-18 12:09:48.504164] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.851 [2024-11-18 12:09:48.519280] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.851 [2024-11-18 12:09:48.519318] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.851 [2024-11-18 12:09:48.534178] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.851 [2024-11-18 12:09:48.534217] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:41:22.851 8276.00 IOPS, 64.66 MiB/s [2024-11-18T11:09:48.736Z] [2024-11-18 12:09:48.548354] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.851 [2024-11-18 12:09:48.548393] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.851 [2024-11-18 12:09:48.564873] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.851 [2024-11-18 12:09:48.564912] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.851 [2024-11-18 12:09:48.580632] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.851 [2024-11-18 12:09:48.580665] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.851 [2024-11-18 12:09:48.595803] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.851 [2024-11-18 12:09:48.595852] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.851 [2024-11-18 12:09:48.611429] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.851 [2024-11-18 12:09:48.611476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.851 [2024-11-18 12:09:48.626243] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.851 [2024-11-18 12:09:48.626281] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.851 [2024-11-18 12:09:48.640780] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.851 [2024-11-18 12:09:48.640812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.851 [2024-11-18 12:09:48.657050] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.851 [2024-11-18 12:09:48.657089] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:41:22.851 [2024-11-18 12:09:48.671959] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.851 [2024-11-18 12:09:48.671998] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.851 [2024-11-18 12:09:48.687355] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.851 [2024-11-18 12:09:48.687392] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.851 [2024-11-18 12:09:48.703295] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.851 [2024-11-18 12:09:48.703334] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.851 [2024-11-18 12:09:48.718871] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.851 [2024-11-18 12:09:48.718912] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.851 [2024-11-18 12:09:48.733895] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.851 [2024-11-18 12:09:48.733928] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.110 [2024-11-18 12:09:48.749028] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.110 [2024-11-18 12:09:48.749066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.110 [2024-11-18 12:09:48.763669] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.110 [2024-11-18 12:09:48.763703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.110 [2024-11-18 12:09:48.778872] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.110 [2024-11-18 12:09:48.778912] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.110 [2024-11-18 12:09:48.793720] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.110 [2024-11-18 12:09:48.793754] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.110 [2024-11-18 12:09:48.807670] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.110 [2024-11-18 12:09:48.807705] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.110 [2024-11-18 12:09:48.824080] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.110 [2024-11-18 12:09:48.824119] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.110 [2024-11-18 12:09:48.839559] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.110 [2024-11-18 12:09:48.839594] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.110 [2024-11-18 12:09:48.855590] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.110 [2024-11-18 12:09:48.855625] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.110 [2024-11-18 12:09:48.870888] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.110 [2024-11-18 12:09:48.870928] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.110 [2024-11-18 12:09:48.886192] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.110 [2024-11-18 12:09:48.886230] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.110 [2024-11-18 12:09:48.902597] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.110 [2024-11-18 12:09:48.902643] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.110 [2024-11-18 12:09:48.918300] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:41:23.110 [2024-11-18 12:09:48.918340] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.110 [2024-11-18 12:09:48.933621] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.110 [2024-11-18 12:09:48.933653] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.110 [2024-11-18 12:09:48.948826] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.110 [2024-11-18 12:09:48.948879] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.110 [2024-11-18 12:09:48.964292] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.110 [2024-11-18 12:09:48.964331] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.110 [2024-11-18 12:09:48.978926] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.110 [2024-11-18 12:09:48.978965] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.110 [2024-11-18 12:09:48.992830] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.110 [2024-11-18 12:09:48.992878] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.368 [2024-11-18 12:09:49.008562] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.368 [2024-11-18 12:09:49.008597] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.368 [2024-11-18 12:09:49.023384] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.368 [2024-11-18 12:09:49.023423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.368 [2024-11-18 12:09:49.039066] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.368 
[2024-11-18 12:09:49.039105] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.368 [2024-11-18 12:09:49.054245] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.368 [2024-11-18 12:09:49.054283] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.368 [2024-11-18 12:09:49.069434] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.368 [2024-11-18 12:09:49.069473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.368 [2024-11-18 12:09:49.085199] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.368 [2024-11-18 12:09:49.085238] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.368 [2024-11-18 12:09:49.100067] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.368 [2024-11-18 12:09:49.100105] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.368 [2024-11-18 12:09:49.115295] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.368 [2024-11-18 12:09:49.115333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.368 [2024-11-18 12:09:49.130301] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.368 [2024-11-18 12:09:49.130340] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.368 [2024-11-18 12:09:49.146242] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.368 [2024-11-18 12:09:49.146280] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.368 [2024-11-18 12:09:49.161882] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.368 [2024-11-18 12:09:49.161921] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.368 [2024-11-18 12:09:49.176943] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.368 [2024-11-18 12:09:49.176982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.368 [2024-11-18 12:09:49.191967] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.368 [2024-11-18 12:09:49.192005] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.368 [2024-11-18 12:09:49.206954] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.368 [2024-11-18 12:09:49.206993] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.368 [2024-11-18 12:09:49.221455] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.368 [2024-11-18 12:09:49.221504] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.368 [2024-11-18 12:09:49.236292] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.368 [2024-11-18 12:09:49.236332] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.368 [2024-11-18 12:09:49.252916] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.368 [2024-11-18 12:09:49.252964] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.626 [2024-11-18 12:09:49.267790] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.626 [2024-11-18 12:09:49.267822] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.626 [2024-11-18 12:09:49.282705] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.626 [2024-11-18 12:09:49.282740] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:41:23.626 [2024-11-18 12:09:49.298416] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.626 [2024-11-18 12:09:49.298456] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.626 [2024-11-18 12:09:49.313326] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.626 [2024-11-18 12:09:49.313364] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.626 [2024-11-18 12:09:49.327873] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.626 [2024-11-18 12:09:49.327913] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.626 [2024-11-18 12:09:49.343717] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.626 [2024-11-18 12:09:49.343750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.626 [2024-11-18 12:09:49.358508] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.626 [2024-11-18 12:09:49.358560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.626 [2024-11-18 12:09:49.372235] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.626 [2024-11-18 12:09:49.372273] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.626 [2024-11-18 12:09:49.388896] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.626 [2024-11-18 12:09:49.388933] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.626 [2024-11-18 12:09:49.404727] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.626 [2024-11-18 12:09:49.404779] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.626 [2024-11-18 12:09:49.420231] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.626 [2024-11-18 12:09:49.420270] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.626 [2024-11-18 12:09:49.436116] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.626 [2024-11-18 12:09:49.436156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.626 [2024-11-18 12:09:49.450080] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.626 [2024-11-18 12:09:49.450114] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.627 [2024-11-18 12:09:49.465082] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.627 [2024-11-18 12:09:49.465114] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.627 [2024-11-18 12:09:49.479121] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.627 [2024-11-18 12:09:49.479154] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.627 [2024-11-18 12:09:49.493672] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.627 [2024-11-18 12:09:49.493709] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.627 [2024-11-18 12:09:49.508295] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.627 [2024-11-18 12:09:49.508333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.885 [2024-11-18 12:09:49.522971] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.885 [2024-11-18 12:09:49.523005] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.885 [2024-11-18 12:09:49.537702] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:41:23.885 [2024-11-18 12:09:49.537748] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.885 8311.33 IOPS, 64.93 MiB/s [2024-11-18T11:09:49.770Z] [2024-11-18 12:09:49.552281] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.885 [2024-11-18 12:09:49.552314] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.885 [2024-11-18 12:09:49.567486] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.885 [2024-11-18 12:09:49.567529] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.885 [2024-11-18 12:09:49.588409] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.885 [2024-11-18 12:09:49.588442] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.885 [2024-11-18 12:09:49.600929] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.885 [2024-11-18 12:09:49.600962] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.885 [2024-11-18 12:09:49.616630] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.885 [2024-11-18 12:09:49.616664] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.885 [2024-11-18 12:09:49.630487] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.885 [2024-11-18 12:09:49.630530] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.885 [2024-11-18 12:09:49.645105] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.885 [2024-11-18 12:09:49.645138] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.885 [2024-11-18 12:09:49.659321] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:41:23.885 [2024-11-18 12:09:49.659354] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.885 [2024-11-18 12:09:49.675717] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.885 [2024-11-18 12:09:49.675754] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.885 [2024-11-18 12:09:49.687600] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.885 [2024-11-18 12:09:49.687635] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.885 [2024-11-18 12:09:49.706945] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.885 [2024-11-18 12:09:49.706978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.885 [2024-11-18 12:09:49.719460] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.885 [2024-11-18 12:09:49.719519] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.885 [2024-11-18 12:09:49.738909] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.885 [2024-11-18 12:09:49.738943] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.885 [2024-11-18 12:09:49.752354] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.885 [2024-11-18 12:09:49.752398] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.885 [2024-11-18 12:09:49.768361] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.885 [2024-11-18 12:09:49.768394] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.144 [2024-11-18 12:09:49.782720] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.144 
[2024-11-18 12:09:49.782755] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.144 [2024-11-18 12:09:49.797349] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.144 [2024-11-18 12:09:49.797382] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.144 [2024-11-18 12:09:49.811985] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.144 [2024-11-18 12:09:49.812021] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.144 [2024-11-18 12:09:49.830379] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.144 [2024-11-18 12:09:49.830428] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.144 [2024-11-18 12:09:49.843209] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.144 [2024-11-18 12:09:49.843240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.144 [2024-11-18 12:09:49.858892] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.144 [2024-11-18 12:09:49.858926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.144 [2024-11-18 12:09:49.872501] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.144 [2024-11-18 12:09:49.872547] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.144 [2024-11-18 12:09:49.887439] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.144 [2024-11-18 12:09:49.887486] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.144 [2024-11-18 12:09:49.904344] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.144 [2024-11-18 12:09:49.904396] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.144 [2024-11-18 12:09:49.916980] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.144 [2024-11-18 12:09:49.917013] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.144 [2024-11-18 12:09:49.932088] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.144 [2024-11-18 12:09:49.932122] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.144 [2024-11-18 12:09:49.944647] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.144 [2024-11-18 12:09:49.944682] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.144 [2024-11-18 12:09:49.959988] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.144 [2024-11-18 12:09:49.960023] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.144 [2024-11-18 12:09:49.973991] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.144 [2024-11-18 12:09:49.974024] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.144 [2024-11-18 12:09:49.988509] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.144 [2024-11-18 12:09:49.988545] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.144 [2024-11-18 12:09:50.002956] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.144 [2024-11-18 12:09:50.002992] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.144 [2024-11-18 12:09:50.017467] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.144 [2024-11-18 12:09:50.017524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:41:24.402 [2024-11-18 12:09:50.031969] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.402 [2024-11-18 12:09:50.032034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.402 [2024-11-18 12:09:50.046301] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.402 [2024-11-18 12:09:50.046363] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.402 [2024-11-18 12:09:50.062266] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.402 [2024-11-18 12:09:50.062301] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.402 [2024-11-18 12:09:50.076899] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.402 [2024-11-18 12:09:50.076934] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.402 [2024-11-18 12:09:50.091674] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.402 [2024-11-18 12:09:50.091711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.402 [2024-11-18 12:09:50.105724] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.402 [2024-11-18 12:09:50.105760] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.402 [2024-11-18 12:09:50.120236] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.402 [2024-11-18 12:09:50.120270] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.402 [2024-11-18 12:09:50.134455] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.402 [2024-11-18 12:09:50.134515] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.403 [2024-11-18 12:09:50.148461] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.403 [2024-11-18 12:09:50.148519] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.403 [2024-11-18 12:09:50.164091] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.403 [2024-11-18 12:09:50.164143] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.403 [2024-11-18 12:09:50.180286] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.403 [2024-11-18 12:09:50.180322] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.403 [2024-11-18 12:09:50.192953] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.403 [2024-11-18 12:09:50.192989] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.403 [2024-11-18 12:09:50.208384] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.403 [2024-11-18 12:09:50.208418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.403 [2024-11-18 12:09:50.222262] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.403 [2024-11-18 12:09:50.222296] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.403 [2024-11-18 12:09:50.236488] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.403 [2024-11-18 12:09:50.236533] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.403 [2024-11-18 12:09:50.250622] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.403 [2024-11-18 12:09:50.250657] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.403 [2024-11-18 12:09:50.264436] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:41:24.403 [2024-11-18 12:09:50.264484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.403 [2024-11-18 12:09:50.278629] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.403 [2024-11-18 12:09:50.278663] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.661 [2024-11-18 12:09:50.293171] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.661 [2024-11-18 12:09:50.293205] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.661 [2024-11-18 12:09:50.307239] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.661 [2024-11-18 12:09:50.307281] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.661 [2024-11-18 12:09:50.321197] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.661 [2024-11-18 12:09:50.321232] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.661 [2024-11-18 12:09:50.335615] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.661 [2024-11-18 12:09:50.335650] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.661 [2024-11-18 12:09:50.354915] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.661 [2024-11-18 12:09:50.354965] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.661 [2024-11-18 12:09:50.367111] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.661 [2024-11-18 12:09:50.367144] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.661 [2024-11-18 12:09:50.382804] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.661 
[2024-11-18 12:09:50.382838] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.661 [2024-11-18 12:09:50.397076] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.661 [2024-11-18 12:09:50.397111] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.661 [2024-11-18 12:09:50.411897] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.661 [2024-11-18 12:09:50.411931] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.661 [2024-11-18 12:09:50.428680] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.661 [2024-11-18 12:09:50.428717] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.661 [2024-11-18 12:09:50.441520] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.661 [2024-11-18 12:09:50.441556] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.661 [2024-11-18 12:09:50.457637] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.661 [2024-11-18 12:09:50.457674] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.661 [2024-11-18 12:09:50.471564] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.661 [2024-11-18 12:09:50.471599] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.661 [2024-11-18 12:09:50.484743] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.661 [2024-11-18 12:09:50.484798] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.661 [2024-11-18 12:09:50.500043] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.661 [2024-11-18 12:09:50.500075] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.661 [2024-11-18 12:09:50.514678] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.661 [2024-11-18 12:09:50.514715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.661 [2024-11-18 12:09:50.529158] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.661 [2024-11-18 12:09:50.529191] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.661 8449.50 IOPS, 66.01 MiB/s [2024-11-18T11:09:50.546Z] [2024-11-18 12:09:50.543610] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.661 [2024-11-18 12:09:50.543646] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.919 [2024-11-18 12:09:50.558046] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.919 [2024-11-18 12:09:50.558079] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.919 [2024-11-18 12:09:50.572729] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.920 [2024-11-18 12:09:50.572780] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.920 [2024-11-18 12:09:50.587181] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.920 [2024-11-18 12:09:50.587214] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.920 [2024-11-18 12:09:50.600728] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.920 [2024-11-18 12:09:50.600777] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.920 [2024-11-18 12:09:50.615182] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.920 [2024-11-18 12:09:50.615214] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.920 [2024-11-18 12:09:50.629608] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.920 [2024-11-18 12:09:50.629645] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.920 [2024-11-18 12:09:50.643890] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.920 [2024-11-18 12:09:50.643922] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.920 [2024-11-18 12:09:50.661614] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.920 [2024-11-18 12:09:50.661650] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.920 [2024-11-18 12:09:50.674389] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.920 [2024-11-18 12:09:50.674421] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.920 [2024-11-18 12:09:50.690750] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.920 [2024-11-18 12:09:50.690801] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.920 [2024-11-18 12:09:50.705585] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.920 [2024-11-18 12:09:50.705621] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.920 [2024-11-18 12:09:50.719967] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.920 [2024-11-18 12:09:50.720000] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.920 [2024-11-18 12:09:50.734338] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.920 [2024-11-18 12:09:50.734371] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:41:24.920 [2024-11-18 12:09:50.748612] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.920 [2024-11-18 12:09:50.748648] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.920 [2024-11-18 12:09:50.762594] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.920 [2024-11-18 12:09:50.762630] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.920 [2024-11-18 12:09:50.776996] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.920 [2024-11-18 12:09:50.777028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.920 [2024-11-18 12:09:50.791368] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.920 [2024-11-18 12:09:50.791401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.178 [2024-11-18 12:09:50.809644] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.178 [2024-11-18 12:09:50.809681] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.178 [2024-11-18 12:09:50.822254] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.178 [2024-11-18 12:09:50.822287] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.178 [2024-11-18 12:09:50.837800] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.178 [2024-11-18 12:09:50.837849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.178 [2024-11-18 12:09:50.851611] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.178 [2024-11-18 12:09:50.851646] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.178 [2024-11-18 12:09:50.868839] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.178 [2024-11-18 12:09:50.868886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.178 [2024-11-18 12:09:50.880978] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.178 [2024-11-18 12:09:50.881012] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.178 [2024-11-18 12:09:50.897139] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.178 [2024-11-18 12:09:50.897174] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.178 [2024-11-18 12:09:50.911990] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.178 [2024-11-18 12:09:50.912023] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.178 [2024-11-18 12:09:50.926796] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.178 [2024-11-18 12:09:50.926844] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.178 [2024-11-18 12:09:50.941074] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.178 [2024-11-18 12:09:50.941109] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.178 [2024-11-18 12:09:50.953863] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.178 [2024-11-18 12:09:50.953895] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.178 [2024-11-18 12:09:50.968907] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.178 [2024-11-18 12:09:50.968940] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.178 [2024-11-18 12:09:50.982740] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:41:25.178 [2024-11-18 12:09:50.982793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.178 [2024-11-18 12:09:50.997206] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.178 [2024-11-18 12:09:50.997239] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.178 [2024-11-18 12:09:51.011622] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.178 [2024-11-18 12:09:51.011657] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.178 [2024-11-18 12:09:51.026332] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.178 [2024-11-18 12:09:51.026365] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.178 [2024-11-18 12:09:51.040051] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.178 [2024-11-18 12:09:51.040084] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.178 [2024-11-18 12:09:51.054451] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.178 [2024-11-18 12:09:51.054507] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.439 [2024-11-18 12:09:51.068478] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.439 [2024-11-18 12:09:51.068524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.439 [2024-11-18 12:09:51.082795] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.439 [2024-11-18 12:09:51.082846] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.439 [2024-11-18 12:09:51.097564] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.439 
[2024-11-18 12:09:51.097600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.439 [2024-11-18 12:09:51.113107] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.439 [2024-11-18 12:09:51.113146] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.439 [2024-11-18 12:09:51.127948] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.439 [2024-11-18 12:09:51.127987] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.439 [2024-11-18 12:09:51.143069] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.439 [2024-11-18 12:09:51.143107] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.439 [2024-11-18 12:09:51.158108] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.439 [2024-11-18 12:09:51.158148] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.439 [2024-11-18 12:09:51.174369] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.439 [2024-11-18 12:09:51.174408] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.439 [2024-11-18 12:09:51.189381] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.439 [2024-11-18 12:09:51.189421] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.439 [2024-11-18 12:09:51.204755] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.439 [2024-11-18 12:09:51.204802] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.439 [2024-11-18 12:09:51.219759] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.439 [2024-11-18 12:09:51.219812] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.439 [2024-11-18 12:09:51.235699] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.439 [2024-11-18 12:09:51.235734] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.439 [2024-11-18 12:09:51.251072] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.439 [2024-11-18 12:09:51.251112] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.439 [2024-11-18 12:09:51.266345] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.439 [2024-11-18 12:09:51.266385] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.439 [2024-11-18 12:09:51.281677] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.439 [2024-11-18 12:09:51.281712] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.439 [2024-11-18 12:09:51.296850] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.439 [2024-11-18 12:09:51.296889] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.439 [2024-11-18 12:09:51.311992] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.439 [2024-11-18 12:09:51.312032] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.697 [2024-11-18 12:09:51.326955] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.698 [2024-11-18 12:09:51.326995] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.698 [2024-11-18 12:09:51.341677] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.698 [2024-11-18 12:09:51.341712] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:41:25.698 [2024-11-18 12:09:51.357104] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:41:25.698 [2024-11-18 12:09:51.357144] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:41:25.698 [... the two-line error pair above repeats with advancing timestamps (12:09:51.370708 through 12:09:51.542688) while the duplicate-NSID add is retried ...]
00:41:25.698 8479.40 IOPS, 66.25 MiB/s [2024-11-18T11:09:51.583Z]
00:41:25.698 Latency(us)
00:41:25.698 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:41:25.698 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:41:25.698 Nvme1n1 : 5.01 8483.67 66.28 0.00 0.00 15062.19 5946.79 25437.68
00:41:25.698 ===================================================================================================================
00:41:25.698 Total : 8483.67 66.28 0.00 0.00 15062.19 5946.79 25437.68
00:41:25.698 [... the same error pair repeats with advancing timestamps (12:09:51.555790 through 12:09:52.454890, stream time 00:41:25.698 through 00:41:26.796) until the retry loop is stopped ...]
00:41:26.796 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3182861) - No such process
00:41:26.796 12:09:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3182861
00:41:26.796 12:09:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:41:26.796 12:09:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:26.796 12:09:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:41:26.796 12:09:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:26.796 12:09:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:41:26.796 12:09:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:26.796 12:09:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:26.796 delay0 00:41:26.796 12:09:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:26.796 12:09:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:41:26.796 12:09:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:26.796 12:09:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:26.796 12:09:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:26.796 12:09:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:41:26.796 [2024-11-18 12:09:52.639963] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:41:34.915 Initializing NVMe Controllers 00:41:34.915 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:41:34.915 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:41:34.915 
Initialization complete. Launching workers. 00:41:34.915 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 228, failed: 18242 00:41:34.915 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 18321, failed to submit 149 00:41:34.915 success 18252, unsuccessful 69, failed 0 00:41:34.915 12:09:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:41:34.915 12:09:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:41:34.915 12:09:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:34.915 12:09:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:41:34.915 12:09:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:34.915 12:09:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:41:34.915 12:09:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:34.915 12:09:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:34.915 rmmod nvme_tcp 00:41:34.915 rmmod nvme_fabrics 00:41:34.915 rmmod nvme_keyring 00:41:34.915 12:09:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:34.915 12:09:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:41:34.915 12:09:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:41:34.915 12:09:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 3181398 ']' 00:41:34.915 12:09:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 3181398 00:41:34.915 12:09:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 3181398 ']' 00:41:34.915 12:09:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 3181398 00:41:34.915 12:09:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:41:34.915 12:09:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:34.915 12:09:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3181398 00:41:34.915 12:09:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:41:34.915 12:09:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:41:34.915 12:09:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3181398' 00:41:34.915 killing process with pid 3181398 00:41:34.915 12:09:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 3181398 00:41:34.915 12:09:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 3181398 00:41:35.174 12:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:41:35.174 12:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:35.174 12:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:35.174 12:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:41:35.174 12:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:41:35.174 12:10:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:35.174 12:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:41:35.174 12:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:35.174 12:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:35.175 12:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:35.175 12:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:35.175 12:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:37.717 12:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:37.717 00:41:37.717 real 0m32.861s 00:41:37.717 user 0m47.485s 00:41:37.717 sys 0m10.016s 00:41:37.717 12:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:37.717 12:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:37.717 ************************************ 00:41:37.717 END TEST nvmf_zcopy 00:41:37.717 ************************************ 00:41:37.717 12:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:41:37.717 12:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:41:37.717 12:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:37.717 12:10:03 
nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:41:37.717 ************************************ 00:41:37.717 START TEST nvmf_nmic 00:41:37.717 ************************************ 00:41:37.717 12:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:41:37.717 * Looking for test storage... 00:41:37.717 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:37.717 12:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:41:37.717 12:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:41:37.717 12:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:41:37.717 12:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:41:37.717 12:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:37.717 12:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:37.717 12:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:37.717 12:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:41:37.717 12:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:41:37.717 12:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:41:37.717 12:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:41:37.717 12:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
scripts/common.sh@338 -- # local 'op=<' 00:41:37.717 12:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:41:37.717 12:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:41:37.717 12:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:37.717 12:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:41:37.717 12:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:41:37.717 12:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:37.717 12:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:41:37.717 12:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:41:37.717 12:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:41:37.717 12:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:37.717 12:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:41:37.717 12:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:41:37.717 12:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:41:37.717 12:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:41:37.717 12:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:37.717 12:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:41:37.717 12:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
scripts/common.sh@366 -- # ver2[v]=2 00:41:37.717 12:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:37.717 12:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:37.717 12:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:41:37.717 12:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:37.717 12:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:41:37.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:37.717 --rc genhtml_branch_coverage=1 00:41:37.717 --rc genhtml_function_coverage=1 00:41:37.717 --rc genhtml_legend=1 00:41:37.717 --rc geninfo_all_blocks=1 00:41:37.717 --rc geninfo_unexecuted_blocks=1 00:41:37.717 00:41:37.717 ' 00:41:37.717 12:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:41:37.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:37.717 --rc genhtml_branch_coverage=1 00:41:37.717 --rc genhtml_function_coverage=1 00:41:37.717 --rc genhtml_legend=1 00:41:37.717 --rc geninfo_all_blocks=1 00:41:37.717 --rc geninfo_unexecuted_blocks=1 00:41:37.717 00:41:37.717 ' 00:41:37.717 12:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:41:37.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:37.717 --rc genhtml_branch_coverage=1 00:41:37.717 --rc genhtml_function_coverage=1 00:41:37.717 --rc genhtml_legend=1 00:41:37.717 --rc geninfo_all_blocks=1 00:41:37.717 --rc geninfo_unexecuted_blocks=1 00:41:37.717 00:41:37.717 ' 00:41:37.717 12:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@1707 -- # LCOV='lcov 00:41:37.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:37.717 --rc genhtml_branch_coverage=1 00:41:37.717 --rc genhtml_function_coverage=1 00:41:37.717 --rc genhtml_legend=1 00:41:37.717 --rc geninfo_all_blocks=1 00:41:37.717 --rc geninfo_unexecuted_blocks=1 00:41:37.717 00:41:37.717 ' 00:41:37.717 12:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:37.717 12:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:41:37.717 12:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:37.718 12:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:37.718 12:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:37.718 12:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:37.718 12:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:37.718 12:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:37.718 12:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:37.718 12:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:37.718 12:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:37.718 12:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:37.718 12:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:41:37.718 12:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:41:37.718 12:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:37.718 12:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:37.718 12:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:37.718 12:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:37.718 12:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:37.718 12:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:41:37.718 12:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:37.718 12:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:37.718 12:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:37.718 12:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:37.718 12:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:37.718 12:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:37.718 12:10:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:41:37.718 12:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:37.718 12:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:41:37.718 12:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:37.718 12:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:37.718 12:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:37.718 12:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:37.718 12:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:37.718 12:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:41:37.718 12:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:41:37.718 12:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:37.718 12:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 
00:41:37.718 12:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:37.718 12:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:41:37.718 12:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:41:37.718 12:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:41:37.718 12:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:37.718 12:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:37.718 12:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:37.718 12:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:37.718 12:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:37.718 12:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:37.718 12:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:37.718 12:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:37.718 12:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:41:37.718 12:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:41:37.718 12:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:41:37.718 12:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:39.625 12:10:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:39.625 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:41:39.625 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:39.625 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:39.625 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:39.625 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:39.625 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:39.625 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:41:39.625 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:39.625 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:41:39.625 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:41:39.625 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:41:39.625 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:41:39.625 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:41:39.625 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:41:39.625 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:39.625 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:39.625 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:39.625 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:39.625 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:39.625 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:39.625 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:39.625 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:39.625 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:39.625 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:39.625 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:39.625 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:39.625 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:39.625 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:39.625 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:39.625 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:39.625 12:10:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:39.625 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:39.625 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:39.625 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:41:39.625 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:41:39.625 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:39.625 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:39.625 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:39.625 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:39.625 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:39.625 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:39.625 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:41:39.625 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:41:39.625 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:39.625 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:39.625 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:39.625 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:39.625 12:10:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:39.625 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:39.625 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:39.625 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:39.625 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:39.625 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:39.625 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:39.625 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:39.625 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:39.625 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:39.625 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:39.625 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:41:39.625 Found net devices under 0000:0a:00.0: cvl_0_0 00:41:39.625 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:39.625 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:39.625 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:39.625 12:10:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:39.625 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:39.625 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:39.625 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:39.625 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:39.626 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:41:39.626 Found net devices under 0000:0a:00.1: cvl_0_1 00:41:39.626 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:39.626 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:41:39.626 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:41:39.626 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:41:39.626 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:41:39.626 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:41:39.626 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:39.626 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:39.626 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:39.626 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:39.626 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:39.626 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:39.626 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:39.626 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:39.626 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:39.626 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:39.626 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:39.626 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:39.626 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:39.626 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:39.626 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:39.626 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:39.626 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:39.626 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:39.626 12:10:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:39.626 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:39.626 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:39.626 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:39.626 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:39.626 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:39.626 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:41:39.626 00:41:39.626 --- 10.0.0.2 ping statistics --- 00:41:39.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:39.626 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:41:39.626 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:39.626 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:39.626 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:41:39.626 00:41:39.626 --- 10.0.0.1 ping statistics --- 00:41:39.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:39.626 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:41:39.626 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:39.626 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:41:39.626 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:41:39.626 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:39.626 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:39.626 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:39.626 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:39.626 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:39.626 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:39.626 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:41:39.626 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:41:39.626 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:39.626 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:39.626 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=3186616 
00:41:39.626 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:41:39.626 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 3186616 00:41:39.626 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 3186616 ']' 00:41:39.626 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:39.626 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:39.626 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:39.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:39.626 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:39.626 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:39.885 [2024-11-18 12:10:05.536539] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:41:39.885 [2024-11-18 12:10:05.539273] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:41:39.885 [2024-11-18 12:10:05.539378] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:39.885 [2024-11-18 12:10:05.684739] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:40.144 [2024-11-18 12:10:05.820569] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:40.144 [2024-11-18 12:10:05.820648] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:40.144 [2024-11-18 12:10:05.820678] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:40.144 [2024-11-18 12:10:05.820699] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:40.144 [2024-11-18 12:10:05.820721] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:40.144 [2024-11-18 12:10:05.823597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:40.145 [2024-11-18 12:10:05.823668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:41:40.145 [2024-11-18 12:10:05.823758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:40.145 [2024-11-18 12:10:05.823768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:41:40.405 [2024-11-18 12:10:06.197586] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:41:40.405 [2024-11-18 12:10:06.203849] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:41:40.405 [2024-11-18 12:10:06.204000] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:41:40.405 [2024-11-18 12:10:06.204829] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:41:40.405 [2024-11-18 12:10:06.205193] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:41:40.664 12:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:40.664 12:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:41:40.664 12:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:41:40.664 12:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:40.664 12:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:40.664 12:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:40.664 12:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:41:40.664 12:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:40.664 12:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:40.664 [2024-11-18 12:10:06.532869] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:40.664 12:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:40.664 12:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:41:40.664 12:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:41:40.664 12:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:40.924 Malloc0 00:41:40.924 12:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:40.924 12:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:41:40.924 12:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:40.924 12:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:40.924 12:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:40.924 12:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:41:40.924 12:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:40.924 12:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:40.924 12:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:40.924 12:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:40.924 12:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:40.924 12:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:40.924 [2024-11-18 12:10:06.649089] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:40.924 12:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:40.924 12:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:41:40.924 test case1: single bdev can't be used in multiple subsystems 00:41:40.924 12:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:41:40.924 12:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:40.924 12:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:40.924 12:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:40.924 12:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:41:40.924 12:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:40.924 12:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:40.924 12:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:40.924 12:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:41:40.924 12:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:41:40.924 12:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:40.924 12:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:40.924 [2024-11-18 12:10:06.672739] bdev.c:8198:bdev_open: *ERROR*: bdev Malloc0 
already claimed: type exclusive_write by module NVMe-oF Target 00:41:40.924 [2024-11-18 12:10:06.672804] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:41:40.924 [2024-11-18 12:10:06.672828] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:40.924 request: 00:41:40.924 { 00:41:40.924 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:41:40.924 "namespace": { 00:41:40.924 "bdev_name": "Malloc0", 00:41:40.924 "no_auto_visible": false 00:41:40.924 }, 00:41:40.924 "method": "nvmf_subsystem_add_ns", 00:41:40.924 "req_id": 1 00:41:40.924 } 00:41:40.924 Got JSON-RPC error response 00:41:40.924 response: 00:41:40.924 { 00:41:40.924 "code": -32602, 00:41:40.924 "message": "Invalid parameters" 00:41:40.924 } 00:41:40.924 12:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:41:40.924 12:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:41:40.924 12:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:41:40.924 12:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:41:40.924 Adding namespace failed - expected result. 
00:41:40.924 12:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:41:40.924 test case2: host connect to nvmf target in multiple paths 00:41:40.924 12:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:41:40.924 12:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:40.924 12:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:40.924 [2024-11-18 12:10:06.680865] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:41:40.924 12:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:40.924 12:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:41:41.183 12:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:41:41.441 12:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:41:41.441 12:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:41:41.441 12:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:41:41.441 12:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:41:41.441 12:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:41:43.347 12:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:41:43.347 12:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:41:43.347 12:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:41:43.347 12:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:41:43.347 12:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:41:43.347 12:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:41:43.347 12:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:41:43.347 [global] 00:41:43.347 thread=1 00:41:43.347 invalidate=1 00:41:43.347 rw=write 00:41:43.347 time_based=1 00:41:43.347 runtime=1 00:41:43.347 ioengine=libaio 00:41:43.347 direct=1 00:41:43.347 bs=4096 00:41:43.347 iodepth=1 00:41:43.347 norandommap=0 00:41:43.347 numjobs=1 00:41:43.347 00:41:43.347 verify_dump=1 00:41:43.347 verify_backlog=512 00:41:43.347 verify_state_save=0 00:41:43.347 do_verify=1 00:41:43.347 verify=crc32c-intel 00:41:43.347 [job0] 00:41:43.347 filename=/dev/nvme0n1 00:41:43.347 Could not set queue depth (nvme0n1) 00:41:43.606 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:43.606 fio-3.35 00:41:43.606 Starting 1 thread 00:41:44.984 00:41:44.984 job0: (groupid=0, jobs=1): err= 0: pid=3187130: Mon Nov 18 
12:10:10 2024 00:41:44.984 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:41:44.984 slat (nsec): min=6885, max=72682, avg=15687.87, stdev=7902.19 00:41:44.984 clat (usec): min=263, max=1259, avg=362.06, stdev=88.78 00:41:44.984 lat (usec): min=275, max=1268, avg=377.75, stdev=90.71 00:41:44.984 clat percentiles (usec): 00:41:44.984 | 1.00th=[ 277], 5.00th=[ 293], 10.00th=[ 302], 20.00th=[ 314], 00:41:44.984 | 30.00th=[ 318], 40.00th=[ 322], 50.00th=[ 326], 60.00th=[ 334], 00:41:44.984 | 70.00th=[ 347], 80.00th=[ 392], 90.00th=[ 510], 95.00th=[ 586], 00:41:44.984 | 99.00th=[ 611], 99.50th=[ 619], 99.90th=[ 1156], 99.95th=[ 1254], 00:41:44.984 | 99.99th=[ 1254] 00:41:44.984 write: IOPS=1599, BW=6398KiB/s (6551kB/s)(6404KiB/1001msec); 0 zone resets 00:41:44.984 slat (nsec): min=8605, max=63255, avg=17652.10, stdev=8052.66 00:41:44.984 clat (usec): min=185, max=446, avg=234.97, stdev=38.81 00:41:44.984 lat (usec): min=195, max=472, avg=252.62, stdev=43.07 00:41:44.984 clat percentiles (usec): 00:41:44.984 | 1.00th=[ 190], 5.00th=[ 194], 10.00th=[ 196], 20.00th=[ 206], 00:41:44.984 | 30.00th=[ 219], 40.00th=[ 223], 50.00th=[ 227], 60.00th=[ 233], 00:41:44.984 | 70.00th=[ 247], 80.00th=[ 253], 90.00th=[ 265], 95.00th=[ 293], 00:41:44.984 | 99.00th=[ 412], 99.50th=[ 420], 99.90th=[ 437], 99.95th=[ 449], 00:41:44.984 | 99.99th=[ 449] 00:41:44.984 bw ( KiB/s): min= 8192, max= 8192, per=100.00%, avg=8192.00, stdev= 0.00, samples=1 00:41:44.984 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:41:44.984 lat (usec) : 250=38.09%, 500=56.84%, 750=5.00% 00:41:44.984 lat (msec) : 2=0.06% 00:41:44.984 cpu : usr=4.40%, sys=6.80%, ctx=3137, majf=0, minf=1 00:41:44.984 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:44.984 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:44.984 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:44.984 issued rwts: 
total=1536,1601,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:44.984 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:44.984 00:41:44.984 Run status group 0 (all jobs): 00:41:44.984 READ: bw=6138KiB/s (6285kB/s), 6138KiB/s-6138KiB/s (6285kB/s-6285kB/s), io=6144KiB (6291kB), run=1001-1001msec 00:41:44.984 WRITE: bw=6398KiB/s (6551kB/s), 6398KiB/s-6398KiB/s (6551kB/s-6551kB/s), io=6404KiB (6558kB), run=1001-1001msec 00:41:44.984 00:41:44.984 Disk stats (read/write): 00:41:44.984 nvme0n1: ios=1345/1536, merge=0/0, ticks=473/340, in_queue=813, util=91.58% 00:41:44.984 12:10:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:41:45.243 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:41:45.243 12:10:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:41:45.243 12:10:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:41:45.243 12:10:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:41:45.243 12:10:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:41:45.243 12:10:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:41:45.243 12:10:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:41:45.243 12:10:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:41:45.243 12:10:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:41:45.243 12:10:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:41:45.243 12:10:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:45.243 12:10:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:41:45.243 12:10:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:45.243 12:10:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:41:45.243 12:10:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:45.243 12:10:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:45.243 rmmod nvme_tcp 00:41:45.243 rmmod nvme_fabrics 00:41:45.243 rmmod nvme_keyring 00:41:45.243 12:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:45.243 12:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:41:45.243 12:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:41:45.243 12:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 3186616 ']' 00:41:45.243 12:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 3186616 00:41:45.243 12:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 3186616 ']' 00:41:45.243 12:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 3186616 00:41:45.243 12:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:41:45.243 12:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:45.243 12:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3186616 
00:41:45.243 12:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:45.243 12:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:45.244 12:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3186616' 00:41:45.244 killing process with pid 3186616 00:41:45.244 12:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 3186616 00:41:45.244 12:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 3186616 00:41:46.622 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:41:46.622 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:46.622 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:46.622 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:41:46.622 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:41:46.622 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:41:46.622 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:46.622 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:46.622 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:46.622 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:46.622 12:10:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:46.622 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:49.160 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:49.160 00:41:49.160 real 0m11.322s 00:41:49.160 user 0m19.617s 00:41:49.160 sys 0m3.984s 00:41:49.160 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:49.160 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:49.160 ************************************ 00:41:49.160 END TEST nvmf_nmic 00:41:49.160 ************************************ 00:41:49.160 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:41:49.160 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:41:49.160 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:49.160 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:41:49.160 ************************************ 00:41:49.160 START TEST nvmf_fio_target 00:41:49.160 ************************************ 00:41:49.160 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:41:49.160 * Looking for test storage... 
00:41:49.160 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:49.160 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:41:49.160 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:41:49.160 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:41:49.160 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:41:49.160 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:49.160 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:49.160 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:49.160 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:41:49.160 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:41:49.160 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:41:49.160 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:41:49.160 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:41:49.160 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:41:49.160 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:41:49.160 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:41:49.160 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:41:49.160 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:41:49.160 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:49.160 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:41:49.160 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:41:49.160 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:41:49.160 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:49.160 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:41:49.160 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:41:49.160 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:41:49.160 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:41:49.160 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:49.160 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:41:49.160 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:41:49.160 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:49.160 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:49.160 
12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:41:49.160 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:49.160 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:41:49.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:49.160 --rc genhtml_branch_coverage=1 00:41:49.160 --rc genhtml_function_coverage=1 00:41:49.160 --rc genhtml_legend=1 00:41:49.160 --rc geninfo_all_blocks=1 00:41:49.160 --rc geninfo_unexecuted_blocks=1 00:41:49.160 00:41:49.160 ' 00:41:49.160 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:41:49.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:49.160 --rc genhtml_branch_coverage=1 00:41:49.160 --rc genhtml_function_coverage=1 00:41:49.160 --rc genhtml_legend=1 00:41:49.160 --rc geninfo_all_blocks=1 00:41:49.160 --rc geninfo_unexecuted_blocks=1 00:41:49.160 00:41:49.160 ' 00:41:49.160 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:41:49.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:49.160 --rc genhtml_branch_coverage=1 00:41:49.160 --rc genhtml_function_coverage=1 00:41:49.160 --rc genhtml_legend=1 00:41:49.160 --rc geninfo_all_blocks=1 00:41:49.160 --rc geninfo_unexecuted_blocks=1 00:41:49.160 00:41:49.160 ' 00:41:49.160 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:41:49.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:49.160 --rc genhtml_branch_coverage=1 00:41:49.160 --rc genhtml_function_coverage=1 00:41:49.160 --rc genhtml_legend=1 00:41:49.160 --rc geninfo_all_blocks=1 
00:41:49.160 --rc geninfo_unexecuted_blocks=1 00:41:49.160 00:41:49.160 ' 00:41:49.160 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:49.160 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:41:49.160 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:49.160 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:49.160 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:49.160 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:49.160 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:49.160 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:49.160 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:49.160 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:49.160 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:49.160 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:49.160 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:41:49.160 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:41:49.160 
12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:49.160 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:49.161 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:49.161 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:49.161 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:49.161 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:41:49.161 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:49.161 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:49.161 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:49.161 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:49.161 12:10:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:49.161 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:49.161 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:41:49.161 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:49.161 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:41:49.161 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:49.161 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:49.161 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:49.161 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:49.161 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:49.161 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:41:49.161 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:41:49.161 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:49.161 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:49.161 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:49.161 
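The repeated `/opt/go/...:/opt/golangci/...:/opt/protoc/...` prefixes in the exported PATH above come from `paths/export.sh` prepending the same directories each time it is sourced. A small idempotent-prepend helper (illustrative, not part of export.sh) avoids that accumulation:

```shell
# prepend_path DIR: put DIR at the front of PATH only if it is not already
# a component, so re-sourcing does not grow duplicates like in the trace.
prepend_path() {
    case ":$PATH:" in
        *":$1:"*) ;;                       # already present: leave PATH alone
        *) PATH="$1${PATH:+:$PATH}" ;;     # otherwise prepend
    esac
}

PATH=/usr/bin
prepend_path /opt/go/1.21.1/bin
prepend_path /opt/go/1.21.1/bin   # second call is a no-op
echo "$PATH"                      # /opt/go/1.21.1/bin:/usr/bin
```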
12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:41:49.161 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:41:49.161 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:49.161 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:41:49.161 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:49.161 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:49.161 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:49.161 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:49.161 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:49.161 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:49.161 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:49.161 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:49.161 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:41:49.161 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:41:49.161 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:41:49.161 12:10:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:41:51.066 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:51.066 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:41:51.066 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:51.066 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:51.066 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:51.066 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:51.066 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:51.066 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:41:51.066 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:51.066 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:41:51.066 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:41:51.066 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:41:51.066 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:41:51.066 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:41:51.066 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:41:51.066 12:10:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:51.066 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:51.066 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:51.066 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:51.066 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:51.066 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:51.066 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:51.066 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:51.066 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:51.066 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:51.066 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:51.066 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:51.066 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:51.066 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:51.066 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:51.066 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:51.066 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:51.066 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:51.066 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:51.066 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:41:51.066 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:41:51.066 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:51.066 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:51.066 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:51.066 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:51.066 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:51.066 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:51.066 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:41:51.066 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:41:51.066 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:51.066 
12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:51.066 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:51.066 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:51.066 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:51.066 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:51.066 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:51.066 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:51.066 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:51.066 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:51.066 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:51.066 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:51.066 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:51.066 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:51.066 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:51.066 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:41:51.066 Found net 
devices under 0000:0a:00.0: cvl_0_0 00:41:51.066 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:51.066 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:51.066 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:51.066 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:51.066 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:51.066 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:51.066 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:51.066 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:51.066 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:41:51.066 Found net devices under 0000:0a:00.1: cvl_0_1 00:41:51.066 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:51.066 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:41:51.066 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:41:51.066 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:41:51.066 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:41:51.066 12:10:16 
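The "Found net devices under 0000:0a:00.x: cvl_0_x" lines come from globbing the kernel's sysfs mapping of a PCI function to its network interfaces, `/sys/bus/pci/devices/<addr>/net/*`, then stripping the path with `${pci_net_devs[@]##*/}`. A sketch of that lookup, with the sysfs root parameterised so the logic can be exercised against a fake tree (the parameterisation is an addition for illustration):

```shell
# list_pci_net_devs PCI_ADDR: print the kernel interface names that sysfs
# associates with the given PCI function, as nvmf/common.sh does.
SYSFS="${SYSFS:-/sys}"
list_pci_net_devs() {
    local pci=$1 d
    for d in "$SYSFS/bus/pci/devices/$pci/net/"*; do
        [ -e "$d" ] && echo "${d##*/}"   # keep only the interface name
    done
}

# e.g. on the machine in the trace:
#   list_pci_net_devs 0000:0a:00.0   -> cvl_0_0
```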
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:41:51.066 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:51.066 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:51.066 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:51.066 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:51.066 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:51.066 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:51.066 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:51.067 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:51.067 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:51.067 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:51.067 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:51.067 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:51.067 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:51.067 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add 
cvl_0_0_ns_spdk 00:41:51.067 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:51.067 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:51.067 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:51.067 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:51.067 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:51.067 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:51.067 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:51.067 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:51.067 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:51.067 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:51.067 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.364 ms 00:41:51.067 00:41:51.067 --- 10.0.0.2 ping statistics --- 00:41:51.067 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:51.067 rtt min/avg/max/mdev = 0.364/0.364/0.364/0.000 ms 00:41:51.067 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:51.067 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:51.067 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:41:51.067 00:41:51.067 --- 10.0.0.1 ping statistics --- 00:41:51.067 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:51.067 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:41:51.067 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:51.067 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:41:51.067 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:41:51.067 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:51.067 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:51.067 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:51.067 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:51.067 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:51.067 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:51.067 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:41:51.067 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:41:51.067 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:51.067 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:41:51.067 12:10:16 
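The namespace wiring that the two ping checks above verify can be condensed as follows: one port of the NIC pair is moved into a namespace to act as the NVMe-oF target (10.0.0.2), the other stays in the host namespace as the initiator (10.0.0.1). This is a dry-run sketch (RUN=echo prints the commands instead of executing them; run with RUN= as root to apply; interface and namespace names follow the trace but are otherwise illustrative):

```shell
# Echo (or, with RUN=, execute) the ip(8) sequence used by nvmf_tcp_init.
RUN="${RUN:-echo}"
setup_tcp_ns() {
    local ns=$1 tgt_if=$2 ini_if=$3
    $RUN ip netns add "$ns"                                    # target namespace
    $RUN ip link set "$tgt_if" netns "$ns"                     # move target port
    $RUN ip addr add 10.0.0.1/24 dev "$ini_if"                 # initiator side
    $RUN ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"
    $RUN ip link set "$ini_if" up
    $RUN ip netns exec "$ns" ip link set "$tgt_if" up
}

setup_tcp_ns cvl_0_0_ns_spdk cvl_0_0 cvl_0_1
```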
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=3189336 00:41:51.067 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:41:51.067 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 3189336 00:41:51.067 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 3189336 ']' 00:41:51.067 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:51.067 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:51.067 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:51.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:51.067 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:51.067 12:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:41:51.067 [2024-11-18 12:10:16.865243] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:41:51.067 [2024-11-18 12:10:16.867943] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:41:51.067 [2024-11-18 12:10:16.868043] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:51.325 [2024-11-18 12:10:17.018650] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:51.325 [2024-11-18 12:10:17.155395] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:51.325 [2024-11-18 12:10:17.155474] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:51.325 [2024-11-18 12:10:17.155523] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:51.325 [2024-11-18 12:10:17.155545] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:51.325 [2024-11-18 12:10:17.155570] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:51.325 [2024-11-18 12:10:17.158300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:51.325 [2024-11-18 12:10:17.158371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:41:51.325 [2024-11-18 12:10:17.158465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:51.325 [2024-11-18 12:10:17.158474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:41:51.892 [2024-11-18 12:10:17.520486] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:41:51.892 [2024-11-18 12:10:17.532819] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:41:51.892 [2024-11-18 12:10:17.532975] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:41:51.892 [2024-11-18 12:10:17.533761] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:41:51.892 [2024-11-18 12:10:17.534136] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:41:52.150 12:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:52.150 12:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:41:52.150 12:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:41:52.150 12:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:52.150 12:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:41:52.150 12:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:52.150 12:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:41:52.409 [2024-11-18 12:10:18.115583] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:52.409 12:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:52.669 12:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:41:52.669 12:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 
00:41:53.238 12:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:41:53.238 12:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:53.500 12:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:41:53.500 12:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:53.759 12:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:41:53.759 12:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:41:54.018 12:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:54.584 12:10:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:41:54.584 12:10:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:54.843 12:10:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:41:54.843 12:10:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:55.103 12:10:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 
00:41:55.103 12:10:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:41:55.363 12:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:41:55.623 12:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:41:55.623 12:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:41:55.882 12:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:41:55.882 12:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:41:56.141 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:56.399 [2024-11-18 12:10:22.259712] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:56.399 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:41:56.967 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:41:56.967 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:41:57.226 12:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:41:57.226 12:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:41:57.226 12:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:41:57.226 12:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:41:57.226 12:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:41:57.226 12:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:41:59.761 12:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:41:59.761 12:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:41:59.761 12:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:41:59.761 12:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:41:59.761 12:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:41:59.761 12:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@1212 -- # return 0 00:41:59.761 12:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:41:59.761 [global] 00:41:59.761 thread=1 00:41:59.761 invalidate=1 00:41:59.761 rw=write 00:41:59.761 time_based=1 00:41:59.761 runtime=1 00:41:59.761 ioengine=libaio 00:41:59.761 direct=1 00:41:59.761 bs=4096 00:41:59.761 iodepth=1 00:41:59.761 norandommap=0 00:41:59.761 numjobs=1 00:41:59.761 00:41:59.761 verify_dump=1 00:41:59.761 verify_backlog=512 00:41:59.761 verify_state_save=0 00:41:59.761 do_verify=1 00:41:59.761 verify=crc32c-intel 00:41:59.761 [job0] 00:41:59.761 filename=/dev/nvme0n1 00:41:59.761 [job1] 00:41:59.761 filename=/dev/nvme0n2 00:41:59.761 [job2] 00:41:59.761 filename=/dev/nvme0n3 00:41:59.761 [job3] 00:41:59.761 filename=/dev/nvme0n4 00:41:59.761 Could not set queue depth (nvme0n1) 00:41:59.761 Could not set queue depth (nvme0n2) 00:41:59.761 Could not set queue depth (nvme0n3) 00:41:59.761 Could not set queue depth (nvme0n4) 00:41:59.761 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:59.761 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:59.761 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:59.761 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:59.761 fio-3.35 00:41:59.761 Starting 4 threads 00:42:01.214 00:42:01.214 job0: (groupid=0, jobs=1): err= 0: pid=3190529: Mon Nov 18 12:10:26 2024 00:42:01.214 read: IOPS=22, BW=89.9KiB/s (92.1kB/s)(92.0KiB/1023msec) 00:42:01.214 slat (nsec): min=7599, max=34945, avg=21323.22, stdev=8977.61 00:42:01.214 clat (usec): min=292, max=42082, avg=37916.14, stdev=11877.83 00:42:01.214 lat (usec): min=304, 
max=42108, avg=37937.46, stdev=11880.33 00:42:01.214 clat percentiles (usec): 00:42:01.214 | 1.00th=[ 293], 5.00th=[ 318], 10.00th=[40633], 20.00th=[41157], 00:42:01.214 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[41681], 00:42:01.214 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:42:01.214 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:42:01.214 | 99.99th=[42206] 00:42:01.214 write: IOPS=500, BW=2002KiB/s (2050kB/s)(2048KiB/1023msec); 0 zone resets 00:42:01.214 slat (nsec): min=7750, max=49786, avg=21236.70, stdev=5727.37 00:42:01.214 clat (usec): min=224, max=590, avg=266.37, stdev=29.49 00:42:01.214 lat (usec): min=233, max=599, avg=287.61, stdev=30.23 00:42:01.214 clat percentiles (usec): 00:42:01.214 | 1.00th=[ 231], 5.00th=[ 241], 10.00th=[ 245], 20.00th=[ 249], 00:42:01.214 | 30.00th=[ 253], 40.00th=[ 258], 50.00th=[ 262], 60.00th=[ 265], 00:42:01.214 | 70.00th=[ 269], 80.00th=[ 277], 90.00th=[ 293], 95.00th=[ 310], 00:42:01.214 | 99.00th=[ 412], 99.50th=[ 433], 99.90th=[ 594], 99.95th=[ 594], 00:42:01.214 | 99.99th=[ 594] 00:42:01.214 bw ( KiB/s): min= 4096, max= 4096, per=52.00%, avg=4096.00, stdev= 0.00, samples=1 00:42:01.214 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:42:01.214 lat (usec) : 250=20.00%, 500=75.89%, 750=0.19% 00:42:01.214 lat (msec) : 50=3.93% 00:42:01.214 cpu : usr=0.68%, sys=1.37%, ctx=535, majf=0, minf=1 00:42:01.214 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:01.214 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.214 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.214 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:01.214 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:01.214 job1: (groupid=0, jobs=1): err= 0: pid=3190535: Mon Nov 18 12:10:26 2024 00:42:01.214 read: IOPS=21, BW=84.6KiB/s 
(86.6kB/s)(88.0KiB/1040msec) 00:42:01.214 slat (nsec): min=12341, max=38086, avg=20996.00, stdev=9483.90 00:42:01.214 clat (usec): min=363, max=40997, avg=39119.05, stdev=8656.26 00:42:01.214 lat (usec): min=383, max=41027, avg=39140.05, stdev=8656.46 00:42:01.214 clat percentiles (usec): 00:42:01.214 | 1.00th=[ 363], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:42:01.214 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:42:01.214 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:42:01.214 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:42:01.214 | 99.99th=[41157] 00:42:01.214 write: IOPS=492, BW=1969KiB/s (2016kB/s)(2048KiB/1040msec); 0 zone resets 00:42:01.214 slat (usec): min=10, max=1687, avg=29.24, stdev=73.89 00:42:01.214 clat (usec): min=226, max=3752, avg=312.70, stdev=216.75 00:42:01.214 lat (usec): min=241, max=3788, avg=341.94, stdev=241.68 00:42:01.214 clat percentiles (usec): 00:42:01.214 | 1.00th=[ 229], 5.00th=[ 235], 10.00th=[ 237], 20.00th=[ 241], 00:42:01.214 | 30.00th=[ 245], 40.00th=[ 249], 50.00th=[ 253], 60.00th=[ 258], 00:42:01.214 | 70.00th=[ 269], 80.00th=[ 297], 90.00th=[ 474], 95.00th=[ 611], 00:42:01.214 | 99.00th=[ 1123], 99.50th=[ 1532], 99.90th=[ 3752], 99.95th=[ 3752], 00:42:01.214 | 99.99th=[ 3752] 00:42:01.214 bw ( KiB/s): min= 4096, max= 4096, per=52.00%, avg=4096.00, stdev= 0.00, samples=1 00:42:01.214 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:42:01.214 lat (usec) : 250=40.64%, 500=46.63%, 750=6.74%, 1000=0.75% 00:42:01.214 lat (msec) : 2=1.12%, 4=0.19%, 50=3.93% 00:42:01.214 cpu : usr=0.77%, sys=1.64%, ctx=535, majf=0, minf=1 00:42:01.214 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:01.214 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.214 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.214 issued rwts: total=22,512,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:42:01.214 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:01.214 job2: (groupid=0, jobs=1): err= 0: pid=3190536: Mon Nov 18 12:10:26 2024 00:42:01.214 read: IOPS=190, BW=762KiB/s (781kB/s)(780KiB/1023msec) 00:42:01.214 slat (nsec): min=7889, max=61975, avg=23968.83, stdev=11466.79 00:42:01.214 clat (usec): min=279, max=43009, avg=4366.68, stdev=12308.75 00:42:01.214 lat (usec): min=293, max=43053, avg=4390.65, stdev=12307.19 00:42:01.214 clat percentiles (usec): 00:42:01.214 | 1.00th=[ 281], 5.00th=[ 285], 10.00th=[ 285], 20.00th=[ 293], 00:42:01.214 | 30.00th=[ 302], 40.00th=[ 326], 50.00th=[ 347], 60.00th=[ 355], 00:42:01.214 | 70.00th=[ 363], 80.00th=[ 379], 90.00th=[ 562], 95.00th=[41681], 00:42:01.214 | 99.00th=[42206], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:42:01.214 | 99.99th=[43254] 00:42:01.214 write: IOPS=500, BW=2002KiB/s (2050kB/s)(2048KiB/1023msec); 0 zone resets 00:42:01.214 slat (usec): min=6, max=1654, avg=23.22, stdev=72.58 00:42:01.214 clat (usec): min=215, max=1232, avg=294.18, stdev=102.70 00:42:01.214 lat (usec): min=231, max=1928, avg=317.40, stdev=125.88 00:42:01.214 clat percentiles (usec): 00:42:01.214 | 1.00th=[ 225], 5.00th=[ 239], 10.00th=[ 243], 20.00th=[ 249], 00:42:01.214 | 30.00th=[ 255], 40.00th=[ 260], 50.00th=[ 265], 60.00th=[ 273], 00:42:01.214 | 70.00th=[ 281], 80.00th=[ 297], 90.00th=[ 363], 95.00th=[ 469], 00:42:01.214 | 99.00th=[ 635], 99.50th=[ 1090], 99.90th=[ 1237], 99.95th=[ 1237], 00:42:01.214 | 99.99th=[ 1237] 00:42:01.215 bw ( KiB/s): min= 4096, max= 4096, per=52.00%, avg=4096.00, stdev= 0.00, samples=1 00:42:01.215 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:42:01.215 lat (usec) : 250=15.28%, 500=79.21%, 750=2.26% 00:42:01.215 lat (msec) : 2=0.57%, 50=2.69% 00:42:01.215 cpu : usr=0.98%, sys=1.17%, ctx=708, majf=0, minf=1 00:42:01.215 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:01.215 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.215 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.215 issued rwts: total=195,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:01.215 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:01.215 job3: (groupid=0, jobs=1): err= 0: pid=3190537: Mon Nov 18 12:10:26 2024 00:42:01.215 read: IOPS=20, BW=82.4KiB/s (84.3kB/s)(84.0KiB/1020msec) 00:42:01.215 slat (nsec): min=12791, max=34380, avg=18704.33, stdev=7955.85 00:42:01.215 clat (usec): min=40868, max=41033, avg=40968.12, stdev=34.79 00:42:01.215 lat (usec): min=40884, max=41046, avg=40986.82, stdev=32.86 00:42:01.215 clat percentiles (usec): 00:42:01.215 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:42:01.215 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:42:01.215 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:42:01.215 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:42:01.215 | 99.99th=[41157] 00:42:01.215 write: IOPS=501, BW=2008KiB/s (2056kB/s)(2048KiB/1020msec); 0 zone resets 00:42:01.215 slat (usec): min=8, max=1680, avg=23.51, stdev=73.93 00:42:01.215 clat (usec): min=204, max=1729, avg=280.93, stdev=138.75 00:42:01.215 lat (usec): min=214, max=2058, avg=304.44, stdev=161.26 00:42:01.215 clat percentiles (usec): 00:42:01.215 | 1.00th=[ 210], 5.00th=[ 217], 10.00th=[ 219], 20.00th=[ 223], 00:42:01.215 | 30.00th=[ 227], 40.00th=[ 231], 50.00th=[ 235], 60.00th=[ 239], 00:42:01.215 | 70.00th=[ 251], 80.00th=[ 281], 90.00th=[ 453], 95.00th=[ 519], 00:42:01.215 | 99.00th=[ 947], 99.50th=[ 1254], 99.90th=[ 1729], 99.95th=[ 1729], 00:42:01.215 | 99.99th=[ 1729] 00:42:01.215 bw ( KiB/s): min= 4096, max= 4096, per=52.00%, avg=4096.00, stdev= 0.00, samples=1 00:42:01.215 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:42:01.215 lat (usec) : 250=66.23%, 500=23.08%, 750=5.63%, 1000=0.38% 
00:42:01.215 lat (msec) : 2=0.75%, 50=3.94% 00:42:01.215 cpu : usr=0.29%, sys=1.08%, ctx=535, majf=0, minf=1 00:42:01.215 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:01.215 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.215 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.215 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:01.215 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:01.215 00:42:01.215 Run status group 0 (all jobs): 00:42:01.215 READ: bw=1004KiB/s (1028kB/s), 82.4KiB/s-762KiB/s (84.3kB/s-781kB/s), io=1044KiB (1069kB), run=1020-1040msec 00:42:01.215 WRITE: bw=7877KiB/s (8066kB/s), 1969KiB/s-2008KiB/s (2016kB/s-2056kB/s), io=8192KiB (8389kB), run=1020-1040msec 00:42:01.215 00:42:01.215 Disk stats (read/write): 00:42:01.215 nvme0n1: ios=66/512, merge=0/0, ticks=992/135, in_queue=1127, util=89.18% 00:42:01.215 nvme0n2: ios=67/512, merge=0/0, ticks=742/152, in_queue=894, util=90.53% 00:42:01.215 nvme0n3: ios=250/512, merge=0/0, ticks=783/147, in_queue=930, util=93.19% 00:42:01.215 nvme0n4: ios=73/512, merge=0/0, ticks=753/131, in_queue=884, util=92.28% 00:42:01.215 12:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:42:01.215 [global] 00:42:01.215 thread=1 00:42:01.215 invalidate=1 00:42:01.215 rw=randwrite 00:42:01.215 time_based=1 00:42:01.215 runtime=1 00:42:01.215 ioengine=libaio 00:42:01.215 direct=1 00:42:01.215 bs=4096 00:42:01.215 iodepth=1 00:42:01.215 norandommap=0 00:42:01.215 numjobs=1 00:42:01.215 00:42:01.215 verify_dump=1 00:42:01.215 verify_backlog=512 00:42:01.215 verify_state_save=0 00:42:01.215 do_verify=1 00:42:01.215 verify=crc32c-intel 00:42:01.215 [job0] 00:42:01.215 filename=/dev/nvme0n1 00:42:01.215 [job1] 00:42:01.215 filename=/dev/nvme0n2 
00:42:01.215 [job2] 00:42:01.215 filename=/dev/nvme0n3 00:42:01.215 [job3] 00:42:01.215 filename=/dev/nvme0n4 00:42:01.215 Could not set queue depth (nvme0n1) 00:42:01.215 Could not set queue depth (nvme0n2) 00:42:01.215 Could not set queue depth (nvme0n3) 00:42:01.215 Could not set queue depth (nvme0n4) 00:42:01.215 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:42:01.215 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:42:01.215 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:42:01.215 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:42:01.215 fio-3.35 00:42:01.215 Starting 4 threads 00:42:02.591 00:42:02.591 job0: (groupid=0, jobs=1): err= 0: pid=3190763: Mon Nov 18 12:10:28 2024 00:42:02.591 read: IOPS=719, BW=2877KiB/s (2946kB/s)(2880KiB/1001msec) 00:42:02.591 slat (nsec): min=5469, max=52691, avg=14119.18, stdev=7387.66 00:42:02.591 clat (usec): min=264, max=41126, avg=920.44, stdev=4505.17 00:42:02.591 lat (usec): min=270, max=41178, avg=934.56, stdev=4505.59 00:42:02.591 clat percentiles (usec): 00:42:02.591 | 1.00th=[ 273], 5.00th=[ 293], 10.00th=[ 322], 20.00th=[ 355], 00:42:02.591 | 30.00th=[ 367], 40.00th=[ 379], 50.00th=[ 392], 60.00th=[ 404], 00:42:02.591 | 70.00th=[ 424], 80.00th=[ 461], 90.00th=[ 529], 95.00th=[ 586], 00:42:02.591 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:42:02.591 | 99.99th=[41157] 00:42:02.591 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:42:02.591 slat (nsec): min=7700, max=56885, avg=17708.22, stdev=9313.43 00:42:02.591 clat (usec): min=189, max=833, avg=294.09, stdev=63.94 00:42:02.591 lat (usec): min=197, max=841, avg=311.80, stdev=66.01 00:42:02.591 clat percentiles (usec): 00:42:02.591 | 1.00th=[ 196], 5.00th=[ 208], 
10.00th=[ 217], 20.00th=[ 239], 00:42:02.591 | 30.00th=[ 262], 40.00th=[ 273], 50.00th=[ 285], 60.00th=[ 302], 00:42:02.591 | 70.00th=[ 314], 80.00th=[ 338], 90.00th=[ 383], 95.00th=[ 416], 00:42:02.591 | 99.00th=[ 465], 99.50th=[ 482], 99.90th=[ 537], 99.95th=[ 832], 00:42:02.591 | 99.99th=[ 832] 00:42:02.591 bw ( KiB/s): min= 4096, max= 4096, per=20.11%, avg=4096.00, stdev= 0.00, samples=1 00:42:02.591 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:42:02.591 lat (usec) : 250=13.59%, 500=80.39%, 750=5.16%, 1000=0.17% 00:42:02.591 lat (msec) : 2=0.11%, 4=0.06%, 50=0.52% 00:42:02.591 cpu : usr=2.40%, sys=3.20%, ctx=1745, majf=0, minf=1 00:42:02.591 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:02.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:02.591 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:02.591 issued rwts: total=720,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:02.591 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:02.591 job1: (groupid=0, jobs=1): err= 0: pid=3190764: Mon Nov 18 12:10:28 2024 00:42:02.591 read: IOPS=753, BW=3013KiB/s (3085kB/s)(3016KiB/1001msec) 00:42:02.591 slat (nsec): min=5916, max=64685, avg=14819.82, stdev=9660.29 00:42:02.591 clat (usec): min=261, max=41534, avg=813.19, stdev=3606.02 00:42:02.591 lat (usec): min=268, max=41568, avg=828.01, stdev=3606.84 00:42:02.591 clat percentiles (usec): 00:42:02.591 | 1.00th=[ 265], 5.00th=[ 334], 10.00th=[ 371], 20.00th=[ 396], 00:42:02.591 | 30.00th=[ 429], 40.00th=[ 449], 50.00th=[ 478], 60.00th=[ 506], 00:42:02.591 | 70.00th=[ 537], 80.00th=[ 594], 90.00th=[ 652], 95.00th=[ 693], 00:42:02.591 | 99.00th=[ 840], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:42:02.591 | 99.99th=[41681] 00:42:02.591 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:42:02.591 slat (nsec): min=6781, max=73169, avg=20049.77, stdev=11537.49 
00:42:02.591 clat (usec): min=211, max=642, avg=338.75, stdev=61.71 00:42:02.591 lat (usec): min=226, max=665, avg=358.79, stdev=60.80 00:42:02.591 clat percentiles (usec): 00:42:02.591 | 1.00th=[ 227], 5.00th=[ 253], 10.00th=[ 265], 20.00th=[ 285], 00:42:02.591 | 30.00th=[ 302], 40.00th=[ 318], 50.00th=[ 330], 60.00th=[ 347], 00:42:02.591 | 70.00th=[ 371], 80.00th=[ 400], 90.00th=[ 424], 95.00th=[ 445], 00:42:02.591 | 99.00th=[ 482], 99.50th=[ 486], 99.90th=[ 586], 99.95th=[ 644], 00:42:02.591 | 99.99th=[ 644] 00:42:02.591 bw ( KiB/s): min= 4912, max= 4912, per=24.11%, avg=4912.00, stdev= 0.00, samples=1 00:42:02.591 iops : min= 1228, max= 1228, avg=1228.00, stdev= 0.00, samples=1 00:42:02.591 lat (usec) : 250=2.53%, 500=79.36%, 750=17.04%, 1000=0.73% 00:42:02.591 lat (msec) : 50=0.34% 00:42:02.591 cpu : usr=2.00%, sys=4.40%, ctx=1780, majf=0, minf=1 00:42:02.591 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:02.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:02.591 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:02.591 issued rwts: total=754,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:02.591 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:02.591 job2: (groupid=0, jobs=1): err= 0: pid=3190765: Mon Nov 18 12:10:28 2024 00:42:02.592 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:42:02.592 slat (nsec): min=6336, max=60676, avg=15148.72, stdev=8487.16 00:42:02.592 clat (usec): min=230, max=892, avg=410.02, stdev=78.48 00:42:02.592 lat (usec): min=237, max=902, avg=425.17, stdev=79.28 00:42:02.592 clat percentiles (usec): 00:42:02.592 | 1.00th=[ 310], 5.00th=[ 330], 10.00th=[ 338], 20.00th=[ 351], 00:42:02.592 | 30.00th=[ 359], 40.00th=[ 367], 50.00th=[ 383], 60.00th=[ 408], 00:42:02.592 | 70.00th=[ 441], 80.00th=[ 465], 90.00th=[ 515], 95.00th=[ 562], 00:42:02.592 | 99.00th=[ 644], 99.50th=[ 742], 99.90th=[ 865], 99.95th=[ 889], 
00:42:02.592 | 99.99th=[ 889] 00:42:02.592 write: IOPS=1512, BW=6050KiB/s (6195kB/s)(6056KiB/1001msec); 0 zone resets 00:42:02.592 slat (nsec): min=8763, max=77101, avg=22686.94, stdev=11749.43 00:42:02.592 clat (usec): min=205, max=616, avg=341.45, stdev=65.01 00:42:02.592 lat (usec): min=216, max=648, avg=364.14, stdev=67.14 00:42:02.592 clat percentiles (usec): 00:42:02.592 | 1.00th=[ 231], 5.00th=[ 258], 10.00th=[ 269], 20.00th=[ 289], 00:42:02.592 | 30.00th=[ 302], 40.00th=[ 314], 50.00th=[ 326], 60.00th=[ 338], 00:42:02.592 | 70.00th=[ 367], 80.00th=[ 400], 90.00th=[ 433], 95.00th=[ 469], 00:42:02.592 | 99.00th=[ 519], 99.50th=[ 537], 99.90th=[ 578], 99.95th=[ 619], 00:42:02.592 | 99.99th=[ 619] 00:42:02.592 bw ( KiB/s): min= 6096, max= 6096, per=29.92%, avg=6096.00, stdev= 0.00, samples=1 00:42:02.592 iops : min= 1524, max= 1524, avg=1524.00, stdev= 0.00, samples=1 00:42:02.592 lat (usec) : 250=1.81%, 500=92.32%, 750=5.67%, 1000=0.20% 00:42:02.592 cpu : usr=3.10%, sys=6.80%, ctx=2539, majf=0, minf=1 00:42:02.592 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:02.592 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:02.592 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:02.592 issued rwts: total=1024,1514,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:02.592 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:02.592 job3: (groupid=0, jobs=1): err= 0: pid=3190766: Mon Nov 18 12:10:28 2024 00:42:02.592 read: IOPS=1039, BW=4160KiB/s (4260kB/s)(4164KiB/1001msec) 00:42:02.592 slat (nsec): min=4583, max=80624, avg=20876.47, stdev=12808.06 00:42:02.592 clat (usec): min=270, max=40974, avg=488.98, stdev=1769.58 00:42:02.592 lat (usec): min=277, max=40985, avg=509.86, stdev=1769.17 00:42:02.592 clat percentiles (usec): 00:42:02.592 | 1.00th=[ 322], 5.00th=[ 338], 10.00th=[ 351], 20.00th=[ 363], 00:42:02.592 | 30.00th=[ 379], 40.00th=[ 396], 50.00th=[ 408], 60.00th=[ 416], 
00:42:02.592 | 70.00th=[ 429], 80.00th=[ 441], 90.00th=[ 478], 95.00th=[ 515], 00:42:02.592 | 99.00th=[ 652], 99.50th=[ 725], 99.90th=[40633], 99.95th=[41157], 00:42:02.592 | 99.99th=[41157] 00:42:02.592 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:42:02.592 slat (nsec): min=7189, max=73537, avg=14686.88, stdev=6103.16 00:42:02.592 clat (usec): min=205, max=1327, avg=283.22, stdev=70.33 00:42:02.592 lat (usec): min=221, max=1367, avg=297.91, stdev=71.04 00:42:02.592 clat percentiles (usec): 00:42:02.592 | 1.00th=[ 227], 5.00th=[ 233], 10.00th=[ 239], 20.00th=[ 245], 00:42:02.592 | 30.00th=[ 251], 40.00th=[ 255], 50.00th=[ 260], 60.00th=[ 269], 00:42:02.592 | 70.00th=[ 277], 80.00th=[ 302], 90.00th=[ 388], 95.00th=[ 404], 00:42:02.592 | 99.00th=[ 502], 99.50th=[ 529], 99.90th=[ 1074], 99.95th=[ 1336], 00:42:02.592 | 99.99th=[ 1336] 00:42:02.592 bw ( KiB/s): min= 4688, max= 4688, per=23.01%, avg=4688.00, stdev= 0.00, samples=1 00:42:02.592 iops : min= 1172, max= 1172, avg=1172.00, stdev= 0.00, samples=1 00:42:02.592 lat (usec) : 250=17.58%, 500=79.08%, 750=3.03%, 1000=0.08% 00:42:02.592 lat (msec) : 2=0.16%, 50=0.08% 00:42:02.592 cpu : usr=1.40%, sys=5.50%, ctx=2577, majf=0, minf=1 00:42:02.592 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:02.592 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:02.592 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:02.592 issued rwts: total=1041,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:02.592 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:02.592 00:42:02.592 Run status group 0 (all jobs): 00:42:02.592 READ: bw=13.8MiB/s (14.5MB/s), 2877KiB/s-4160KiB/s (2946kB/s-4260kB/s), io=13.8MiB (14.5MB), run=1001-1001msec 00:42:02.592 WRITE: bw=19.9MiB/s (20.9MB/s), 4092KiB/s-6138KiB/s (4190kB/s-6285kB/s), io=19.9MiB (20.9MB), run=1001-1001msec 00:42:02.592 00:42:02.592 Disk stats (read/write): 
00:42:02.592 nvme0n1: ios=561/1005, merge=0/0, ticks=892/271, in_queue=1163, util=86.07% 00:42:02.592 nvme0n2: ios=614/1024, merge=0/0, ticks=891/325, in_queue=1216, util=90.05% 00:42:02.592 nvme0n3: ios=1081/1134, merge=0/0, ticks=988/348, in_queue=1336, util=93.76% 00:42:02.592 nvme0n4: ios=1081/1050, merge=0/0, ticks=540/303, in_queue=843, util=95.81% 00:42:02.592 12:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:42:02.592 [global] 00:42:02.592 thread=1 00:42:02.592 invalidate=1 00:42:02.592 rw=write 00:42:02.592 time_based=1 00:42:02.592 runtime=1 00:42:02.592 ioengine=libaio 00:42:02.592 direct=1 00:42:02.592 bs=4096 00:42:02.592 iodepth=128 00:42:02.592 norandommap=0 00:42:02.592 numjobs=1 00:42:02.592 00:42:02.592 verify_dump=1 00:42:02.592 verify_backlog=512 00:42:02.592 verify_state_save=0 00:42:02.592 do_verify=1 00:42:02.592 verify=crc32c-intel 00:42:02.592 [job0] 00:42:02.592 filename=/dev/nvme0n1 00:42:02.592 [job1] 00:42:02.592 filename=/dev/nvme0n2 00:42:02.592 [job2] 00:42:02.592 filename=/dev/nvme0n3 00:42:02.592 [job3] 00:42:02.592 filename=/dev/nvme0n4 00:42:02.592 Could not set queue depth (nvme0n1) 00:42:02.592 Could not set queue depth (nvme0n2) 00:42:02.592 Could not set queue depth (nvme0n3) 00:42:02.592 Could not set queue depth (nvme0n4) 00:42:02.592 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:42:02.592 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:42:02.592 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:42:02.592 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:42:02.592 fio-3.35 00:42:02.592 Starting 4 threads 00:42:03.967 00:42:03.967 job0: 
(groupid=0, jobs=1): err= 0: pid=3191064: Mon Nov 18 12:10:29 2024 00:42:03.967 read: IOPS=2447, BW=9792KiB/s (10.0MB/s)(9880KiB/1009msec) 00:42:03.967 slat (usec): min=2, max=20977, avg=193.13, stdev=1269.60 00:42:03.967 clat (usec): min=1093, max=92169, avg=20255.45, stdev=13391.54 00:42:03.967 lat (usec): min=8941, max=92174, avg=20448.58, stdev=13494.16 00:42:03.967 clat percentiles (usec): 00:42:03.967 | 1.00th=[ 9241], 5.00th=[ 9765], 10.00th=[12518], 20.00th=[13173], 00:42:03.967 | 30.00th=[13435], 40.00th=[13698], 50.00th=[14091], 60.00th=[14746], 00:42:03.967 | 70.00th=[16581], 80.00th=[28705], 90.00th=[39584], 95.00th=[51643], 00:42:03.967 | 99.00th=[66323], 99.50th=[66323], 99.90th=[91751], 99.95th=[91751], 00:42:03.967 | 99.99th=[91751] 00:42:03.967 write: IOPS=2537, BW=9.91MiB/s (10.4MB/s)(10.0MiB/1009msec); 0 zone resets 00:42:03.967 slat (usec): min=3, max=17416, avg=202.90, stdev=1359.19 00:42:03.967 clat (usec): min=9034, max=80306, avg=30026.67, stdev=18153.03 00:42:03.967 lat (usec): min=9039, max=80311, avg=30229.57, stdev=18213.92 00:42:03.967 clat percentiles (usec): 00:42:03.967 | 1.00th=[ 9503], 5.00th=[13042], 10.00th=[13304], 20.00th=[13566], 00:42:03.967 | 30.00th=[13829], 40.00th=[14091], 50.00th=[20579], 60.00th=[33817], 00:42:03.967 | 70.00th=[42730], 80.00th=[50594], 90.00th=[53740], 95.00th=[57934], 00:42:03.967 | 99.00th=[80217], 99.50th=[80217], 99.90th=[80217], 99.95th=[80217], 00:42:03.967 | 99.99th=[80217] 00:42:03.967 bw ( KiB/s): min= 8192, max=12288, per=21.28%, avg=10240.00, stdev=2896.31, samples=2 00:42:03.967 iops : min= 2048, max= 3072, avg=2560.00, stdev=724.08, samples=2 00:42:03.967 lat (msec) : 2=0.02%, 10=4.14%, 20=56.26%, 50=25.03%, 100=14.55% 00:42:03.967 cpu : usr=0.99%, sys=2.08%, ctx=184, majf=0, minf=1 00:42:03.967 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:42:03.967 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:03.967 complete : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:42:03.967 issued rwts: total=2470,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:03.967 latency : target=0, window=0, percentile=100.00%, depth=128 00:42:03.967 job1: (groupid=0, jobs=1): err= 0: pid=3191096: Mon Nov 18 12:10:29 2024 00:42:03.967 read: IOPS=5084, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1007msec) 00:42:03.967 slat (usec): min=3, max=5922, avg=92.58, stdev=612.02 00:42:03.967 clat (usec): min=8317, max=20700, avg=11996.79, stdev=2018.37 00:42:03.967 lat (usec): min=8323, max=20705, avg=12089.37, stdev=2065.16 00:42:03.967 clat percentiles (usec): 00:42:03.967 | 1.00th=[ 8717], 5.00th=[ 9634], 10.00th=[10028], 20.00th=[10290], 00:42:03.967 | 30.00th=[10683], 40.00th=[10945], 50.00th=[11469], 60.00th=[11994], 00:42:03.967 | 70.00th=[12518], 80.00th=[13698], 90.00th=[15008], 95.00th=[16188], 00:42:03.967 | 99.00th=[17433], 99.50th=[17695], 99.90th=[18220], 99.95th=[18482], 00:42:03.967 | 99.99th=[20579] 00:42:03.967 write: IOPS=5289, BW=20.7MiB/s (21.7MB/s)(20.8MiB/1007msec); 0 zone resets 00:42:03.967 slat (usec): min=3, max=10273, avg=93.32, stdev=596.00 00:42:03.967 clat (usec): min=5869, max=37899, avg=12279.19, stdev=2402.82 00:42:03.967 lat (usec): min=5883, max=37903, avg=12372.50, stdev=2450.01 00:42:03.967 clat percentiles (usec): 00:42:03.967 | 1.00th=[ 7111], 5.00th=[ 9896], 10.00th=[10683], 20.00th=[11076], 00:42:03.967 | 30.00th=[11207], 40.00th=[11207], 50.00th=[11731], 60.00th=[12125], 00:42:03.967 | 70.00th=[12649], 80.00th=[13698], 90.00th=[14091], 95.00th=[16712], 00:42:03.967 | 99.00th=[20317], 99.50th=[26346], 99.90th=[32900], 99.95th=[32900], 00:42:03.967 | 99.99th=[38011] 00:42:03.967 bw ( KiB/s): min=19984, max=21616, per=43.23%, avg=20800.00, stdev=1154.00, samples=2 00:42:03.967 iops : min= 4996, max= 5404, avg=5200.00, stdev=288.50, samples=2 00:42:03.967 lat (msec) : 10=7.74%, 20=91.63%, 50=0.62% 00:42:03.967 cpu : usr=3.68%, sys=6.16%, ctx=372, majf=0, minf=1 00:42:03.967 IO 
depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:42:03.967 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:03.967 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:42:03.967 issued rwts: total=5120,5327,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:03.967 latency : target=0, window=0, percentile=100.00%, depth=128 00:42:03.967 job2: (groupid=0, jobs=1): err= 0: pid=3191110: Mon Nov 18 12:10:29 2024 00:42:03.967 read: IOPS=1519, BW=6077KiB/s (6223kB/s)(6144KiB/1011msec) 00:42:03.967 slat (usec): min=2, max=22446, avg=252.78, stdev=1615.16 00:42:03.967 clat (usec): min=9726, max=61941, avg=32616.54, stdev=12560.28 00:42:03.967 lat (usec): min=9731, max=63606, avg=32869.31, stdev=12678.22 00:42:03.967 clat percentiles (usec): 00:42:03.967 | 1.00th=[ 9765], 5.00th=[15270], 10.00th=[15401], 20.00th=[16450], 00:42:03.967 | 30.00th=[20317], 40.00th=[31851], 50.00th=[34866], 60.00th=[38011], 00:42:03.967 | 70.00th=[40109], 80.00th=[43254], 90.00th=[48497], 95.00th=[51643], 00:42:03.967 | 99.00th=[54264], 99.50th=[60556], 99.90th=[62129], 99.95th=[62129], 00:42:03.967 | 99.99th=[62129] 00:42:03.967 write: IOPS=1604, BW=6417KiB/s (6571kB/s)(6488KiB/1011msec); 0 zone resets 00:42:03.967 slat (usec): min=3, max=22818, avg=369.71, stdev=1827.12 00:42:03.967 clat (usec): min=2495, max=94238, avg=47554.81, stdev=22834.48 00:42:03.967 lat (usec): min=10451, max=94261, avg=47924.52, stdev=23009.78 00:42:03.967 clat percentiles (usec): 00:42:03.967 | 1.00th=[10683], 5.00th=[16319], 10.00th=[23987], 20.00th=[27919], 00:42:03.967 | 30.00th=[30016], 40.00th=[34866], 50.00th=[38536], 60.00th=[46400], 00:42:03.967 | 70.00th=[60031], 80.00th=[77071], 90.00th=[82314], 95.00th=[84411], 00:42:03.967 | 99.00th=[88605], 99.50th=[90702], 99.90th=[93848], 99.95th=[93848], 00:42:03.967 | 99.99th=[93848] 00:42:03.967 bw ( KiB/s): min= 4096, max= 8192, per=12.77%, avg=6144.00, stdev=2896.31, samples=2 00:42:03.967 iops : 
min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:42:03.967 lat (msec) : 4=0.03%, 10=0.85%, 20=13.20%, 50=63.36%, 100=22.55% 00:42:03.967 cpu : usr=0.59%, sys=1.68%, ctx=142, majf=0, minf=1 00:42:03.967 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:42:03.967 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:03.967 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:42:03.967 issued rwts: total=1536,1622,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:03.967 latency : target=0, window=0, percentile=100.00%, depth=128 00:42:03.967 job3: (groupid=0, jobs=1): err= 0: pid=3191113: Mon Nov 18 12:10:29 2024 00:42:03.967 read: IOPS=2539, BW=9.92MiB/s (10.4MB/s)(10.0MiB/1008msec) 00:42:03.967 slat (usec): min=3, max=23011, avg=198.34, stdev=1448.58 00:42:03.967 clat (usec): min=11914, max=60991, avg=25953.72, stdev=7808.30 00:42:03.967 lat (usec): min=11920, max=61005, avg=26152.05, stdev=7934.13 00:42:03.967 clat percentiles (usec): 00:42:03.967 | 1.00th=[12780], 5.00th=[17171], 10.00th=[18482], 20.00th=[19006], 00:42:03.967 | 30.00th=[19530], 40.00th=[20841], 50.00th=[24511], 60.00th=[26346], 00:42:03.967 | 70.00th=[30540], 80.00th=[33817], 90.00th=[37487], 95.00th=[38536], 00:42:03.967 | 99.00th=[44303], 99.50th=[47973], 99.90th=[53740], 99.95th=[55313], 00:42:03.967 | 99.99th=[61080] 00:42:03.967 write: IOPS=2631, BW=10.3MiB/s (10.8MB/s)(10.4MiB/1008msec); 0 zone resets 00:42:03.967 slat (usec): min=4, max=15800, avg=182.45, stdev=1344.98 00:42:03.967 clat (usec): min=672, max=46665, avg=22995.35, stdev=4568.44 00:42:03.967 lat (usec): min=7863, max=46691, avg=23177.80, stdev=4749.92 00:42:03.967 clat percentiles (usec): 00:42:03.967 | 1.00th=[ 9241], 5.00th=[17171], 10.00th=[17695], 20.00th=[19268], 00:42:03.967 | 30.00th=[21365], 40.00th=[22152], 50.00th=[22938], 60.00th=[23987], 00:42:03.967 | 70.00th=[25560], 80.00th=[26870], 90.00th=[27657], 95.00th=[30278], 00:42:03.967 | 
99.00th=[32900], 99.50th=[36439], 99.90th=[40109], 99.95th=[42206], 00:42:03.967 | 99.99th=[46924] 00:42:03.967 bw ( KiB/s): min= 8504, max=12024, per=21.33%, avg=10264.00, stdev=2489.02, samples=2 00:42:03.967 iops : min= 2126, max= 3006, avg=2566.00, stdev=622.25, samples=2 00:42:03.967 lat (usec) : 750=0.02% 00:42:03.967 lat (msec) : 10=1.21%, 20=27.64%, 50=71.03%, 100=0.10% 00:42:03.967 cpu : usr=1.89%, sys=3.08%, ctx=118, majf=0, minf=2 00:42:03.967 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:42:03.967 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:03.967 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:42:03.967 issued rwts: total=2560,2653,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:03.967 latency : target=0, window=0, percentile=100.00%, depth=128 00:42:03.967 00:42:03.968 Run status group 0 (all jobs): 00:42:03.968 READ: bw=45.2MiB/s (47.3MB/s), 6077KiB/s-19.9MiB/s (6223kB/s-20.8MB/s), io=45.6MiB (47.9MB), run=1007-1011msec 00:42:03.968 WRITE: bw=47.0MiB/s (49.3MB/s), 6417KiB/s-20.7MiB/s (6571kB/s-21.7MB/s), io=47.5MiB (49.8MB), run=1007-1011msec 00:42:03.968 00:42:03.968 Disk stats (read/write): 00:42:03.968 nvme0n1: ios=2091/2182, merge=0/0, ticks=13766/16145, in_queue=29911, util=97.49% 00:42:03.968 nvme0n2: ios=4121/4359, merge=0/0, ticks=24619/25218, in_queue=49837, util=97.33% 00:42:03.968 nvme0n3: ios=1082/1367, merge=0/0, ticks=13851/19921, in_queue=33772, util=98.49% 00:42:03.968 nvme0n4: ios=2021/2048, merge=0/0, ticks=27211/23070, in_queue=50281, util=97.92% 00:42:03.968 12:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:42:03.968 [global] 00:42:03.968 thread=1 00:42:03.968 invalidate=1 00:42:03.968 rw=randwrite 00:42:03.968 time_based=1 00:42:03.968 runtime=1 00:42:03.968 ioengine=libaio 00:42:03.968 
direct=1 00:42:03.968 bs=4096 00:42:03.968 iodepth=128 00:42:03.968 norandommap=0 00:42:03.968 numjobs=1 00:42:03.968 00:42:03.968 verify_dump=1 00:42:03.968 verify_backlog=512 00:42:03.968 verify_state_save=0 00:42:03.968 do_verify=1 00:42:03.968 verify=crc32c-intel 00:42:03.968 [job0] 00:42:03.968 filename=/dev/nvme0n1 00:42:03.968 [job1] 00:42:03.968 filename=/dev/nvme0n2 00:42:03.968 [job2] 00:42:03.968 filename=/dev/nvme0n3 00:42:03.968 [job3] 00:42:03.968 filename=/dev/nvme0n4 00:42:03.968 Could not set queue depth (nvme0n1) 00:42:03.968 Could not set queue depth (nvme0n2) 00:42:03.968 Could not set queue depth (nvme0n3) 00:42:03.968 Could not set queue depth (nvme0n4) 00:42:03.968 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:42:03.968 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:42:03.968 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:42:03.968 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:42:03.968 fio-3.35 00:42:03.968 Starting 4 threads 00:42:05.345 00:42:05.345 job0: (groupid=0, jobs=1): err= 0: pid=3191344: Mon Nov 18 12:10:31 2024 00:42:05.345 read: IOPS=2203, BW=8815KiB/s (9026kB/s)(9220KiB/1046msec) 00:42:05.345 slat (usec): min=2, max=9073, avg=176.63, stdev=950.30 00:42:05.345 clat (usec): min=11301, max=69068, avg=25112.12, stdev=11525.16 00:42:05.345 lat (usec): min=12556, max=70308, avg=25288.76, stdev=11533.08 00:42:05.345 clat percentiles (usec): 00:42:05.345 | 1.00th=[13435], 5.00th=[14746], 10.00th=[15139], 20.00th=[16057], 00:42:05.345 | 30.00th=[16581], 40.00th=[19268], 50.00th=[23200], 60.00th=[25822], 00:42:05.345 | 70.00th=[28181], 80.00th=[30016], 90.00th=[34866], 95.00th=[54789], 00:42:05.345 | 99.00th=[68682], 99.50th=[68682], 99.90th=[68682], 99.95th=[68682], 
00:42:05.345 | 99.99th=[68682] 00:42:05.345 write: IOPS=2447, BW=9790KiB/s (10.0MB/s)(10.0MiB/1046msec); 0 zone resets 00:42:05.345 slat (usec): min=3, max=32759, avg=225.80, stdev=1414.89 00:42:05.345 clat (usec): min=10522, max=99982, avg=28919.25, stdev=16982.24 00:42:05.345 lat (msec): min=10, max=100, avg=29.15, stdev=17.11 00:42:05.345 clat percentiles (msec): 00:42:05.345 | 1.00th=[ 11], 5.00th=[ 14], 10.00th=[ 15], 20.00th=[ 16], 00:42:05.345 | 30.00th=[ 18], 40.00th=[ 21], 50.00th=[ 24], 60.00th=[ 25], 00:42:05.345 | 70.00th=[ 33], 80.00th=[ 39], 90.00th=[ 51], 95.00th=[ 68], 00:42:05.345 | 99.00th=[ 84], 99.50th=[ 84], 99.90th=[ 100], 99.95th=[ 100], 00:42:05.345 | 99.99th=[ 101] 00:42:05.346 bw ( KiB/s): min= 8960, max=11543, per=19.54%, avg=10251.50, stdev=1826.46, samples=2 00:42:05.346 iops : min= 2240, max= 2885, avg=2562.50, stdev=456.08, samples=2 00:42:05.346 lat (msec) : 20=36.30%, 50=55.15%, 100=8.55% 00:42:05.346 cpu : usr=1.91%, sys=3.35%, ctx=274, majf=0, minf=1 00:42:05.346 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:42:05.346 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:05.346 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:42:05.346 issued rwts: total=2305,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:05.346 latency : target=0, window=0, percentile=100.00%, depth=128 00:42:05.346 job1: (groupid=0, jobs=1): err= 0: pid=3191345: Mon Nov 18 12:10:31 2024 00:42:05.346 read: IOPS=3061, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1006msec) 00:42:05.346 slat (usec): min=2, max=17317, avg=149.23, stdev=1106.54 00:42:05.346 clat (usec): min=3230, max=73334, avg=18472.00, stdev=7865.90 00:42:05.346 lat (usec): min=3709, max=73340, avg=18621.23, stdev=7936.36 00:42:05.346 clat percentiles (usec): 00:42:05.346 | 1.00th=[ 9765], 5.00th=[10683], 10.00th=[10945], 20.00th=[12387], 00:42:05.346 | 30.00th=[13173], 40.00th=[14353], 50.00th=[16712], 60.00th=[18744], 00:42:05.346 
| 70.00th=[21627], 80.00th=[24249], 90.00th=[28181], 95.00th=[30016], 00:42:05.346 | 99.00th=[38011], 99.50th=[69731], 99.90th=[72877], 99.95th=[72877], 00:42:05.346 | 99.99th=[72877] 00:42:05.346 write: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec); 0 zone resets 00:42:05.346 slat (usec): min=2, max=20887, avg=141.80, stdev=850.44 00:42:05.346 clat (usec): min=2571, max=73337, avg=19726.30, stdev=9334.89 00:42:05.346 lat (usec): min=2577, max=73344, avg=19868.10, stdev=9407.86 00:42:05.346 clat percentiles (usec): 00:42:05.346 | 1.00th=[ 4817], 5.00th=[10683], 10.00th=[11469], 20.00th=[12256], 00:42:05.346 | 30.00th=[13960], 40.00th=[15795], 50.00th=[16319], 60.00th=[18744], 00:42:05.346 | 70.00th=[24249], 80.00th=[25560], 90.00th=[31065], 95.00th=[40633], 00:42:05.346 | 99.00th=[54789], 99.50th=[58459], 99.90th=[67634], 99.95th=[67634], 00:42:05.346 | 99.99th=[72877] 00:42:05.346 bw ( KiB/s): min=11328, max=16384, per=26.41%, avg=13856.00, stdev=3575.13, samples=2 00:42:05.346 iops : min= 2832, max= 4096, avg=3464.00, stdev=893.78, samples=2 00:42:05.346 lat (msec) : 4=0.47%, 10=2.51%, 20=62.91%, 50=33.19%, 100=0.93% 00:42:05.346 cpu : usr=2.89%, sys=4.78%, ctx=327, majf=0, minf=1 00:42:05.346 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:42:05.346 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:05.346 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:42:05.346 issued rwts: total=3080,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:05.346 latency : target=0, window=0, percentile=100.00%, depth=128 00:42:05.346 job2: (groupid=0, jobs=1): err= 0: pid=3191346: Mon Nov 18 12:10:31 2024 00:42:05.346 read: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec) 00:42:05.346 slat (usec): min=2, max=10406, avg=141.64, stdev=777.98 00:42:05.346 clat (usec): min=10781, max=44341, avg=19042.49, stdev=7341.33 00:42:05.346 lat (usec): min=10790, max=44345, avg=19184.13, stdev=7367.65 
00:42:05.346 clat percentiles (usec): 00:42:05.346 | 1.00th=[11338], 5.00th=[12256], 10.00th=[13304], 20.00th=[14091], 00:42:05.346 | 30.00th=[14615], 40.00th=[15139], 50.00th=[16319], 60.00th=[17957], 00:42:05.346 | 70.00th=[19530], 80.00th=[23462], 90.00th=[28443], 95.00th=[37487], 00:42:05.346 | 99.00th=[44303], 99.50th=[44303], 99.90th=[44303], 99.95th=[44303], 00:42:05.346 | 99.99th=[44303] 00:42:05.346 write: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec); 0 zone resets 00:42:05.346 slat (usec): min=3, max=26005, avg=151.82, stdev=877.66 00:42:05.346 clat (usec): min=303, max=58898, avg=19017.21, stdev=8666.64 00:42:05.346 lat (usec): min=4239, max=61126, avg=19169.03, stdev=8719.57 00:42:05.346 clat percentiles (usec): 00:42:05.346 | 1.00th=[ 5145], 5.00th=[12387], 10.00th=[13960], 20.00th=[14615], 00:42:05.346 | 30.00th=[15008], 40.00th=[15533], 50.00th=[15795], 60.00th=[16057], 00:42:05.346 | 70.00th=[17171], 80.00th=[19792], 90.00th=[34341], 95.00th=[38536], 00:42:05.346 | 99.00th=[51643], 99.50th=[55313], 99.90th=[55313], 99.95th=[55313], 00:42:05.346 | 99.99th=[58983] 00:42:05.346 bw ( KiB/s): min=13056, max=14568, per=26.33%, avg=13812.00, stdev=1069.15, samples=2 00:42:05.346 iops : min= 3264, max= 3642, avg=3453.00, stdev=267.29, samples=2 00:42:05.346 lat (usec) : 500=0.02% 00:42:05.346 lat (msec) : 10=1.05%, 20=74.81%, 50=23.16%, 100=0.96% 00:42:05.346 cpu : usr=2.99%, sys=4.29%, ctx=428, majf=0, minf=2 00:42:05.346 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:42:05.346 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:05.346 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:42:05.346 issued rwts: total=3072,3581,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:05.346 latency : target=0, window=0, percentile=100.00%, depth=128 00:42:05.346 job3: (groupid=0, jobs=1): err= 0: pid=3191347: Mon Nov 18 12:10:31 2024 00:42:05.346 read: IOPS=3566, BW=13.9MiB/s 
(14.6MB/s)(14.0MiB/1005msec) 00:42:05.346 slat (usec): min=2, max=9830, avg=128.52, stdev=718.04 00:42:05.346 clat (usec): min=6962, max=28601, avg=16922.39, stdev=2694.55 00:42:05.346 lat (usec): min=6968, max=28608, avg=17050.90, stdev=2696.17 00:42:05.346 clat percentiles (usec): 00:42:05.346 | 1.00th=[11863], 5.00th=[13435], 10.00th=[14615], 20.00th=[15008], 00:42:05.346 | 30.00th=[15664], 40.00th=[15926], 50.00th=[16450], 60.00th=[16712], 00:42:05.346 | 70.00th=[17433], 80.00th=[18744], 90.00th=[20317], 95.00th=[22152], 00:42:05.346 | 99.00th=[25822], 99.50th=[28705], 99.90th=[28705], 99.95th=[28705], 00:42:05.346 | 99.99th=[28705] 00:42:05.346 write: IOPS=3972, BW=15.5MiB/s (16.3MB/s)(15.6MiB/1005msec); 0 zone resets 00:42:05.346 slat (usec): min=3, max=29235, avg=128.49, stdev=824.29 00:42:05.346 clat (usec): min=496, max=45986, avg=16626.85, stdev=4941.25 00:42:05.346 lat (usec): min=882, max=45991, avg=16755.34, stdev=4930.72 00:42:05.346 clat percentiles (usec): 00:42:05.346 | 1.00th=[ 4752], 5.00th=[12256], 10.00th=[13304], 20.00th=[14615], 00:42:05.346 | 30.00th=[15795], 40.00th=[15926], 50.00th=[16188], 60.00th=[16319], 00:42:05.346 | 70.00th=[16909], 80.00th=[17433], 90.00th=[18744], 95.00th=[22414], 00:42:05.346 | 99.00th=[42206], 99.50th=[45876], 99.90th=[45876], 99.95th=[45876], 00:42:05.346 | 99.99th=[45876] 00:42:05.346 bw ( KiB/s): min=14528, max=16384, per=29.47%, avg=15456.00, stdev=1312.39, samples=2 00:42:05.346 iops : min= 3632, max= 4096, avg=3864.00, stdev=328.10, samples=2 00:42:05.346 lat (usec) : 500=0.01%, 1000=0.04% 00:42:05.346 lat (msec) : 2=0.08%, 10=1.82%, 20=89.07%, 50=8.98% 00:42:05.346 cpu : usr=3.59%, sys=5.28%, ctx=283, majf=0, minf=1 00:42:05.346 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:42:05.346 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:05.346 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:42:05.346 issued rwts: 
total=3584,3992,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:05.346 latency : target=0, window=0, percentile=100.00%, depth=128 00:42:05.346 00:42:05.346 Run status group 0 (all jobs): 00:42:05.346 READ: bw=45.0MiB/s (47.1MB/s), 8815KiB/s-13.9MiB/s (9026kB/s-14.6MB/s), io=47.0MiB (49.3MB), run=1004-1046msec 00:42:05.346 WRITE: bw=51.2MiB/s (53.7MB/s), 9790KiB/s-15.5MiB/s (10.0MB/s-16.3MB/s), io=53.6MiB (56.2MB), run=1004-1046msec 00:42:05.346 00:42:05.346 Disk stats (read/write): 00:42:05.346 nvme0n1: ios=2035/2048, merge=0/0, ticks=11823/17469, in_queue=29292, util=98.30% 00:42:05.346 nvme0n2: ios=3010/3072, merge=0/0, ticks=44219/52457, in_queue=96676, util=86.80% 00:42:05.346 nvme0n3: ios=2586/2815, merge=0/0, ticks=14148/17862, in_queue=32010, util=90.93% 00:42:05.346 nvme0n4: ios=3072/3280, merge=0/0, ticks=17561/18311, in_queue=35872, util=89.71% 00:42:05.346 12:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:42:05.346 12:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3191484 00:42:05.346 12:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:42:05.346 12:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:42:05.346 [global] 00:42:05.346 thread=1 00:42:05.346 invalidate=1 00:42:05.346 rw=read 00:42:05.346 time_based=1 00:42:05.346 runtime=10 00:42:05.346 ioengine=libaio 00:42:05.346 direct=1 00:42:05.346 bs=4096 00:42:05.346 iodepth=1 00:42:05.346 norandommap=1 00:42:05.346 numjobs=1 00:42:05.346 00:42:05.346 [job0] 00:42:05.346 filename=/dev/nvme0n1 00:42:05.346 [job1] 00:42:05.346 filename=/dev/nvme0n2 00:42:05.346 [job2] 00:42:05.346 filename=/dev/nvme0n3 00:42:05.346 [job3] 00:42:05.346 filename=/dev/nvme0n4 00:42:05.346 Could not set queue depth (nvme0n1) 00:42:05.346 Could 
not set queue depth (nvme0n2) 00:42:05.346 Could not set queue depth (nvme0n3) 00:42:05.346 Could not set queue depth (nvme0n4) 00:42:05.605 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:42:05.605 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:42:05.605 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:42:05.605 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:42:05.605 fio-3.35 00:42:05.605 Starting 4 threads 00:42:08.889 12:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:42:08.889 12:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:42:08.889 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=286720, buflen=4096 00:42:08.889 fio: pid=3191575, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:42:08.889 12:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:42:08.889 12:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:42:08.889 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=2732032, buflen=4096 00:42:08.889 fio: pid=3191574, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:42:09.147 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=37560320, buflen=4096 00:42:09.147 fio: pid=3191572, err=95/file:io_u.c:1889, 
func=io_u error, error=Operation not supported 00:42:09.147 12:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:42:09.148 12:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:42:09.405 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=48746496, buflen=4096 00:42:09.406 fio: pid=3191573, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:42:09.664 00:42:09.664 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3191572: Mon Nov 18 12:10:35 2024 00:42:09.664 read: IOPS=2616, BW=10.2MiB/s (10.7MB/s)(35.8MiB/3505msec) 00:42:09.664 slat (usec): min=4, max=15715, avg=15.59, stdev=238.60 00:42:09.664 clat (usec): min=218, max=42342, avg=361.37, stdev=1229.95 00:42:09.664 lat (usec): min=227, max=42350, avg=376.96, stdev=1253.04 00:42:09.664 clat percentiles (usec): 00:42:09.664 | 1.00th=[ 247], 5.00th=[ 253], 10.00th=[ 258], 20.00th=[ 265], 00:42:09.664 | 30.00th=[ 277], 40.00th=[ 293], 50.00th=[ 306], 60.00th=[ 314], 00:42:09.664 | 70.00th=[ 330], 80.00th=[ 379], 90.00th=[ 445], 95.00th=[ 494], 00:42:09.664 | 99.00th=[ 570], 99.50th=[ 594], 99.90th=[ 1057], 99.95th=[42206], 00:42:09.664 | 99.99th=[42206] 00:42:09.664 bw ( KiB/s): min= 3248, max=13584, per=46.44%, avg=10502.67, stdev=3858.94, samples=6 00:42:09.664 iops : min= 812, max= 3396, avg=2625.67, stdev=964.74, samples=6 00:42:09.664 lat (usec) : 250=2.18%, 500=93.62%, 750=4.07% 00:42:09.664 lat (msec) : 2=0.03%, 50=0.09% 00:42:09.664 cpu : usr=1.54%, sys=3.97%, ctx=9180, majf=0, minf=1 00:42:09.664 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:09.664 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:09.664 
complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:09.664 issued rwts: total=9171,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:09.664 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:09.664 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3191573: Mon Nov 18 12:10:35 2024 00:42:09.664 read: IOPS=3085, BW=12.1MiB/s (12.6MB/s)(46.5MiB/3857msec) 00:42:09.664 slat (usec): min=5, max=12667, avg=13.11, stdev=219.41 00:42:09.664 clat (usec): min=238, max=58326, avg=306.70, stdev=648.55 00:42:09.664 lat (usec): min=244, max=58332, avg=319.81, stdev=685.10 00:42:09.664 clat percentiles (usec): 00:42:09.664 | 1.00th=[ 247], 5.00th=[ 260], 10.00th=[ 269], 20.00th=[ 277], 00:42:09.664 | 30.00th=[ 285], 40.00th=[ 293], 50.00th=[ 297], 60.00th=[ 306], 00:42:09.664 | 70.00th=[ 310], 80.00th=[ 318], 90.00th=[ 326], 95.00th=[ 330], 00:42:09.664 | 99.00th=[ 347], 99.50th=[ 359], 99.90th=[ 922], 99.95th=[ 1057], 00:42:09.664 | 99.99th=[40633] 00:42:09.664 bw ( KiB/s): min=11520, max=13408, per=54.79%, avg=12392.00, stdev=612.65, samples=7 00:42:09.664 iops : min= 2880, max= 3352, avg=3098.00, stdev=153.16, samples=7 00:42:09.664 lat (usec) : 250=1.89%, 500=97.87%, 750=0.09%, 1000=0.08% 00:42:09.664 lat (msec) : 2=0.03%, 4=0.01%, 50=0.01%, 100=0.01% 00:42:09.664 cpu : usr=1.89%, sys=4.02%, ctx=11910, majf=0, minf=2 00:42:09.664 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:09.664 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:09.665 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:09.665 issued rwts: total=11902,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:09.665 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:09.665 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3191574: Mon Nov 18 12:10:35 2024 00:42:09.665 read: 
IOPS=206, BW=824KiB/s (843kB/s)(2668KiB/3239msec) 00:42:09.665 slat (nsec): min=5375, max=37516, avg=9612.21, stdev=5622.59 00:42:09.665 clat (usec): min=251, max=41617, avg=4800.78, stdev=12528.22 00:42:09.665 lat (usec): min=257, max=41625, avg=4810.35, stdev=12531.02 00:42:09.665 clat percentiles (usec): 00:42:09.665 | 1.00th=[ 318], 5.00th=[ 338], 10.00th=[ 347], 20.00th=[ 363], 00:42:09.665 | 30.00th=[ 379], 40.00th=[ 433], 50.00th=[ 465], 60.00th=[ 502], 00:42:09.665 | 70.00th=[ 529], 80.00th=[ 586], 90.00th=[40633], 95.00th=[41157], 00:42:09.665 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:42:09.665 | 99.99th=[41681] 00:42:09.665 bw ( KiB/s): min= 96, max= 2992, per=3.90%, avg=881.33, stdev=1091.19, samples=6 00:42:09.665 iops : min= 24, max= 748, avg=220.33, stdev=272.80, samples=6 00:42:09.665 lat (usec) : 500=60.18%, 750=28.74%, 1000=0.15% 00:42:09.665 lat (msec) : 50=10.78% 00:42:09.665 cpu : usr=0.00%, sys=0.43%, ctx=669, majf=0, minf=1 00:42:09.665 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:09.665 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:09.665 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:09.665 issued rwts: total=668,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:09.665 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:09.665 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3191575: Mon Nov 18 12:10:35 2024 00:42:09.665 read: IOPS=23, BW=94.5KiB/s (96.7kB/s)(280KiB/2964msec) 00:42:09.665 slat (nsec): min=13233, max=37842, avg=20114.03, stdev=8861.63 00:42:09.665 clat (usec): min=40972, max=42955, avg=41917.36, stdev=286.22 00:42:09.665 lat (usec): min=40996, max=42979, avg=41937.55, stdev=286.48 00:42:09.665 clat percentiles (usec): 00:42:09.665 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41681], 20.00th=[42206], 00:42:09.665 | 30.00th=[42206], 
40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:42:09.665 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:42:09.665 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:42:09.665 | 99.99th=[42730] 00:42:09.665 bw ( KiB/s): min= 88, max= 96, per=0.42%, avg=94.40, stdev= 3.58, samples=5 00:42:09.665 iops : min= 22, max= 24, avg=23.60, stdev= 0.89, samples=5 00:42:09.665 lat (msec) : 50=98.59% 00:42:09.665 cpu : usr=0.00%, sys=0.10%, ctx=72, majf=0, minf=2 00:42:09.665 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:09.665 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:09.665 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:09.665 issued rwts: total=71,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:09.665 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:09.665 00:42:09.665 Run status group 0 (all jobs): 00:42:09.665 READ: bw=22.1MiB/s (23.2MB/s), 94.5KiB/s-12.1MiB/s (96.7kB/s-12.6MB/s), io=85.2MiB (89.3MB), run=2964-3857msec 00:42:09.665 00:42:09.665 Disk stats (read/write): 00:42:09.665 nvme0n1: ios=8785/0, merge=0/0, ticks=3271/0, in_queue=3271, util=98.97% 00:42:09.665 nvme0n2: ios=11902/0, merge=0/0, ticks=3591/0, in_queue=3591, util=95.18% 00:42:09.665 nvme0n3: ios=664/0, merge=0/0, ticks=3082/0, in_queue=3082, util=96.79% 00:42:09.665 nvme0n4: ios=117/0, merge=0/0, ticks=3265/0, in_queue=3265, util=99.15% 00:42:09.665 12:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:42:09.665 12:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:42:09.923 12:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs 
$raid_malloc_bdevs $concat_malloc_bdevs 00:42:09.923 12:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:42:10.181 12:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:42:10.181 12:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:42:10.439 12:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:42:10.439 12:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:42:11.007 12:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:42:11.007 12:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:42:11.267 12:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:42:11.267 12:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 3191484 00:42:11.267 12:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:42:11.267 12:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:42:12.206 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:42:12.207 12:10:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:42:12.207 12:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:42:12.207 12:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:42:12.207 12:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:42:12.207 12:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:42:12.207 12:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:42:12.207 12:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:42:12.207 12:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:42:12.207 12:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:42:12.207 nvmf hotplug test: fio failed as expected 00:42:12.207 12:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:42:12.465 12:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:42:12.465 12:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:42:12.465 12:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:42:12.465 12:10:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:42:12.465 12:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:42:12.465 12:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:42:12.465 12:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:42:12.465 12:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:12.465 12:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:42:12.466 12:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:12.466 12:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:12.466 rmmod nvme_tcp 00:42:12.466 rmmod nvme_fabrics 00:42:12.466 rmmod nvme_keyring 00:42:12.466 12:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:12.466 12:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:42:12.466 12:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:42:12.466 12:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 3189336 ']' 00:42:12.466 12:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 3189336 00:42:12.466 12:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 3189336 ']' 00:42:12.466 12:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 3189336 00:42:12.466 12:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target 
-- common/autotest_common.sh@959 -- # uname 00:42:12.466 12:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:12.466 12:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3189336 00:42:12.466 12:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:12.466 12:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:12.466 12:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3189336' 00:42:12.466 killing process with pid 3189336 00:42:12.466 12:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 3189336 00:42:12.466 12:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 3189336 00:42:13.843 12:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:42:13.843 12:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:42:13.843 12:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:42:13.843 12:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:42:13.843 12:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:42:13.843 12:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:42:13.843 12:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:42:13.843 12:10:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:13.843 12:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:13.843 12:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:13.843 12:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:13.843 12:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:15.749 12:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:15.749 00:42:15.749 real 0m26.895s 00:42:15.749 user 1m13.486s 00:42:15.749 sys 0m10.486s 00:42:15.749 12:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:15.749 12:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:42:15.749 ************************************ 00:42:15.749 END TEST nvmf_fio_target 00:42:15.749 ************************************ 00:42:15.749 12:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:42:15.749 12:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:42:15.749 12:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:15.749 12:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:42:15.749 ************************************ 00:42:15.749 START TEST nvmf_bdevio 00:42:15.749 
************************************ 00:42:15.749 12:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:42:15.749 * Looking for test storage... 00:42:15.749 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:15.749 12:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:42:15.749 12:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:42:15.749 12:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:42:15.749 12:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:42:15.749 12:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:15.749 12:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:15.749 12:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:15.749 12:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:42:15.749 12:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:42:15.749 12:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:42:15.749 12:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:42:15.749 12:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:42:15.749 12:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 
00:42:15.749 12:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:42:15.749 12:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:15.749 12:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:42:15.749 12:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:42:15.749 12:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:15.749 12:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:42:15.749 12:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:42:15.749 12:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:42:15.750 12:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:15.750 12:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:42:15.750 12:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:42:15.750 12:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:42:15.750 12:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:42:15.750 12:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:15.750 12:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:42:15.750 12:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:42:15.750 12:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:15.750 12:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:15.750 12:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:42:15.750 12:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:15.750 12:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:42:15.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:15.750 --rc genhtml_branch_coverage=1 00:42:15.750 --rc genhtml_function_coverage=1 00:42:15.750 --rc genhtml_legend=1 00:42:15.750 --rc geninfo_all_blocks=1 00:42:15.750 --rc geninfo_unexecuted_blocks=1 00:42:15.750 00:42:15.750 ' 00:42:15.750 12:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:42:15.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:15.750 --rc genhtml_branch_coverage=1 00:42:15.750 --rc genhtml_function_coverage=1 00:42:15.750 --rc genhtml_legend=1 00:42:15.750 --rc geninfo_all_blocks=1 00:42:15.750 --rc geninfo_unexecuted_blocks=1 00:42:15.750 00:42:15.750 ' 00:42:15.750 12:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:42:15.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:15.750 --rc genhtml_branch_coverage=1 00:42:15.750 --rc genhtml_function_coverage=1 00:42:15.750 --rc genhtml_legend=1 00:42:15.750 --rc geninfo_all_blocks=1 00:42:15.750 --rc geninfo_unexecuted_blocks=1 00:42:15.750 00:42:15.750 ' 00:42:15.750 12:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:42:15.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:42:15.750 --rc genhtml_branch_coverage=1 00:42:15.750 --rc genhtml_function_coverage=1 00:42:15.750 --rc genhtml_legend=1 00:42:15.750 --rc geninfo_all_blocks=1 00:42:15.750 --rc geninfo_unexecuted_blocks=1 00:42:15.750 00:42:15.750 ' 00:42:15.750 12:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:15.750 12:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:42:15.750 12:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:15.750 12:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:15.750 12:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:15.750 12:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:15.750 12:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:15.750 12:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:15.750 12:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:15.750 12:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:15.750 12:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:15.750 12:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:15.750 12:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:42:15.750 12:10:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:42:15.750 12:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:15.750 12:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:15.750 12:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:15.750 12:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:15.750 12:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:15.750 12:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:42:15.750 12:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:15.750 12:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:15.750 12:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:15.750 12:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:15.750 12:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:15.750 12:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:15.750 12:10:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:42:15.750 12:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:15.750 12:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:42:15.750 12:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:15.750 12:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:15.750 12:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:15.750 12:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:15.750 12:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:15.750 12:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:42:15.750 12:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:42:15.750 12:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:15.750 12:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 
-- # '[' 0 -eq 1 ']' 00:42:15.750 12:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:15.750 12:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:42:15.750 12:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:42:15.750 12:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:42:15.750 12:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:42:15.750 12:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:15.750 12:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:42:15.750 12:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:42:15.750 12:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:42:15.750 12:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:15.750 12:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:15.750 12:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:15.750 12:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:42:15.750 12:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:42:15.750 12:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:42:15.750 12:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
common/autotest_common.sh@10 -- # set +x 00:42:18.287 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:18.287 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:42:18.287 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:18.287 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:18.287 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:18.287 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:18.287 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:18.287 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:42:18.287 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:18.287 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:42:18.287 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:42:18.287 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:42:18.287 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:42:18.287 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:42:18.287 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:42:18.287 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:18.287 12:10:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:18.287 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:18.287 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:18.287 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:18.287 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:18.287 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:18.287 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:42:18.287 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:18.287 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:18.287 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:18.287 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:18.287 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:42:18.287 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:42:18.287 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:42:18.287 12:10:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:42:18.287 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:42:18.287 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:42:18.287 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:18.287 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:42:18.287 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:42:18.287 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:18.287 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:18.287 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:18.287 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:18.287 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:18.287 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:18.287 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:42:18.287 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:42:18.287 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:18.287 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:18.287 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:42:18.287 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:18.287 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:18.287 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:42:18.287 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:42:18.287 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:42:18.287 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:18.287 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:18.287 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:18.287 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:18.287 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:18.287 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:18.287 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:18.287 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:42:18.287 Found net devices under 0000:0a:00.0: cvl_0_0 00:42:18.287 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:18.287 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:42:18.287 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:18.287 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:18.287 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:18.287 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:18.287 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:18.287 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:18.288 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:42:18.288 Found net devices under 0000:0a:00.1: cvl_0_1 00:42:18.288 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:18.288 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:42:18.288 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:42:18.288 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:42:18.288 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:42:18.288 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:42:18.288 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:18.288 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:18.288 
12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:18.288 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:18.288 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:18.288 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:18.288 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:18.288 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:18.288 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:18.288 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:18.288 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:18.288 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:18.288 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:18.288 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:18.288 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:18.288 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:18.288 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:42:18.288 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:18.288 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:18.288 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:18.288 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:18.288 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:18.288 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:18.288 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:18.288 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:42:18.288 00:42:18.288 --- 10.0.0.2 ping statistics --- 00:42:18.288 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:18.288 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:42:18.288 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:18.288 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:42:18.288 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.073 ms 00:42:18.288 00:42:18.288 --- 10.0.0.1 ping statistics --- 00:42:18.288 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:18.288 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:42:18.288 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:18.288 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:42:18.288 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:42:18.288 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:18.288 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:42:18.288 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:42:18.288 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:18.288 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:42:18.288 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:42:18.288 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:42:18.288 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:42:18.288 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:18.288 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:18.288 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@509 -- # nvmfpid=3194460 00:42:18.288 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 3194460 00:42:18.288 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 3194460 ']' 00:42:18.288 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:42:18.288 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:18.288 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:18.288 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:18.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:18.288 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:18.288 12:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:18.288 [2024-11-18 12:10:43.928195] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:42:18.288 [2024-11-18 12:10:43.930746] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:42:18.288 [2024-11-18 12:10:43.930874] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:18.288 [2024-11-18 12:10:44.086061] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:42:18.547 [2024-11-18 12:10:44.233013] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:18.548 [2024-11-18 12:10:44.233105] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:18.548 [2024-11-18 12:10:44.233135] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:18.548 [2024-11-18 12:10:44.233160] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:18.548 [2024-11-18 12:10:44.233185] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:18.548 [2024-11-18 12:10:44.236178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:42:18.548 [2024-11-18 12:10:44.236267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:42:18.548 [2024-11-18 12:10:44.236350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:42:18.548 [2024-11-18 12:10:44.236389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:42:18.806 [2024-11-18 12:10:44.612986] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:42:18.806 [2024-11-18 12:10:44.621800] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:42:18.806 [2024-11-18 12:10:44.621993] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:42:18.806 [2024-11-18 12:10:44.622848] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:42:18.806 [2024-11-18 12:10:44.623213] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:42:19.065 12:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:19.065 12:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:42:19.065 12:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:42:19.065 12:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:19.065 12:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:19.065 12:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:19.065 12:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:42:19.065 12:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:19.324 12:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:19.324 [2024-11-18 12:10:44.957591] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:19.324 12:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:19.324 12:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:42:19.324 12:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:42:19.324 12:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:19.324 Malloc0 00:42:19.324 12:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:19.324 12:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:42:19.324 12:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:19.324 12:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:19.324 12:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:19.324 12:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:42:19.324 12:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:19.324 12:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:19.324 12:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:19.324 12:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:19.324 12:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:19.324 12:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:19.324 [2024-11-18 12:10:45.073837] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:42:19.324 12:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:19.324 12:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:42:19.324 12:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:42:19.324 12:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:42:19.324 12:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:42:19.324 12:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:19.324 12:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:19.324 { 00:42:19.324 "params": { 00:42:19.324 "name": "Nvme$subsystem", 00:42:19.324 "trtype": "$TEST_TRANSPORT", 00:42:19.324 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:19.324 "adrfam": "ipv4", 00:42:19.324 "trsvcid": "$NVMF_PORT", 00:42:19.324 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:19.324 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:19.324 "hdgst": ${hdgst:-false}, 00:42:19.324 "ddgst": ${ddgst:-false} 00:42:19.324 }, 00:42:19.324 "method": "bdev_nvme_attach_controller" 00:42:19.324 } 00:42:19.324 EOF 00:42:19.324 )") 00:42:19.324 12:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:42:19.324 12:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:42:19.324 12:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:42:19.324 12:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:19.324 "params": { 00:42:19.324 "name": "Nvme1", 00:42:19.324 "trtype": "tcp", 00:42:19.324 "traddr": "10.0.0.2", 00:42:19.324 "adrfam": "ipv4", 00:42:19.324 "trsvcid": "4420", 00:42:19.324 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:19.324 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:19.324 "hdgst": false, 00:42:19.324 "ddgst": false 00:42:19.324 }, 00:42:19.324 "method": "bdev_nvme_attach_controller" 00:42:19.324 }' 00:42:19.324 [2024-11-18 12:10:45.159938] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:42:19.324 [2024-11-18 12:10:45.160077] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3194615 ] 00:42:19.583 [2024-11-18 12:10:45.301312] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:42:19.583 [2024-11-18 12:10:45.434299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:19.583 [2024-11-18 12:10:45.434346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:19.583 [2024-11-18 12:10:45.434350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:42:20.148 I/O targets: 00:42:20.148 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:42:20.148 00:42:20.148 00:42:20.148 CUnit - A unit testing framework for C - Version 2.1-3 00:42:20.148 http://cunit.sourceforge.net/ 00:42:20.148 00:42:20.148 00:42:20.148 Suite: bdevio tests on: Nvme1n1 00:42:20.407 Test: blockdev write read block ...passed 00:42:20.407 Test: blockdev write zeroes read block ...passed 00:42:20.407 Test: blockdev write zeroes read no split ...passed 00:42:20.407 Test: blockdev 
write zeroes read split ...passed 00:42:20.407 Test: blockdev write zeroes read split partial ...passed 00:42:20.407 Test: blockdev reset ...[2024-11-18 12:10:46.164375] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:42:20.407 [2024-11-18 12:10:46.164566] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2f00 (9): Bad file descriptor 00:42:20.407 [2024-11-18 12:10:46.212714] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:42:20.407 passed 00:42:20.407 Test: blockdev write read 8 blocks ...passed 00:42:20.407 Test: blockdev write read size > 128k ...passed 00:42:20.407 Test: blockdev write read invalid size ...passed 00:42:20.667 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:42:20.667 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:42:20.667 Test: blockdev write read max offset ...passed 00:42:20.667 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:42:20.667 Test: blockdev writev readv 8 blocks ...passed 00:42:20.667 Test: blockdev writev readv 30 x 1block ...passed 00:42:20.667 Test: blockdev writev readv block ...passed 00:42:20.667 Test: blockdev writev readv size > 128k ...passed 00:42:20.667 Test: blockdev writev readv size > 128k in two iovs ...passed 00:42:20.667 Test: blockdev comparev and writev ...[2024-11-18 12:10:46.428836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:42:20.667 [2024-11-18 12:10:46.428893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:20.667 [2024-11-18 12:10:46.428933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 
00:42:20.667 [2024-11-18 12:10:46.428962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:20.667 [2024-11-18 12:10:46.429513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:42:20.667 [2024-11-18 12:10:46.429549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:42:20.667 [2024-11-18 12:10:46.429585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:42:20.667 [2024-11-18 12:10:46.429612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:42:20.667 [2024-11-18 12:10:46.430161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:42:20.667 [2024-11-18 12:10:46.430206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:42:20.667 [2024-11-18 12:10:46.430243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:42:20.667 [2024-11-18 12:10:46.430270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:42:20.667 [2024-11-18 12:10:46.430823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:42:20.667 [2024-11-18 12:10:46.430858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:42:20.667 [2024-11-18 12:10:46.430893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:42:20.667 [2024-11-18 12:10:46.430920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:42:20.667 passed 00:42:20.667 Test: blockdev nvme passthru rw ...passed 00:42:20.667 Test: blockdev nvme passthru vendor specific ...[2024-11-18 12:10:46.512849] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:42:20.667 [2024-11-18 12:10:46.512889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:42:20.667 [2024-11-18 12:10:46.513109] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:42:20.667 [2024-11-18 12:10:46.513141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:42:20.667 [2024-11-18 12:10:46.513348] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:42:20.667 [2024-11-18 12:10:46.513382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:42:20.667 [2024-11-18 12:10:46.513594] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:42:20.667 [2024-11-18 12:10:46.513628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:42:20.667 passed 00:42:20.667 Test: blockdev nvme admin passthru ...passed 00:42:20.926 Test: blockdev copy ...passed 00:42:20.926 00:42:20.926 Run Summary: Type Total Ran Passed Failed Inactive 00:42:20.926 suites 1 1 n/a 0 0 00:42:20.926 tests 23 23 23 0 0 00:42:20.926 asserts 152 152 152 0 n/a 00:42:20.926 00:42:20.926 Elapsed time = 
1.158 seconds 00:42:21.861 12:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:42:21.861 12:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:21.861 12:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:21.861 12:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:21.861 12:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:42:21.861 12:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:42:21.861 12:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:42:21.861 12:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:42:21.861 12:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:21.861 12:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:42:21.861 12:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:21.861 12:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:21.861 rmmod nvme_tcp 00:42:21.862 rmmod nvme_fabrics 00:42:21.862 rmmod nvme_keyring 00:42:21.862 12:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:21.862 12:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:42:21.862 12:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:42:21.862 12:10:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 3194460 ']' 00:42:21.862 12:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 3194460 00:42:21.862 12:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 3194460 ']' 00:42:21.862 12:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 3194460 00:42:21.862 12:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:42:21.862 12:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:21.862 12:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3194460 00:42:21.862 12:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:42:21.862 12:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:42:21.862 12:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3194460' 00:42:21.862 killing process with pid 3194460 00:42:21.862 12:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 3194460 00:42:21.862 12:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 3194460 00:42:23.239 12:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:42:23.239 12:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:42:23.239 12:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:42:23.239 12:10:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:42:23.239 12:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:42:23.239 12:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:42:23.239 12:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:42:23.239 12:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:23.239 12:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:23.239 12:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:23.239 12:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:23.239 12:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:25.155 12:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:25.155 00:42:25.155 real 0m9.465s 00:42:25.155 user 0m17.011s 00:42:25.155 sys 0m3.182s 00:42:25.155 12:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:25.155 12:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:25.155 ************************************ 00:42:25.155 END TEST nvmf_bdevio 00:42:25.155 ************************************ 00:42:25.155 12:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:42:25.155 00:42:25.155 real 4m28.763s 00:42:25.155 user 9m51.131s 00:42:25.155 sys 1m28.651s 00:42:25.155 12:10:50 
nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:25.155 12:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:42:25.155 ************************************ 00:42:25.155 END TEST nvmf_target_core_interrupt_mode 00:42:25.155 ************************************ 00:42:25.155 12:10:50 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:42:25.155 12:10:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:42:25.155 12:10:50 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:25.155 12:10:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:25.155 ************************************ 00:42:25.155 START TEST nvmf_interrupt 00:42:25.155 ************************************ 00:42:25.155 12:10:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:42:25.155 * Looking for test storage... 
00:42:25.155 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:25.155 12:10:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:42:25.415 12:10:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lcov --version 00:42:25.415 12:10:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:42:25.415 12:10:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:42:25.415 12:10:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:25.415 12:10:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:25.415 12:10:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:25.415 12:10:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:42:25.415 12:10:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:42:25.415 12:10:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:42:25.415 12:10:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:42:25.415 12:10:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:42:25.415 12:10:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:42:25.415 12:10:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:42:25.415 12:10:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:25.415 12:10:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:42:25.415 12:10:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:42:25.415 12:10:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:25.415 12:10:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:25.415 12:10:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:42:25.415 12:10:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:42:25.415 12:10:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:25.415 12:10:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:42:25.415 12:10:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:42:25.415 12:10:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:42:25.415 12:10:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:42:25.415 12:10:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:25.415 12:10:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:42:25.415 12:10:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:42:25.415 12:10:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:25.415 12:10:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:25.415 12:10:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:42:25.415 12:10:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:25.415 12:10:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:42:25.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:25.415 --rc genhtml_branch_coverage=1 00:42:25.415 --rc genhtml_function_coverage=1 00:42:25.415 --rc genhtml_legend=1 00:42:25.415 --rc geninfo_all_blocks=1 00:42:25.415 --rc geninfo_unexecuted_blocks=1 00:42:25.415 00:42:25.415 ' 00:42:25.415 12:10:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:42:25.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:25.415 --rc genhtml_branch_coverage=1 00:42:25.415 --rc 
genhtml_function_coverage=1 00:42:25.415 --rc genhtml_legend=1 00:42:25.415 --rc geninfo_all_blocks=1 00:42:25.415 --rc geninfo_unexecuted_blocks=1 00:42:25.415 00:42:25.415 ' 00:42:25.415 12:10:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:42:25.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:25.415 --rc genhtml_branch_coverage=1 00:42:25.415 --rc genhtml_function_coverage=1 00:42:25.415 --rc genhtml_legend=1 00:42:25.415 --rc geninfo_all_blocks=1 00:42:25.416 --rc geninfo_unexecuted_blocks=1 00:42:25.416 00:42:25.416 ' 00:42:25.416 12:10:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:42:25.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:25.416 --rc genhtml_branch_coverage=1 00:42:25.416 --rc genhtml_function_coverage=1 00:42:25.416 --rc genhtml_legend=1 00:42:25.416 --rc geninfo_all_blocks=1 00:42:25.416 --rc geninfo_unexecuted_blocks=1 00:42:25.416 00:42:25.416 ' 00:42:25.416 12:10:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:25.416 12:10:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:42:25.416 12:10:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:25.416 12:10:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:25.416 12:10:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:25.416 12:10:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:25.416 12:10:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:25.416 12:10:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:25.416 12:10:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:25.416 12:10:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:25.416 
12:10:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:25.416 12:10:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:25.416 12:10:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:42:25.416 12:10:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:42:25.416 12:10:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:25.416 12:10:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:25.416 12:10:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:25.416 12:10:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:25.416 12:10:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:25.416 12:10:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:42:25.416 12:10:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:25.416 12:10:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:25.416 12:10:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:25.416 12:10:51 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:25.416 
12:10:51 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:25.416 12:10:51 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:25.416 12:10:51 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:42:25.416 12:10:51 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:25.416 12:10:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:42:25.416 12:10:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:25.416 12:10:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:25.416 12:10:51 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:25.416 12:10:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:25.416 12:10:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:25.416 12:10:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:42:25.416 12:10:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:42:25.416 12:10:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:25.416 12:10:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:25.416 12:10:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:25.416 12:10:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:42:25.416 12:10:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:42:25.416 12:10:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:42:25.416 12:10:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:42:25.416 12:10:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:25.416 12:10:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:42:25.416 12:10:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:42:25.416 12:10:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:42:25.416 12:10:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:25.416 12:10:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:42:25.416 12:10:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:25.416 12:10:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:42:25.416 
12:10:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:42:25.416 12:10:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:42:25.416 12:10:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:27.322 12:10:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:27.322 12:10:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:42:27.322 12:10:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:27.322 12:10:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:27.322 12:10:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:27.322 12:10:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:27.322 12:10:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:27.322 12:10:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:42:27.322 12:10:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:27.322 12:10:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:42:27.322 12:10:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:42:27.322 12:10:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:42:27.322 12:10:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:42:27.322 12:10:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:42:27.322 12:10:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:42:27.322 12:10:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:27.322 12:10:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:27.322 12:10:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:27.322 12:10:53 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:27.322 12:10:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:27.322 12:10:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:27.322 12:10:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:27.322 12:10:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:42:27.322 12:10:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:27.322 12:10:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:27.322 12:10:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:27.322 12:10:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:27.322 12:10:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:42:27.322 12:10:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:42:27.322 12:10:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:42:27.322 12:10:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:42:27.322 12:10:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:42:27.322 12:10:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:42:27.322 12:10:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:27.322 12:10:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:42:27.322 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:42:27.322 12:10:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:27.322 12:10:53 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:27.322 12:10:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:27.322 12:10:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:27.322 12:10:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:27.322 12:10:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:27.322 12:10:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:42:27.322 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:42:27.322 12:10:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:27.322 12:10:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:27.322 12:10:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:27.322 12:10:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:27.322 12:10:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:27.322 12:10:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:42:27.322 12:10:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:42:27.322 12:10:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:42:27.322 12:10:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:27.322 12:10:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:27.322 12:10:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:27.322 12:10:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:27.322 12:10:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:27.322 12:10:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:27.322 12:10:53 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:27.322 12:10:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:42:27.322 Found net devices under 0000:0a:00.0: cvl_0_0 00:42:27.322 12:10:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:27.322 12:10:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:27.322 12:10:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:27.322 12:10:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:27.322 12:10:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:27.322 12:10:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:27.322 12:10:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:27.322 12:10:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:27.322 12:10:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:42:27.322 Found net devices under 0000:0a:00.1: cvl_0_1 00:42:27.322 12:10:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:27.322 12:10:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:42:27.322 12:10:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:42:27.322 12:10:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:42:27.322 12:10:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:42:27.322 12:10:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:42:27.322 12:10:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:27.322 12:10:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:27.322 12:10:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:27.322 12:10:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:27.322 12:10:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:27.322 12:10:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:27.322 12:10:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:27.322 12:10:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:27.322 12:10:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:27.322 12:10:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:27.322 12:10:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:27.322 12:10:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:27.322 12:10:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:27.322 12:10:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:27.323 12:10:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:27.581 12:10:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:27.581 12:10:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:27.581 12:10:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:27.581 12:10:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:27.581 12:10:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:27.581 12:10:53 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:27.581 12:10:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:27.581 12:10:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:27.581 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:27.581 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.269 ms 00:42:27.581 00:42:27.581 --- 10.0.0.2 ping statistics --- 00:42:27.581 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:27.581 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:42:27.581 12:10:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:27.581 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:42:27.581 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:42:27.581 00:42:27.581 --- 10.0.0.1 ping statistics --- 00:42:27.581 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:27.581 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:42:27.581 12:10:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:27.581 12:10:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:42:27.581 12:10:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:42:27.581 12:10:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:27.581 12:10:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:42:27.581 12:10:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:42:27.581 12:10:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:27.581 12:10:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:42:27.581 12:10:53 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:42:27.581 12:10:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:42:27.581 12:10:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:42:27.581 12:10:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:27.581 12:10:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:27.581 12:10:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=3196960 00:42:27.581 12:10:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:42:27.581 12:10:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 3196960 00:42:27.581 12:10:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 3196960 ']' 00:42:27.581 12:10:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:27.581 12:10:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:27.581 12:10:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:27.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:27.581 12:10:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:27.581 12:10:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:27.581 [2024-11-18 12:10:53.388725] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:42:27.581 [2024-11-18 12:10:53.391165] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:42:27.581 [2024-11-18 12:10:53.391275] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:27.841 [2024-11-18 12:10:53.535441] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:42:27.841 [2024-11-18 12:10:53.669279] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:27.841 [2024-11-18 12:10:53.669378] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:27.841 [2024-11-18 12:10:53.669407] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:27.841 [2024-11-18 12:10:53.669429] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:27.841 [2024-11-18 12:10:53.669470] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:27.841 [2024-11-18 12:10:53.672129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:27.841 [2024-11-18 12:10:53.672136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:28.410 [2024-11-18 12:10:54.044113] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:42:28.410 [2024-11-18 12:10:54.044843] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:42:28.410 [2024-11-18 12:10:54.045189] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:42:28.670 12:10:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:28.670 12:10:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:42:28.670 12:10:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:42:28.670 12:10:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:28.670 12:10:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:28.670 12:10:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:28.670 12:10:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:42:28.670 12:10:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:42:28.670 12:10:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:42:28.670 12:10:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:42:28.670 5000+0 records in 00:42:28.670 5000+0 records out 00:42:28.670 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0143294 s, 715 MB/s 00:42:28.670 12:10:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:42:28.670 12:10:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:28.670 12:10:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:28.670 AIO0 00:42:28.670 12:10:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:28.670 12:10:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:42:28.670 12:10:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:28.670 12:10:54 
nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:28.670 [2024-11-18 12:10:54.493233] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:28.670 12:10:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:28.670 12:10:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:42:28.670 12:10:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:28.670 12:10:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:28.670 12:10:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:28.670 12:10:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:42:28.670 12:10:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:28.670 12:10:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:28.670 12:10:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:28.670 12:10:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:28.670 12:10:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:28.670 12:10:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:28.670 [2024-11-18 12:10:54.521514] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:28.670 12:10:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:28.670 12:10:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:42:28.670 12:10:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3196960 0 00:42:28.670 12:10:54 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3196960 0 idle 00:42:28.670 12:10:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3196960 00:42:28.670 12:10:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:42:28.670 12:10:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:42:28.670 12:10:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:42:28.670 12:10:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:42:28.670 12:10:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:42:28.670 12:10:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:42:28.670 12:10:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:42:28.670 12:10:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:42:28.670 12:10:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:28.670 12:10:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3196960 -w 256 00:42:28.670 12:10:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:42:28.931 12:10:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3196960 root 20 0 20.1t 195840 100992 S 0.0 0.3 0:00.75 reactor_0' 00:42:28.931 12:10:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3196960 root 20 0 20.1t 195840 100992 S 0.0 0.3 0:00.75 reactor_0 00:42:28.931 12:10:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:28.931 12:10:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:28.931 12:10:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:42:28.931 12:10:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:42:28.931 12:10:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:42:28.931 
12:10:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:42:28.931 12:10:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:42:28.931 12:10:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:42:28.931 12:10:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:42:28.931 12:10:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3196960 1 00:42:28.931 12:10:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3196960 1 idle 00:42:28.931 12:10:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3196960 00:42:28.931 12:10:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:42:28.931 12:10:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:42:28.931 12:10:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:42:28.931 12:10:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:42:28.931 12:10:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:42:28.931 12:10:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:42:28.931 12:10:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:42:28.931 12:10:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:42:28.931 12:10:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:28.931 12:10:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3196960 -w 256 00:42:28.931 12:10:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:42:29.190 12:10:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3196969 root 20 0 20.1t 195840 100992 S 0.0 0.3 0:00.00 reactor_1' 00:42:29.190 12:10:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3196969 root 20 0 20.1t 
195840 100992 S 0.0 0.3 0:00.00 reactor_1 00:42:29.190 12:10:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:29.190 12:10:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:29.191 12:10:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:42:29.191 12:10:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:42:29.191 12:10:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:42:29.191 12:10:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:42:29.191 12:10:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:42:29.191 12:10:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:42:29.191 12:10:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:42:29.191 12:10:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=3197145 00:42:29.191 12:10:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:42:29.191 12:10:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:42:29.191 12:10:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:42:29.191 12:10:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3196960 0 00:42:29.191 12:10:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3196960 0 busy 00:42:29.191 12:10:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3196960 00:42:29.191 12:10:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:42:29.191 12:10:54 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@12 -- # local state=busy 00:42:29.191 12:10:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:42:29.191 12:10:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:42:29.191 12:10:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:42:29.191 12:10:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:42:29.191 12:10:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:42:29.191 12:10:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:29.191 12:10:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3196960 -w 256 00:42:29.191 12:10:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:42:29.191 12:10:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3196960 root 20 0 20.1t 202368 102144 R 60.0 0.3 0:00.84 reactor_0' 00:42:29.191 12:10:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3196960 root 20 0 20.1t 202368 102144 R 60.0 0.3 0:00.84 reactor_0 00:42:29.191 12:10:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:29.191 12:10:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:29.191 12:10:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=60.0 00:42:29.191 12:10:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=60 00:42:29.191 12:10:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:42:29.191 12:10:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:42:29.191 12:10:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:42:29.191 12:10:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:42:29.191 12:10:55 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:42:29.191 12:10:55 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:42:29.191 12:10:55 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3196960 1 00:42:29.191 12:10:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3196960 1 busy 00:42:29.191 12:10:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3196960 00:42:29.191 12:10:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:42:29.191 12:10:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:42:29.191 12:10:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:42:29.191 12:10:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:42:29.191 12:10:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:42:29.191 12:10:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:42:29.191 12:10:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:42:29.191 12:10:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:29.191 12:10:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3196960 -w 256 00:42:29.191 12:10:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:42:29.451 12:10:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3196969 root 20 0 20.1t 206976 102144 R 99.9 0.3 0:00.22 reactor_1' 00:42:29.451 12:10:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3196969 root 20 0 20.1t 206976 102144 R 99.9 0.3 0:00.22 reactor_1 00:42:29.451 12:10:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:29.451 12:10:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:29.451 12:10:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:42:29.451 12:10:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- 
# cpu_rate=99 00:42:29.451 12:10:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:42:29.451 12:10:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:42:29.451 12:10:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:42:29.451 12:10:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:42:29.451 12:10:55 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 3197145 00:42:39.500 Initializing NVMe Controllers 00:42:39.500 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:42:39.500 Controller IO queue size 256, less than required. 00:42:39.500 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:42:39.500 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:42:39.500 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:42:39.500 Initialization complete. Launching workers. 
00:42:39.500 ======================================================== 00:42:39.500 Latency(us) 00:42:39.500 Device Information : IOPS MiB/s Average min max 00:42:39.500 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 10589.14 41.36 24199.30 6945.10 64318.61 00:42:39.500 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 10759.14 42.03 23812.91 6673.52 29414.26 00:42:39.500 ======================================================== 00:42:39.500 Total : 21348.28 83.39 24004.57 6673.52 64318.61 00:42:39.500 00:42:39.500 12:11:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:42:39.500 12:11:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3196960 0 00:42:39.500 12:11:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3196960 0 idle 00:42:39.500 12:11:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3196960 00:42:39.500 12:11:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:42:39.500 12:11:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:42:39.500 12:11:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:42:39.500 12:11:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:42:39.500 12:11:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:42:39.500 12:11:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:42:39.500 12:11:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:42:39.500 12:11:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:42:39.500 12:11:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:39.500 12:11:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3196960 -w 256 00:42:39.500 12:11:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # 
grep reactor_0 00:42:39.500 12:11:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3196960 root 20 0 20.1t 209664 102144 S 0.0 0.3 0:20.70 reactor_0' 00:42:39.500 12:11:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3196960 root 20 0 20.1t 209664 102144 S 0.0 0.3 0:20.70 reactor_0 00:42:39.500 12:11:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:39.500 12:11:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:39.500 12:11:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:42:39.500 12:11:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:42:39.500 12:11:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:42:39.500 12:11:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:42:39.500 12:11:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:42:39.500 12:11:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:42:39.500 12:11:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:42:39.500 12:11:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3196960 1 00:42:39.500 12:11:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3196960 1 idle 00:42:39.500 12:11:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3196960 00:42:39.500 12:11:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:42:39.500 12:11:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:42:39.501 12:11:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:42:39.501 12:11:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:42:39.501 12:11:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:42:39.501 12:11:05 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:42:39.501 12:11:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:42:39.501 12:11:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:42:39.501 12:11:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:39.501 12:11:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3196960 -w 256 00:42:39.501 12:11:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:42:39.759 12:11:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3196969 root 20 0 20.1t 209664 102144 S 0.0 0.3 0:09.99 reactor_1' 00:42:39.759 12:11:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3196969 root 20 0 20.1t 209664 102144 S 0.0 0.3 0:09.99 reactor_1 00:42:39.759 12:11:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:39.759 12:11:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:39.759 12:11:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:42:39.759 12:11:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:42:39.759 12:11:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:42:39.759 12:11:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:42:39.759 12:11:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:42:39.759 12:11:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:42:39.759 12:11:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:42:40.018 12:11:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 
00:42:40.018 12:11:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:42:40.018 12:11:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:42:40.018 12:11:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:42:40.018 12:11:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:42:42.554 12:11:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:42:42.554 12:11:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:42:42.554 12:11:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:42:42.554 12:11:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:42:42.554 12:11:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:42:42.554 12:11:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:42:42.554 12:11:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:42:42.554 12:11:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3196960 0 00:42:42.554 12:11:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3196960 0 idle 00:42:42.554 12:11:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3196960 00:42:42.554 12:11:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:42:42.554 12:11:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:42:42.554 12:11:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:42:42.554 12:11:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:42:42.554 12:11:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:42:42.554 12:11:07 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:42:42.554 12:11:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:42:42.554 12:11:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:42:42.554 12:11:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:42.554 12:11:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3196960 -w 256 00:42:42.554 12:11:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:42:42.554 12:11:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3196960 root 20 0 20.1t 236928 111360 S 0.0 0.4 0:20.90 reactor_0' 00:42:42.554 12:11:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3196960 root 20 0 20.1t 236928 111360 S 0.0 0.4 0:20.90 reactor_0 00:42:42.554 12:11:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:42.554 12:11:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:42.554 12:11:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:42:42.554 12:11:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:42:42.554 12:11:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:42:42.554 12:11:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:42:42.554 12:11:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:42:42.554 12:11:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:42:42.554 12:11:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:42:42.554 12:11:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3196960 1 00:42:42.554 12:11:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3196960 1 idle 00:42:42.554 12:11:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3196960 
00:42:42.554 12:11:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:42:42.554 12:11:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:42:42.554 12:11:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:42:42.554 12:11:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:42:42.554 12:11:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:42:42.554 12:11:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:42:42.554 12:11:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:42:42.554 12:11:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:42:42.554 12:11:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:42.554 12:11:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3196960 -w 256 00:42:42.554 12:11:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:42:42.554 12:11:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3196969 root 20 0 20.1t 236928 111360 S 0.0 0.4 0:10.05 reactor_1' 00:42:42.554 12:11:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3196969 root 20 0 20.1t 236928 111360 S 0.0 0.4 0:10.05 reactor_1 00:42:42.554 12:11:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:42.554 12:11:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:42.554 12:11:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:42:42.554 12:11:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:42:42.554 12:11:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:42:42.554 12:11:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:42:42.554 12:11:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( 
cpu_rate > idle_threshold )) 00:42:42.554 12:11:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:42:42.554 12:11:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:42:42.554 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:42:42.554 12:11:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:42:42.554 12:11:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:42:42.554 12:11:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:42:42.554 12:11:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:42:42.813 12:11:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:42:42.813 12:11:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:42:42.813 12:11:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:42:42.813 12:11:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:42:42.813 12:11:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:42:42.813 12:11:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:42:42.813 12:11:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:42:42.813 12:11:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:42.813 12:11:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:42:42.813 12:11:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:42.813 12:11:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:42.813 rmmod nvme_tcp 00:42:42.813 rmmod nvme_fabrics 00:42:42.813 rmmod nvme_keyring 00:42:42.813 12:11:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:42.813 
12:11:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:42:42.813 12:11:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:42:42.813 12:11:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 3196960 ']' 00:42:42.813 12:11:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 3196960 00:42:42.813 12:11:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 3196960 ']' 00:42:42.813 12:11:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 3196960 00:42:42.813 12:11:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:42:42.813 12:11:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:42.813 12:11:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3196960 00:42:42.814 12:11:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:42.814 12:11:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:42.814 12:11:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3196960' 00:42:42.814 killing process with pid 3196960 00:42:42.814 12:11:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 3196960 00:42:42.814 12:11:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 3196960 00:42:44.217 12:11:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:42:44.217 12:11:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:42:44.217 12:11:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:42:44.217 12:11:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:42:44.217 12:11:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:42:44.217 12:11:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:42:44.217 12:11:09 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:42:44.217 12:11:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:44.217 12:11:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:44.217 12:11:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:44.217 12:11:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:42:44.217 12:11:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:46.122 12:11:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:46.122 00:42:46.122 real 0m20.807s 00:42:46.122 user 0m39.664s 00:42:46.122 sys 0m6.290s 00:42:46.122 12:11:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:46.122 12:11:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:46.122 ************************************ 00:42:46.122 END TEST nvmf_interrupt 00:42:46.122 ************************************ 00:42:46.122 00:42:46.122 real 35m41.389s 00:42:46.122 user 93m40.004s 00:42:46.122 sys 7m53.193s 00:42:46.122 12:11:11 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:46.122 12:11:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:46.122 ************************************ 00:42:46.122 END TEST nvmf_tcp 00:42:46.122 ************************************ 00:42:46.122 12:11:11 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:42:46.122 12:11:11 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:42:46.122 12:11:11 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:42:46.122 12:11:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:46.122 12:11:11 -- common/autotest_common.sh@10 -- # set +x 00:42:46.122 
************************************ 00:42:46.122 START TEST spdkcli_nvmf_tcp 00:42:46.122 ************************************ 00:42:46.122 12:11:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:42:46.122 * Looking for test storage... 00:42:46.122 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:42:46.122 12:11:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:42:46.122 12:11:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:42:46.122 12:11:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:42:46.380 12:11:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:42:46.381 12:11:12 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:46.381 12:11:12 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:46.381 12:11:12 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:46.381 12:11:12 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:42:46.381 12:11:12 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:42:46.381 12:11:12 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:42:46.381 12:11:12 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:42:46.381 12:11:12 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:42:46.381 12:11:12 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:42:46.381 12:11:12 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:42:46.381 12:11:12 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:46.381 12:11:12 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:42:46.381 12:11:12 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:42:46.381 12:11:12 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:46.381 12:11:12 spdkcli_nvmf_tcp -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:42:46.381 12:11:12 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:42:46.381 12:11:12 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:42:46.381 12:11:12 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:46.381 12:11:12 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:42:46.381 12:11:12 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:42:46.381 12:11:12 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:42:46.381 12:11:12 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:42:46.381 12:11:12 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:46.381 12:11:12 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:42:46.381 12:11:12 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:42:46.381 12:11:12 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:46.381 12:11:12 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:46.381 12:11:12 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:42:46.381 12:11:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:46.381 12:11:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:42:46.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:46.381 --rc genhtml_branch_coverage=1 00:42:46.381 --rc genhtml_function_coverage=1 00:42:46.381 --rc genhtml_legend=1 00:42:46.381 --rc geninfo_all_blocks=1 00:42:46.381 --rc geninfo_unexecuted_blocks=1 00:42:46.381 00:42:46.381 ' 00:42:46.381 12:11:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:42:46.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:46.381 --rc genhtml_branch_coverage=1 00:42:46.381 --rc genhtml_function_coverage=1 00:42:46.381 --rc genhtml_legend=1 00:42:46.381 
--rc geninfo_all_blocks=1 00:42:46.381 --rc geninfo_unexecuted_blocks=1 00:42:46.381 00:42:46.381 ' 00:42:46.381 12:11:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:42:46.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:46.381 --rc genhtml_branch_coverage=1 00:42:46.381 --rc genhtml_function_coverage=1 00:42:46.381 --rc genhtml_legend=1 00:42:46.381 --rc geninfo_all_blocks=1 00:42:46.381 --rc geninfo_unexecuted_blocks=1 00:42:46.381 00:42:46.381 ' 00:42:46.381 12:11:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:42:46.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:46.381 --rc genhtml_branch_coverage=1 00:42:46.381 --rc genhtml_function_coverage=1 00:42:46.381 --rc genhtml_legend=1 00:42:46.381 --rc geninfo_all_blocks=1 00:42:46.381 --rc geninfo_unexecuted_blocks=1 00:42:46.381 00:42:46.381 ' 00:42:46.381 12:11:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:42:46.381 12:11:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:42:46.381 12:11:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:42:46.381 12:11:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:46.381 12:11:12 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:42:46.381 12:11:12 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:46.381 12:11:12 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:46.381 12:11:12 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:46.381 12:11:12 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:46.381 12:11:12 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:42:46.381 12:11:12 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:46.381 12:11:12 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:46.381 12:11:12 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:46.381 12:11:12 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:46.381 12:11:12 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:46.381 12:11:12 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:42:46.381 12:11:12 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:42:46.381 12:11:12 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:46.381 12:11:12 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:46.381 12:11:12 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:46.381 12:11:12 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:46.381 12:11:12 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:46.381 12:11:12 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:42:46.381 12:11:12 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:46.381 12:11:12 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:46.381 12:11:12 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:46.381 12:11:12 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:46.381 12:11:12 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:46.381 12:11:12 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:46.381 12:11:12 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:42:46.381 12:11:12 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:46.381 12:11:12 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:42:46.381 12:11:12 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export 
NVMF_APP_SHM_ID 00:42:46.381 12:11:12 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:46.381 12:11:12 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:46.381 12:11:12 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:46.381 12:11:12 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:46.381 12:11:12 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:42:46.381 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:42:46.381 12:11:12 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:46.381 12:11:12 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:46.381 12:11:12 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:46.381 12:11:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:42:46.381 12:11:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:42:46.381 12:11:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:42:46.381 12:11:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:42:46.381 12:11:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:46.381 12:11:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:46.381 12:11:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:42:46.381 12:11:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3199274 00:42:46.381 12:11:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:42:46.381 12:11:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 3199274 00:42:46.381 12:11:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 3199274 ']' 00:42:46.381 12:11:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:46.381 
12:11:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:46.381 12:11:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:46.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:46.381 12:11:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:46.381 12:11:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:46.381 [2024-11-18 12:11:12.131357] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:42:46.382 [2024-11-18 12:11:12.131532] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3199274 ] 00:42:46.640 [2024-11-18 12:11:12.284349] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:42:46.640 [2024-11-18 12:11:12.423565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:46.640 [2024-11-18 12:11:12.423568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:47.576 12:11:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:47.576 12:11:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:42:47.576 12:11:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:42:47.576 12:11:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:47.576 12:11:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:47.576 12:11:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:42:47.576 12:11:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:42:47.576 12:11:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 
00:42:47.576 12:11:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:47.576 12:11:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:47.576 12:11:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:42:47.576 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:42:47.576 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:42:47.576 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:42:47.576 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:42:47.576 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:42:47.576 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:42:47.576 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:42:47.576 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:42:47.576 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:42:47.576 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:42:47.576 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:42:47.576 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:42:47.576 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:42:47.576 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 
00:42:47.576 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:42:47.576 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:42:47.576 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:42:47.576 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:42:47.577 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:42:47.577 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:42:47.577 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:42:47.577 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:42:47.577 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:42:47.577 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:42:47.577 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:42:47.577 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:42:47.577 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:42:47.577 ' 00:42:50.115 [2024-11-18 12:11:15.986134] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:51.495 [2024-11-18 12:11:17.256057] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:42:54.028 [2024-11-18 12:11:19.599741] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening 
on 127.0.0.1 port 4261 *** 00:42:55.939 [2024-11-18 12:11:21.638305] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:42:57.316 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:42:57.316 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:42:57.316 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:42:57.316 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:42:57.316 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:42:57.316 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:42:57.316 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:42:57.316 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:42:57.316 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:42:57.316 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:42:57.316 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:42:57.316 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:42:57.316 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:42:57.316 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:42:57.316 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 
'nqn.2014-08.org.spdk:cnode2', True] 00:42:57.316 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:42:57.316 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:42:57.316 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:42:57.316 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:42:57.316 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:42:57.316 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:42:57.316 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:42:57.316 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:42:57.316 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:42:57.316 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:42:57.316 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:42:57.316 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:42:57.316 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:42:57.575 12:11:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:42:57.575 12:11:23 spdkcli_nvmf_tcp -- 
common/autotest_common.sh@732 -- # xtrace_disable 00:42:57.575 12:11:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:57.575 12:11:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:42:57.575 12:11:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:57.575 12:11:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:57.575 12:11:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:42:57.575 12:11:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:42:58.142 12:11:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:42:58.142 12:11:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:42:58.142 12:11:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:42:58.142 12:11:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:58.142 12:11:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:58.142 12:11:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:42:58.142 12:11:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:58.142 12:11:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:58.142 12:11:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:42:58.142 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:42:58.142 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts 
delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:42:58.142 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:42:58.142 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:42:58.142 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:42:58.142 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:42:58.142 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:42:58.142 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:42:58.142 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:42:58.142 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:42:58.142 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:42:58.142 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:42:58.142 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:42:58.142 ' 00:43:04.715 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:43:04.715 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:43:04.715 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:43:04.715 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:43:04.715 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:43:04.715 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:43:04.715 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:43:04.715 
Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:43:04.715 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:43:04.715 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:43:04.715 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:43:04.715 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:43:04.715 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:43:04.715 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:43:04.715 12:11:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:43:04.715 12:11:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:43:04.715 12:11:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:43:04.715 12:11:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 3199274 00:43:04.715 12:11:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 3199274 ']' 00:43:04.715 12:11:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 3199274 00:43:04.715 12:11:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:43:04.715 12:11:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:04.715 12:11:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3199274 00:43:04.715 12:11:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:43:04.715 12:11:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:43:04.715 12:11:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3199274' 00:43:04.715 killing process with pid 3199274 00:43:04.715 12:11:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 3199274 00:43:04.716 12:11:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 3199274 00:43:05.283 12:11:30 
spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:43:05.283 12:11:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:43:05.283 12:11:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 3199274 ']' 00:43:05.283 12:11:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 3199274 00:43:05.283 12:11:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 3199274 ']' 00:43:05.283 12:11:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 3199274 00:43:05.283 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3199274) - No such process 00:43:05.283 12:11:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 3199274 is not found' 00:43:05.283 Process with pid 3199274 is not found 00:43:05.283 12:11:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:43:05.283 12:11:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:43:05.283 12:11:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:43:05.283 00:43:05.283 real 0m19.039s 00:43:05.283 user 0m39.924s 00:43:05.283 sys 0m0.999s 00:43:05.283 12:11:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:05.283 12:11:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:43:05.283 ************************************ 00:43:05.283 END TEST spdkcli_nvmf_tcp 00:43:05.283 ************************************ 00:43:05.283 12:11:30 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:43:05.283 12:11:30 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:43:05.283 12:11:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:43:05.283 12:11:30 -- common/autotest_common.sh@10 -- # set +x 00:43:05.283 ************************************ 00:43:05.283 START TEST nvmf_identify_passthru 00:43:05.283 ************************************ 00:43:05.283 12:11:30 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:43:05.283 * Looking for test storage... 00:43:05.283 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:43:05.283 12:11:31 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:43:05.283 12:11:31 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lcov --version 00:43:05.283 12:11:31 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:43:05.283 12:11:31 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:43:05.283 12:11:31 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:05.283 12:11:31 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:05.283 12:11:31 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:05.283 12:11:31 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:43:05.283 12:11:31 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:43:05.283 12:11:31 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:43:05.283 12:11:31 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:43:05.283 12:11:31 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:43:05.283 12:11:31 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:43:05.283 12:11:31 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:43:05.283 12:11:31 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:05.283 12:11:31 nvmf_identify_passthru -- scripts/common.sh@344 -- # 
case "$op" in 00:43:05.284 12:11:31 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:43:05.284 12:11:31 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:05.284 12:11:31 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:43:05.284 12:11:31 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:43:05.284 12:11:31 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:43:05.284 12:11:31 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:05.284 12:11:31 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:43:05.284 12:11:31 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:43:05.284 12:11:31 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:43:05.284 12:11:31 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:43:05.284 12:11:31 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:05.284 12:11:31 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:43:05.284 12:11:31 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:43:05.284 12:11:31 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:05.284 12:11:31 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:05.284 12:11:31 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:43:05.284 12:11:31 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:05.284 12:11:31 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:43:05.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:05.284 --rc genhtml_branch_coverage=1 00:43:05.284 --rc genhtml_function_coverage=1 00:43:05.284 --rc genhtml_legend=1 00:43:05.284 --rc geninfo_all_blocks=1 00:43:05.284 --rc geninfo_unexecuted_blocks=1 00:43:05.284 
00:43:05.284 ' 00:43:05.284 12:11:31 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:43:05.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:05.284 --rc genhtml_branch_coverage=1 00:43:05.284 --rc genhtml_function_coverage=1 00:43:05.284 --rc genhtml_legend=1 00:43:05.284 --rc geninfo_all_blocks=1 00:43:05.284 --rc geninfo_unexecuted_blocks=1 00:43:05.284 00:43:05.284 ' 00:43:05.284 12:11:31 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:43:05.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:05.284 --rc genhtml_branch_coverage=1 00:43:05.284 --rc genhtml_function_coverage=1 00:43:05.284 --rc genhtml_legend=1 00:43:05.284 --rc geninfo_all_blocks=1 00:43:05.284 --rc geninfo_unexecuted_blocks=1 00:43:05.284 00:43:05.284 ' 00:43:05.284 12:11:31 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:43:05.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:05.284 --rc genhtml_branch_coverage=1 00:43:05.284 --rc genhtml_function_coverage=1 00:43:05.284 --rc genhtml_legend=1 00:43:05.284 --rc geninfo_all_blocks=1 00:43:05.284 --rc geninfo_unexecuted_blocks=1 00:43:05.284 00:43:05.284 ' 00:43:05.284 12:11:31 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:05.284 12:11:31 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:43:05.284 12:11:31 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:05.284 12:11:31 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:05.284 12:11:31 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:05.284 12:11:31 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:05.284 12:11:31 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:05.284 12:11:31 nvmf_identify_passthru -- 
nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:05.284 12:11:31 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:05.284 12:11:31 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:05.284 12:11:31 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:05.284 12:11:31 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:05.284 12:11:31 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:43:05.284 12:11:31 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:43:05.284 12:11:31 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:05.284 12:11:31 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:05.284 12:11:31 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:05.284 12:11:31 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:05.284 12:11:31 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:05.284 12:11:31 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:43:05.284 12:11:31 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:05.284 12:11:31 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:05.284 12:11:31 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:05.284 12:11:31 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:05.284 12:11:31 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:05.284 12:11:31 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:05.284 12:11:31 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:43:05.284 12:11:31 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:05.284 12:11:31 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:43:05.284 12:11:31 
nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:05.284 12:11:31 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:05.284 12:11:31 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:05.284 12:11:31 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:05.284 12:11:31 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:05.284 12:11:31 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:43:05.284 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:43:05.284 12:11:31 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:05.284 12:11:31 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:05.284 12:11:31 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:05.284 12:11:31 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:05.284 12:11:31 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:43:05.284 12:11:31 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:05.284 12:11:31 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:05.284 12:11:31 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:05.284 12:11:31 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:05.285 12:11:31 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:05.285 12:11:31 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:05.285 12:11:31 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:43:05.285 12:11:31 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:05.285 12:11:31 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:43:05.285 12:11:31 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:43:05.285 12:11:31 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:05.285 12:11:31 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:43:05.285 12:11:31 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:43:05.285 12:11:31 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:43:05.285 12:11:31 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:05.285 12:11:31 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:43:05.285 12:11:31 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:05.285 12:11:31 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:43:05.285 12:11:31 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:43:05.285 12:11:31 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:43:05.285 12:11:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:07.191 12:11:33 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:07.191 12:11:33 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:43:07.191 12:11:33 nvmf_identify_passthru -- nvmf/common.sh@315 
-- # local -a pci_devs 00:43:07.191 12:11:33 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:43:07.191 12:11:33 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:43:07.191 12:11:33 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:43:07.191 12:11:33 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:43:07.191 12:11:33 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:43:07.191 12:11:33 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:43:07.191 12:11:33 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:43:07.191 12:11:33 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:43:07.191 12:11:33 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:43:07.191 12:11:33 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:43:07.191 12:11:33 nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:43:07.191 12:11:33 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:43:07.191 12:11:33 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:07.192 12:11:33 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:07.192 12:11:33 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:07.192 12:11:33 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:07.192 12:11:33 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:07.192 12:11:33 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:07.192 12:11:33 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:07.192 12:11:33 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:43:07.192 
12:11:33 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:07.192 12:11:33 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:43:07.192 12:11:33 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:07.192 12:11:33 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:43:07.192 12:11:33 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:43:07.192 12:11:33 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:43:07.192 12:11:33 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:43:07.192 12:11:33 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:43:07.192 12:11:33 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:43:07.192 12:11:33 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:43:07.192 12:11:33 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:07.192 12:11:33 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:43:07.192 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:43:07.192 12:11:33 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:07.192 12:11:33 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:07.192 12:11:33 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:07.192 12:11:33 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:07.192 12:11:33 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:07.192 12:11:33 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:07.192 12:11:33 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:43:07.192 Found 0000:0a:00.1 
(0x8086 - 0x159b) 00:43:07.192 12:11:33 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:07.192 12:11:33 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:07.192 12:11:33 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:07.192 12:11:33 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:07.192 12:11:33 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:07.192 12:11:33 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:43:07.192 12:11:33 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:43:07.192 12:11:33 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:43:07.192 12:11:33 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:07.192 12:11:33 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:07.192 12:11:33 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:07.192 12:11:33 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:07.192 12:11:33 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:07.192 12:11:33 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:07.192 12:11:33 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:07.192 12:11:33 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:43:07.192 Found net devices under 0000:0a:00.0: cvl_0_0 00:43:07.192 12:11:33 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:07.192 12:11:33 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:07.192 12:11:33 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:07.192 12:11:33 
nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:07.192 12:11:33 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:07.192 12:11:33 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:07.192 12:11:33 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:07.192 12:11:33 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:07.192 12:11:33 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:43:07.192 Found net devices under 0000:0a:00.1: cvl_0_1 00:43:07.192 12:11:33 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:07.192 12:11:33 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:43:07.192 12:11:33 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:43:07.192 12:11:33 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:43:07.192 12:11:33 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:43:07.192 12:11:33 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:43:07.192 12:11:33 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:07.192 12:11:33 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:07.192 12:11:33 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:07.192 12:11:33 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:43:07.192 12:11:33 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:43:07.192 12:11:33 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:43:07.192 12:11:33 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:43:07.192 12:11:33 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:43:07.192 
12:11:33 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:43:07.192 12:11:33 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:43:07.192 12:11:33 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:07.192 12:11:33 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:43:07.192 12:11:33 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:43:07.192 12:11:33 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:43:07.192 12:11:33 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:43:07.451 12:11:33 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:43:07.451 12:11:33 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:43:07.451 12:11:33 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:43:07.451 12:11:33 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:43:07.451 12:11:33 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:43:07.451 12:11:33 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:43:07.451 12:11:33 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:43:07.451 12:11:33 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:43:07.451 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:43:07.451 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.147 ms 00:43:07.451 00:43:07.451 --- 10.0.0.2 ping statistics --- 00:43:07.451 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:07.451 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:43:07.451 12:11:33 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:43:07.451 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:43:07.451 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:43:07.451 00:43:07.451 --- 10.0.0.1 ping statistics --- 00:43:07.451 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:07.451 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:43:07.451 12:11:33 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:07.451 12:11:33 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:43:07.451 12:11:33 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:43:07.451 12:11:33 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:07.451 12:11:33 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:43:07.451 12:11:33 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:43:07.451 12:11:33 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:07.451 12:11:33 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:43:07.451 12:11:33 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:43:07.451 12:11:33 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:43:07.451 12:11:33 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:43:07.451 12:11:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:07.451 12:11:33 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:43:07.451 
12:11:33 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:43:07.451 12:11:33 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:43:07.451 12:11:33 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:43:07.451 12:11:33 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:43:07.451 12:11:33 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:43:07.451 12:11:33 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:43:07.451 12:11:33 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:43:07.451 12:11:33 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:43:07.451 12:11:33 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:43:07.710 12:11:33 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:43:07.710 12:11:33 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:43:07.710 12:11:33 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:88:00.0 00:43:07.710 12:11:33 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:43:07.710 12:11:33 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:43:07.710 12:11:33 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:43:07.710 12:11:33 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:43:07.710 12:11:33 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:43:11.909 12:11:37 nvmf_identify_passthru -- 
target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:43:11.909 12:11:37 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:43:11.909 12:11:37 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:43:11.909 12:11:37 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:43:17.253 12:11:42 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:43:17.253 12:11:42 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:43:17.253 12:11:42 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:43:17.253 12:11:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:17.254 12:11:42 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:43:17.254 12:11:42 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:43:17.254 12:11:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:17.254 12:11:42 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=3204274 00:43:17.254 12:11:42 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:43:17.254 12:11:42 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:43:17.254 12:11:42 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 3204274 00:43:17.254 12:11:42 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 3204274 ']' 00:43:17.254 12:11:42 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:43:17.254 12:11:42 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:17.254 12:11:42 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:17.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:17.254 12:11:42 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:17.254 12:11:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:17.254 [2024-11-18 12:11:42.234313] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:43:17.254 [2024-11-18 12:11:42.234447] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:17.254 [2024-11-18 12:11:42.379663] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:43:17.254 [2024-11-18 12:11:42.517045] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:17.254 [2024-11-18 12:11:42.517117] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:17.254 [2024-11-18 12:11:42.517142] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:17.254 [2024-11-18 12:11:42.517168] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:17.254 [2024-11-18 12:11:42.517187] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:43:17.254 [2024-11-18 12:11:42.519960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:43:17.254 [2024-11-18 12:11:42.520031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:43:17.254 [2024-11-18 12:11:42.520130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:17.254 [2024-11-18 12:11:42.520136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:43:17.512 12:11:43 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:17.512 12:11:43 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:43:17.512 12:11:43 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:43:17.512 12:11:43 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:17.512 12:11:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:17.512 INFO: Log level set to 20 00:43:17.512 INFO: Requests: 00:43:17.512 { 00:43:17.512 "jsonrpc": "2.0", 00:43:17.512 "method": "nvmf_set_config", 00:43:17.512 "id": 1, 00:43:17.512 "params": { 00:43:17.512 "admin_cmd_passthru": { 00:43:17.512 "identify_ctrlr": true 00:43:17.512 } 00:43:17.512 } 00:43:17.512 } 00:43:17.512 00:43:17.512 INFO: response: 00:43:17.512 { 00:43:17.512 "jsonrpc": "2.0", 00:43:17.512 "id": 1, 00:43:17.512 "result": true 00:43:17.512 } 00:43:17.512 00:43:17.512 12:11:43 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:17.512 12:11:43 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:43:17.512 12:11:43 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:17.512 12:11:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:17.512 INFO: Setting log level to 20 00:43:17.512 INFO: Setting log level to 20 00:43:17.512 INFO: Log level set to 20 00:43:17.512 INFO: Log level set to 20 00:43:17.512 
INFO: Requests: 00:43:17.512 { 00:43:17.512 "jsonrpc": "2.0", 00:43:17.512 "method": "framework_start_init", 00:43:17.512 "id": 1 00:43:17.512 } 00:43:17.512 00:43:17.512 INFO: Requests: 00:43:17.512 { 00:43:17.512 "jsonrpc": "2.0", 00:43:17.512 "method": "framework_start_init", 00:43:17.512 "id": 1 00:43:17.512 } 00:43:17.512 00:43:17.771 [2024-11-18 12:11:43.532408] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:43:17.771 INFO: response: 00:43:17.771 { 00:43:17.771 "jsonrpc": "2.0", 00:43:17.771 "id": 1, 00:43:17.771 "result": true 00:43:17.771 } 00:43:17.771 00:43:17.771 INFO: response: 00:43:17.771 { 00:43:17.771 "jsonrpc": "2.0", 00:43:17.771 "id": 1, 00:43:17.771 "result": true 00:43:17.771 } 00:43:17.771 00:43:17.771 12:11:43 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:17.771 12:11:43 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:43:17.771 12:11:43 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:17.771 12:11:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:17.771 INFO: Setting log level to 40 00:43:17.771 INFO: Setting log level to 40 00:43:17.771 INFO: Setting log level to 40 00:43:17.771 [2024-11-18 12:11:43.545308] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:17.771 12:11:43 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:17.771 12:11:43 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:43:17.771 12:11:43 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:43:17.771 12:11:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:17.771 12:11:43 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:43:17.771 12:11:43 
nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:17.771 12:11:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:21.067 Nvme0n1 00:43:21.067 12:11:46 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:21.067 12:11:46 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:43:21.067 12:11:46 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:21.067 12:11:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:21.067 12:11:46 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:21.067 12:11:46 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:43:21.067 12:11:46 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:21.067 12:11:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:21.067 12:11:46 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:21.067 12:11:46 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:21.067 12:11:46 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:21.067 12:11:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:21.067 [2024-11-18 12:11:46.500854] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:21.067 12:11:46 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:21.067 12:11:46 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:43:21.067 12:11:46 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:21.067 12:11:46 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:21.067 [ 00:43:21.067 { 00:43:21.067 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:43:21.067 "subtype": "Discovery", 00:43:21.067 "listen_addresses": [], 00:43:21.067 "allow_any_host": true, 00:43:21.067 "hosts": [] 00:43:21.067 }, 00:43:21.067 { 00:43:21.067 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:43:21.067 "subtype": "NVMe", 00:43:21.067 "listen_addresses": [ 00:43:21.067 { 00:43:21.067 "trtype": "TCP", 00:43:21.067 "adrfam": "IPv4", 00:43:21.067 "traddr": "10.0.0.2", 00:43:21.067 "trsvcid": "4420" 00:43:21.067 } 00:43:21.067 ], 00:43:21.067 "allow_any_host": true, 00:43:21.067 "hosts": [], 00:43:21.067 "serial_number": "SPDK00000000000001", 00:43:21.067 "model_number": "SPDK bdev Controller", 00:43:21.067 "max_namespaces": 1, 00:43:21.067 "min_cntlid": 1, 00:43:21.067 "max_cntlid": 65519, 00:43:21.067 "namespaces": [ 00:43:21.067 { 00:43:21.067 "nsid": 1, 00:43:21.067 "bdev_name": "Nvme0n1", 00:43:21.067 "name": "Nvme0n1", 00:43:21.067 "nguid": "E7C3ABDE41E6431AA30C17652DF668A8", 00:43:21.067 "uuid": "e7c3abde-41e6-431a-a30c-17652df668a8" 00:43:21.067 } 00:43:21.067 ] 00:43:21.067 } 00:43:21.067 ] 00:43:21.067 12:11:46 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:21.067 12:11:46 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:43:21.067 12:11:46 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:43:21.067 12:11:46 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:43:21.067 12:11:46 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:43:21.067 12:11:46 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:43:21.067 12:11:46 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:43:21.067 12:11:46 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:43:21.634 12:11:47 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:43:21.634 12:11:47 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:43:21.634 12:11:47 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:43:21.634 12:11:47 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:43:21.634 12:11:47 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:21.634 12:11:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:21.634 12:11:47 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:21.634 12:11:47 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:43:21.634 12:11:47 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:43:21.634 12:11:47 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:43:21.634 12:11:47 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:43:21.634 12:11:47 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:43:21.634 12:11:47 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:43:21.634 12:11:47 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:43:21.634 12:11:47 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:43:21.634 rmmod nvme_tcp 00:43:21.634 rmmod nvme_fabrics 00:43:21.634 rmmod nvme_keyring 00:43:21.634 12:11:47 
nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:43:21.634 12:11:47 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:43:21.634 12:11:47 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:43:21.634 12:11:47 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 3204274 ']' 00:43:21.634 12:11:47 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 3204274 00:43:21.634 12:11:47 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 3204274 ']' 00:43:21.634 12:11:47 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 3204274 00:43:21.634 12:11:47 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:43:21.634 12:11:47 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:21.634 12:11:47 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3204274 00:43:21.634 12:11:47 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:43:21.634 12:11:47 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:43:21.634 12:11:47 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3204274' 00:43:21.634 killing process with pid 3204274 00:43:21.634 12:11:47 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 3204274 00:43:21.634 12:11:47 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 3204274 00:43:24.165 12:11:49 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:43:24.165 12:11:49 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:43:24.165 12:11:49 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:43:24.165 12:11:49 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:43:24.165 12:11:49 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:43:24.165 12:11:49 nvmf_identify_passthru -- 
nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:43:24.165 12:11:49 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:43:24.166 12:11:49 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:43:24.166 12:11:49 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:43:24.166 12:11:49 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:24.166 12:11:49 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:43:24.166 12:11:49 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:26.072 12:11:51 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:43:26.072 00:43:26.072 real 0m20.858s 00:43:26.072 user 0m34.044s 00:43:26.072 sys 0m3.526s 00:43:26.072 12:11:51 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:26.072 12:11:51 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:26.072 ************************************ 00:43:26.072 END TEST nvmf_identify_passthru 00:43:26.072 ************************************ 00:43:26.072 12:11:51 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:43:26.072 12:11:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:43:26.072 12:11:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:26.072 12:11:51 -- common/autotest_common.sh@10 -- # set +x 00:43:26.072 ************************************ 00:43:26.072 START TEST nvmf_dif 00:43:26.072 ************************************ 00:43:26.072 12:11:51 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:43:26.072 * Looking for test storage... 
00:43:26.072 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:43:26.072 12:11:51 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:43:26.072 12:11:51 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:43:26.072 12:11:51 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:43:26.333 12:11:52 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:43:26.333 12:11:52 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:26.333 12:11:52 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:26.333 12:11:52 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:26.333 12:11:52 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:43:26.333 12:11:52 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:43:26.333 12:11:52 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:43:26.333 12:11:52 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:43:26.333 12:11:52 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:43:26.333 12:11:52 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:43:26.333 12:11:52 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:43:26.333 12:11:52 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:26.333 12:11:52 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:43:26.333 12:11:52 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:43:26.333 12:11:52 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:26.333 12:11:52 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:43:26.333 12:11:52 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:43:26.333 12:11:52 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:43:26.333 12:11:52 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:26.333 12:11:52 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:43:26.333 12:11:52 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:43:26.333 12:11:52 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:43:26.333 12:11:52 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:43:26.333 12:11:52 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:26.333 12:11:52 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:43:26.333 12:11:52 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:43:26.333 12:11:52 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:26.333 12:11:52 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:26.333 12:11:52 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:43:26.333 12:11:52 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:26.333 12:11:52 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:43:26.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:26.333 --rc genhtml_branch_coverage=1 00:43:26.333 --rc genhtml_function_coverage=1 00:43:26.333 --rc genhtml_legend=1 00:43:26.333 --rc geninfo_all_blocks=1 00:43:26.333 --rc geninfo_unexecuted_blocks=1 00:43:26.333 00:43:26.333 ' 00:43:26.333 12:11:52 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:43:26.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:26.333 --rc genhtml_branch_coverage=1 00:43:26.333 --rc genhtml_function_coverage=1 00:43:26.333 --rc genhtml_legend=1 00:43:26.333 --rc geninfo_all_blocks=1 00:43:26.333 --rc geninfo_unexecuted_blocks=1 00:43:26.333 00:43:26.333 ' 00:43:26.333 12:11:52 nvmf_dif -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:43:26.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:26.333 --rc genhtml_branch_coverage=1 00:43:26.333 --rc genhtml_function_coverage=1 00:43:26.333 --rc genhtml_legend=1 00:43:26.333 --rc geninfo_all_blocks=1 00:43:26.333 --rc geninfo_unexecuted_blocks=1 00:43:26.333 00:43:26.333 ' 00:43:26.333 12:11:52 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:43:26.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:26.333 --rc genhtml_branch_coverage=1 00:43:26.333 --rc genhtml_function_coverage=1 00:43:26.333 --rc genhtml_legend=1 00:43:26.333 --rc geninfo_all_blocks=1 00:43:26.333 --rc geninfo_unexecuted_blocks=1 00:43:26.333 00:43:26.333 ' 00:43:26.333 12:11:52 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:26.333 12:11:52 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:43:26.333 12:11:52 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:26.333 12:11:52 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:26.333 12:11:52 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:26.333 12:11:52 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:26.333 12:11:52 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:26.333 12:11:52 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:26.333 12:11:52 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:26.333 12:11:52 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:26.333 12:11:52 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:26.333 12:11:52 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:26.333 12:11:52 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:43:26.334 12:11:52 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:43:26.334 12:11:52 nvmf_dif -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:26.334 12:11:52 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:26.334 12:11:52 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:26.334 12:11:52 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:26.334 12:11:52 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:26.334 12:11:52 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:43:26.334 12:11:52 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:26.334 12:11:52 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:26.334 12:11:52 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:26.334 12:11:52 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:26.334 12:11:52 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:26.334 12:11:52 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:26.334 12:11:52 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:43:26.334 12:11:52 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:26.334 12:11:52 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:43:26.334 12:11:52 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:26.334 12:11:52 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:26.334 12:11:52 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:26.334 12:11:52 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:26.334 12:11:52 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:26.334 12:11:52 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:43:26.334 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:43:26.334 12:11:52 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:26.334 12:11:52 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:26.334 12:11:52 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:26.334 12:11:52 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:43:26.334 12:11:52 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 
00:43:26.334 12:11:52 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:43:26.334 12:11:52 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:43:26.334 12:11:52 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:43:26.334 12:11:52 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:43:26.334 12:11:52 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:26.334 12:11:52 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:43:26.334 12:11:52 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:43:26.334 12:11:52 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:43:26.334 12:11:52 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:26.334 12:11:52 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:43:26.334 12:11:52 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:26.334 12:11:52 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:43:26.334 12:11:52 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:43:26.334 12:11:52 nvmf_dif -- nvmf/common.sh@309 -- # xtrace_disable 00:43:26.334 12:11:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:28.242 12:11:54 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:28.242 12:11:54 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:43:28.242 12:11:54 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:43:28.242 12:11:54 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:43:28.242 12:11:54 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:43:28.242 12:11:54 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:43:28.242 12:11:54 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:43:28.242 12:11:54 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:43:28.242 12:11:54 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:43:28.242 12:11:54 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:43:28.242 12:11:54 nvmf_dif 
-- nvmf/common.sh@320 -- # local -ga e810 00:43:28.242 12:11:54 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:43:28.242 12:11:54 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:43:28.242 12:11:54 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:43:28.242 12:11:54 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:43:28.242 12:11:54 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:28.242 12:11:54 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:28.242 12:11:54 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:28.242 12:11:54 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:28.242 12:11:54 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:28.242 12:11:54 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:28.242 12:11:54 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:28.242 12:11:54 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:43:28.242 12:11:54 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:28.242 12:11:54 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:43:28.242 12:11:54 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:28.242 12:11:54 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:43:28.242 12:11:54 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:43:28.242 12:11:54 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:43:28.242 12:11:54 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:43:28.242 12:11:54 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:43:28.242 12:11:54 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:43:28.242 12:11:54 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 
)) 00:43:28.242 12:11:54 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:28.242 12:11:54 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:43:28.242 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:43:28.242 12:11:54 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:28.242 12:11:54 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:28.242 12:11:54 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:28.242 12:11:54 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:28.242 12:11:54 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:28.242 12:11:54 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:28.242 12:11:54 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:43:28.242 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:43:28.242 12:11:54 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:28.242 12:11:54 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:28.242 12:11:54 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:28.242 12:11:54 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:28.242 12:11:54 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:28.242 12:11:54 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:43:28.242 12:11:54 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:43:28.242 12:11:54 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:43:28.242 12:11:54 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:28.242 12:11:54 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:28.242 12:11:54 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:28.242 12:11:54 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:28.242 12:11:54 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:28.242 12:11:54 nvmf_dif -- 
nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:28.242 12:11:54 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:28.242 12:11:54 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:43:28.242 Found net devices under 0000:0a:00.0: cvl_0_0 00:43:28.242 12:11:54 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:28.242 12:11:54 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:28.242 12:11:54 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:28.242 12:11:54 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:28.242 12:11:54 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:28.242 12:11:54 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:28.242 12:11:54 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:28.242 12:11:54 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:28.242 12:11:54 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:43:28.243 Found net devices under 0000:0a:00.1: cvl_0_1 00:43:28.243 12:11:54 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:28.243 12:11:54 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:43:28.243 12:11:54 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:43:28.243 12:11:54 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:43:28.243 12:11:54 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:43:28.243 12:11:54 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:43:28.243 12:11:54 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:28.243 12:11:54 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:28.243 12:11:54 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:28.243 12:11:54 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:43:28.243 
12:11:54 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:43:28.243 12:11:54 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:43:28.243 12:11:54 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:43:28.243 12:11:54 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:43:28.243 12:11:54 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:43:28.243 12:11:54 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:43:28.243 12:11:54 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:28.243 12:11:54 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:43:28.243 12:11:54 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:43:28.243 12:11:54 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:43:28.243 12:11:54 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:43:28.502 12:11:54 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:43:28.502 12:11:54 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:43:28.502 12:11:54 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:43:28.502 12:11:54 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:43:28.502 12:11:54 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:43:28.502 12:11:54 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:43:28.502 12:11:54 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:43:28.502 12:11:54 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:43:28.502 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:43:28.502 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:43:28.502 00:43:28.502 --- 10.0.0.2 ping statistics --- 00:43:28.502 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:28.502 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:43:28.502 12:11:54 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:43:28.502 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:43:28.502 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:43:28.502 00:43:28.502 --- 10.0.0.1 ping statistics --- 00:43:28.502 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:28.502 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:43:28.502 12:11:54 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:28.502 12:11:54 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:43:28.502 12:11:54 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:43:28.502 12:11:54 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:43:29.878 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:43:29.878 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:43:29.878 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:43:29.878 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:43:29.878 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:43:29.878 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:43:29.878 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:43:29.878 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:43:29.878 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:43:29.878 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:43:29.878 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:43:29.878 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:43:29.878 0000:80:04.4 (8086 0e24): Already 
using the vfio-pci driver 00:43:29.878 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:43:29.878 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:43:29.878 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:43:29.878 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:43:29.878 12:11:55 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:29.879 12:11:55 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:43:29.879 12:11:55 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:43:29.879 12:11:55 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:29.879 12:11:55 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:43:29.879 12:11:55 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:43:29.879 12:11:55 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:43:29.879 12:11:55 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:43:29.879 12:11:55 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:43:29.879 12:11:55 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:43:29.879 12:11:55 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:29.879 12:11:55 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=3207698 00:43:29.879 12:11:55 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:43:29.879 12:11:55 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 3207698 00:43:29.879 12:11:55 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 3207698 ']' 00:43:29.879 12:11:55 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:29.879 12:11:55 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:29.879 12:11:55 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:43:29.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:29.879 12:11:55 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:29.879 12:11:55 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:29.879 [2024-11-18 12:11:55.753057] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:43:29.879 [2024-11-18 12:11:55.753210] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:30.138 [2024-11-18 12:11:55.908940] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:30.398 [2024-11-18 12:11:56.050706] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:30.398 [2024-11-18 12:11:56.050801] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:30.398 [2024-11-18 12:11:56.050827] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:30.398 [2024-11-18 12:11:56.050853] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:30.398 [2024-11-18 12:11:56.050873] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:43:30.398 [2024-11-18 12:11:56.052571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:30.965 12:11:56 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:30.965 12:11:56 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:43:30.965 12:11:56 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:43:30.965 12:11:56 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:43:30.965 12:11:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:30.965 12:11:56 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:30.965 12:11:56 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:43:30.965 12:11:56 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:43:30.965 12:11:56 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:30.965 12:11:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:30.965 [2024-11-18 12:11:56.785620] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:30.965 12:11:56 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:30.965 12:11:56 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:43:30.965 12:11:56 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:43:30.965 12:11:56 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:30.965 12:11:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:30.965 ************************************ 00:43:30.965 START TEST fio_dif_1_default 00:43:30.965 ************************************ 00:43:30.965 12:11:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:43:30.965 12:11:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:43:30.965 12:11:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:43:30.965 12:11:56 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:43:30.965 12:11:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:43:30.965 12:11:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:43:30.965 12:11:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:43:30.965 12:11:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:30.965 12:11:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:43:30.965 bdev_null0 00:43:30.966 12:11:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:30.966 12:11:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:43:30.966 12:11:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:30.966 12:11:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:43:30.966 12:11:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:30.966 12:11:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:43:30.966 12:11:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:30.966 12:11:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:43:30.966 12:11:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:30.966 12:11:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:43:30.966 12:11:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:30.966 12:11:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:43:30.966 [2024-11-18 12:11:56.845938] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:30.966 12:11:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:30.966 12:11:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:43:30.966 12:11:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:43:30.966 12:11:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:43:31.224 12:11:56 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:43:31.224 12:11:56 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:43:31.224 12:11:56 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:31.224 12:11:56 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:31.224 { 00:43:31.224 "params": { 00:43:31.224 "name": "Nvme$subsystem", 00:43:31.224 "trtype": "$TEST_TRANSPORT", 00:43:31.224 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:31.224 "adrfam": "ipv4", 00:43:31.224 "trsvcid": "$NVMF_PORT", 00:43:31.224 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:31.224 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:31.224 "hdgst": ${hdgst:-false}, 00:43:31.224 "ddgst": ${ddgst:-false} 00:43:31.224 }, 00:43:31.224 "method": "bdev_nvme_attach_controller" 00:43:31.224 } 00:43:31.224 EOF 00:43:31.224 )") 00:43:31.224 12:11:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:31.224 12:11:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:43:31.224 12:11:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:43:31.224 12:11:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:31.224 12:11:56 
nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:43:31.224 12:11:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:43:31.224 12:11:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:43:31.224 12:11:56 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:43:31.224 12:11:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:43:31.224 12:11:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:31.224 12:11:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:43:31.224 12:11:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:43:31.224 12:11:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:43:31.224 12:11:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:43:31.224 12:11:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:43:31.224 12:11:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:31.224 12:11:56 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:43:31.224 12:11:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:43:31.224 12:11:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:43:31.224 12:11:56 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:43:31.224 12:11:56 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:43:31.224 "params": { 00:43:31.224 "name": "Nvme0", 00:43:31.224 "trtype": "tcp", 00:43:31.224 "traddr": "10.0.0.2", 00:43:31.224 "adrfam": "ipv4", 00:43:31.224 "trsvcid": "4420", 00:43:31.224 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:31.224 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:31.224 "hdgst": false, 00:43:31.224 "ddgst": false 00:43:31.224 }, 00:43:31.224 "method": "bdev_nvme_attach_controller" 00:43:31.224 }' 00:43:31.224 12:11:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:43:31.224 12:11:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:43:31.224 12:11:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1351 -- # break 00:43:31.224 12:11:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:43:31.224 12:11:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:31.484 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:43:31.484 fio-3.35 00:43:31.484 Starting 1 thread 00:43:43.750 00:43:43.750 filename0: (groupid=0, jobs=1): err= 0: pid=3208166: Mon Nov 18 12:12:08 2024 00:43:43.750 read: IOPS=188, BW=755KiB/s (773kB/s)(7552KiB/10007msec) 00:43:43.750 slat (nsec): min=4921, max=95049, avg=13832.91, stdev=6094.33 00:43:43.750 clat (usec): min=670, max=43469, avg=21158.13, 
stdev=20209.34 00:43:43.750 lat (usec): min=680, max=43499, avg=21171.96, stdev=20208.91 00:43:43.750 clat percentiles (usec): 00:43:43.750 | 1.00th=[ 701], 5.00th=[ 734], 10.00th=[ 750], 20.00th=[ 775], 00:43:43.750 | 30.00th=[ 791], 40.00th=[ 807], 50.00th=[41157], 60.00th=[41157], 00:43:43.750 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:43:43.750 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:43:43.750 | 99.99th=[43254] 00:43:43.750 bw ( KiB/s): min= 672, max= 768, per=99.78%, avg=753.60, stdev=30.22, samples=20 00:43:43.750 iops : min= 168, max= 192, avg=188.40, stdev= 7.56, samples=20 00:43:43.751 lat (usec) : 750=9.00%, 1000=40.41% 00:43:43.751 lat (msec) : 2=0.16%, 50=50.42% 00:43:43.751 cpu : usr=92.16%, sys=7.33%, ctx=15, majf=0, minf=1637 00:43:43.751 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:43.751 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:43.751 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:43.751 issued rwts: total=1888,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:43.751 latency : target=0, window=0, percentile=100.00%, depth=4 00:43:43.751 00:43:43.751 Run status group 0 (all jobs): 00:43:43.751 READ: bw=755KiB/s (773kB/s), 755KiB/s-755KiB/s (773kB/s-773kB/s), io=7552KiB (7733kB), run=10007-10007msec 00:43:43.751 ----------------------------------------------------- 00:43:43.751 Suppressions used: 00:43:43.751 count bytes template 00:43:43.751 1 8 /usr/src/fio/parse.c 00:43:43.751 1 8 libtcmalloc_minimal.so 00:43:43.751 1 904 libcrypto.so 00:43:43.751 ----------------------------------------------------- 00:43:43.751 00:43:43.751 12:12:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:43:43.751 12:12:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:43:43.751 12:12:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:43:43.751 
12:12:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:43:43.751 12:12:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:43:43.751 12:12:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:43:43.751 12:12:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:43.751 12:12:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:43:43.751 12:12:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:43.751 12:12:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:43:43.751 12:12:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:43.751 12:12:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:43:43.751 12:12:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:43.751 00:43:43.751 real 0m12.551s 00:43:43.751 user 0m11.583s 00:43:43.751 sys 0m1.188s 00:43:43.751 12:12:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:43.751 12:12:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:43:43.751 ************************************ 00:43:43.751 END TEST fio_dif_1_default 00:43:43.751 ************************************ 00:43:43.751 12:12:09 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:43:43.751 12:12:09 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:43:43.751 12:12:09 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:43.751 12:12:09 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:43.751 ************************************ 00:43:43.751 START TEST fio_dif_1_multi_subsystems 00:43:43.751 ************************************ 00:43:43.751 12:12:09 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:43:43.751 12:12:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:43:43.751 12:12:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:43:43.751 12:12:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:43:43.751 12:12:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:43:43.751 12:12:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:43:43.751 12:12:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:43:43.751 12:12:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:43:43.751 12:12:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:43.751 12:12:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:43.751 bdev_null0 00:43:43.751 12:12:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:43.751 12:12:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:43:43.751 12:12:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:43.751 12:12:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:43.751 12:12:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:43.751 12:12:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:43:43.751 12:12:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:43.751 12:12:09 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:43.751 12:12:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:43.751 12:12:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:43:43.751 12:12:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:43.751 12:12:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:43.751 [2024-11-18 12:12:09.444337] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:43.751 12:12:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:43.751 12:12:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:43:43.751 12:12:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:43:43.751 12:12:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:43:43.751 12:12:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:43:43.751 12:12:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:43.751 12:12:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:43.751 bdev_null1 00:43:43.751 12:12:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:43.751 12:12:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:43:43.751 12:12:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:43.751 12:12:09 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:43:43.751 12:12:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:43.751 12:12:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:43:43.751 12:12:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:43.751 12:12:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:43.751 12:12:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:43.751 12:12:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:43.751 12:12:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:43.751 12:12:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:43.751 12:12:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:43.751 12:12:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:43:43.751 12:12:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:43:43.751 12:12:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:43:43.751 12:12:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:43:43.751 12:12:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:43:43.751 12:12:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:43.751 12:12:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:43.751 12:12:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat 
<<-EOF 00:43:43.751 { 00:43:43.751 "params": { 00:43:43.751 "name": "Nvme$subsystem", 00:43:43.751 "trtype": "$TEST_TRANSPORT", 00:43:43.751 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:43.751 "adrfam": "ipv4", 00:43:43.751 "trsvcid": "$NVMF_PORT", 00:43:43.751 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:43.751 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:43.751 "hdgst": ${hdgst:-false}, 00:43:43.751 "ddgst": ${ddgst:-false} 00:43:43.751 }, 00:43:43.751 "method": "bdev_nvme_attach_controller" 00:43:43.751 } 00:43:43.751 EOF 00:43:43.751 )") 00:43:43.751 12:12:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:43.751 12:12:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:43:43.751 12:12:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:43:43.751 12:12:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:43:43.751 12:12:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:43:43.751 12:12:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:43:43.751 12:12:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:43:43.751 12:12:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:43.751 12:12:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:43:43.751 12:12:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:43:43.751 12:12:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:43:43.751 
12:12:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:43:43.751 12:12:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:43:43.751 12:12:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:43.751 12:12:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:43:43.751 12:12:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:43:43.751 12:12:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:43:43.751 12:12:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:43:43.752 12:12:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:43.752 12:12:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:43.752 { 00:43:43.752 "params": { 00:43:43.752 "name": "Nvme$subsystem", 00:43:43.752 "trtype": "$TEST_TRANSPORT", 00:43:43.752 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:43.752 "adrfam": "ipv4", 00:43:43.752 "trsvcid": "$NVMF_PORT", 00:43:43.752 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:43.752 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:43.752 "hdgst": ${hdgst:-false}, 00:43:43.752 "ddgst": ${ddgst:-false} 00:43:43.752 }, 00:43:43.752 "method": "bdev_nvme_attach_controller" 00:43:43.752 } 00:43:43.752 EOF 00:43:43.752 )") 00:43:43.752 12:12:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:43:43.752 12:12:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:43:43.752 12:12:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:43:43.752 12:12:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
00:43:43.752 12:12:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:43:43.752 12:12:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:43:43.752 "params": { 00:43:43.752 "name": "Nvme0", 00:43:43.752 "trtype": "tcp", 00:43:43.752 "traddr": "10.0.0.2", 00:43:43.752 "adrfam": "ipv4", 00:43:43.752 "trsvcid": "4420", 00:43:43.752 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:43.752 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:43.752 "hdgst": false, 00:43:43.752 "ddgst": false 00:43:43.752 }, 00:43:43.752 "method": "bdev_nvme_attach_controller" 00:43:43.752 },{ 00:43:43.752 "params": { 00:43:43.752 "name": "Nvme1", 00:43:43.752 "trtype": "tcp", 00:43:43.752 "traddr": "10.0.0.2", 00:43:43.752 "adrfam": "ipv4", 00:43:43.752 "trsvcid": "4420", 00:43:43.752 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:43:43.752 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:43:43.752 "hdgst": false, 00:43:43.752 "ddgst": false 00:43:43.752 }, 00:43:43.752 "method": "bdev_nvme_attach_controller" 00:43:43.752 }' 00:43:43.752 12:12:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:43:43.752 12:12:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:43:43.752 12:12:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1351 -- # break 00:43:43.752 12:12:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:43:43.752 12:12:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:44.010 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:43:44.010 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:43:44.010 fio-3.35 00:43:44.010 Starting 2 threads 00:43:56.209 00:43:56.209 filename0: (groupid=0, jobs=1): err= 0: pid=3210318: Mon Nov 18 12:12:20 2024 00:43:56.209 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10016msec) 00:43:56.209 slat (nsec): min=4912, max=57467, avg=14431.83, stdev=5268.27 00:43:56.209 clat (usec): min=40862, max=46591, avg=41004.59, stdev=394.67 00:43:56.209 lat (usec): min=40875, max=46605, avg=41019.02, stdev=394.76 00:43:56.209 clat percentiles (usec): 00:43:56.209 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:43:56.209 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:43:56.209 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:43:56.209 | 99.00th=[42206], 99.50th=[42730], 99.90th=[46400], 99.95th=[46400], 00:43:56.209 | 99.99th=[46400] 00:43:56.209 bw ( KiB/s): min= 384, max= 416, per=49.78%, avg=388.80, stdev=11.72, samples=20 00:43:56.209 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:43:56.209 lat (msec) : 50=100.00% 00:43:56.209 cpu : usr=94.51%, sys=4.99%, ctx=19, majf=0, minf=1634 00:43:56.209 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:56.209 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:56.210 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:56.210 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:56.210 latency : target=0, window=0, percentile=100.00%, depth=4 00:43:56.210 filename1: (groupid=0, jobs=1): err= 0: pid=3210319: Mon Nov 18 12:12:20 2024 00:43:56.210 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10017msec) 00:43:56.210 slat (nsec): min=4944, max=66071, avg=13443.69, stdev=4782.20 00:43:56.210 clat (usec): min=40885, max=46674, avg=41010.69, stdev=404.58 00:43:56.210 lat (usec): min=40899, max=46713, avg=41024.13, stdev=405.33 00:43:56.210 clat percentiles 
(usec): 00:43:56.210 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:43:56.210 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:43:56.210 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:43:56.210 | 99.00th=[42206], 99.50th=[42730], 99.90th=[46924], 99.95th=[46924], 00:43:56.210 | 99.99th=[46924] 00:43:56.210 bw ( KiB/s): min= 384, max= 416, per=49.78%, avg=388.80, stdev=11.72, samples=20 00:43:56.210 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:43:56.210 lat (msec) : 50=100.00% 00:43:56.210 cpu : usr=94.28%, sys=5.22%, ctx=13, majf=0, minf=1636 00:43:56.210 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:56.210 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:56.210 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:56.210 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:56.210 latency : target=0, window=0, percentile=100.00%, depth=4 00:43:56.210 00:43:56.210 Run status group 0 (all jobs): 00:43:56.210 READ: bw=779KiB/s (798kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=7808KiB (7995kB), run=10016-10017msec 00:43:56.210 ----------------------------------------------------- 00:43:56.210 Suppressions used: 00:43:56.210 count bytes template 00:43:56.210 2 16 /usr/src/fio/parse.c 00:43:56.210 1 8 libtcmalloc_minimal.so 00:43:56.210 1 904 libcrypto.so 00:43:56.210 ----------------------------------------------------- 00:43:56.210 00:43:56.210 12:12:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:43:56.210 12:12:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:43:56.210 12:12:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:43:56.210 12:12:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:43:56.210 12:12:21 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:43:56.210 12:12:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:43:56.210 12:12:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:56.210 12:12:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:56.210 12:12:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:56.210 12:12:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:43:56.210 12:12:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:56.210 12:12:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:56.210 12:12:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:56.210 12:12:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:43:56.210 12:12:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:43:56.210 12:12:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:43:56.210 12:12:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:43:56.210 12:12:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:56.210 12:12:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:56.210 12:12:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:56.210 12:12:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:43:56.210 12:12:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:56.210 12:12:22 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:56.210 12:12:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:56.210 00:43:56.210 real 0m12.609s 00:43:56.210 user 0m21.304s 00:43:56.210 sys 0m1.487s 00:43:56.210 12:12:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:56.210 12:12:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:56.210 ************************************ 00:43:56.210 END TEST fio_dif_1_multi_subsystems 00:43:56.210 ************************************ 00:43:56.210 12:12:22 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:43:56.210 12:12:22 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:43:56.210 12:12:22 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:56.210 12:12:22 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:56.210 ************************************ 00:43:56.210 START TEST fio_dif_rand_params 00:43:56.210 ************************************ 00:43:56.210 12:12:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:43:56.210 12:12:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:43:56.210 12:12:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:43:56.210 12:12:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:43:56.210 12:12:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:43:56.210 12:12:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:43:56.210 12:12:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:43:56.210 12:12:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:43:56.210 12:12:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 
00:43:56.210 12:12:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:43:56.210 12:12:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:43:56.210 12:12:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:43:56.210 12:12:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:43:56.210 12:12:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:43:56.210 12:12:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:56.210 12:12:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:56.210 bdev_null0 00:43:56.210 12:12:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:56.210 12:12:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:43:56.210 12:12:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:56.210 12:12:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:56.210 12:12:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:56.210 12:12:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:43:56.210 12:12:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:56.210 12:12:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:56.529 12:12:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:56.529 12:12:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:43:56.529 12:12:22 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:43:56.529 12:12:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:56.529 [2024-11-18 12:12:22.100813] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:56.529 12:12:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:56.529 12:12:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:43:56.529 12:12:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:43:56.529 12:12:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:43:56.529 12:12:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:43:56.529 12:12:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:43:56.529 12:12:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:56.529 12:12:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:56.529 12:12:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:56.529 { 00:43:56.529 "params": { 00:43:56.529 "name": "Nvme$subsystem", 00:43:56.529 "trtype": "$TEST_TRANSPORT", 00:43:56.529 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:56.529 "adrfam": "ipv4", 00:43:56.529 "trsvcid": "$NVMF_PORT", 00:43:56.529 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:56.529 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:56.529 "hdgst": ${hdgst:-false}, 00:43:56.529 "ddgst": ${ddgst:-false} 00:43:56.529 }, 00:43:56.529 "method": "bdev_nvme_attach_controller" 00:43:56.529 } 00:43:56.529 EOF 00:43:56.529 )") 00:43:56.529 12:12:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 
/dev/fd/61 00:43:56.529 12:12:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:43:56.529 12:12:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:43:56.529 12:12:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:43:56.529 12:12:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:43:56.529 12:12:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:43:56.529 12:12:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:56.529 12:12:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:43:56.529 12:12:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:43:56.529 12:12:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:43:56.529 12:12:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:43:56.529 12:12:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:43:56.529 12:12:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:56.529 12:12:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:43:56.529 12:12:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:43:56.529 12:12:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:43:56.529 12:12:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:43:56.529 12:12:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:43:56.529 12:12:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:43:56.529 12:12:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:43:56.529 "params": { 00:43:56.529 "name": "Nvme0", 00:43:56.529 "trtype": "tcp", 00:43:56.529 "traddr": "10.0.0.2", 00:43:56.529 "adrfam": "ipv4", 00:43:56.529 "trsvcid": "4420", 00:43:56.529 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:56.529 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:56.529 "hdgst": false, 00:43:56.529 "ddgst": false 00:43:56.529 }, 00:43:56.529 "method": "bdev_nvme_attach_controller" 00:43:56.529 }' 00:43:56.529 12:12:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:43:56.529 12:12:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:43:56.529 12:12:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # break 00:43:56.529 12:12:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:43:56.529 12:12:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:56.529 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:43:56.529 ... 
00:43:56.529 fio-3.35 00:43:56.529 Starting 3 threads 00:44:03.089 00:44:03.089 filename0: (groupid=0, jobs=1): err= 0: pid=3211842: Mon Nov 18 12:12:28 2024 00:44:03.089 read: IOPS=189, BW=23.7MiB/s (24.9MB/s)(119MiB/5007msec) 00:44:03.089 slat (nsec): min=5522, max=42238, avg=18757.80, stdev=3738.12 00:44:03.089 clat (usec): min=6069, max=64305, avg=15767.81, stdev=3674.01 00:44:03.089 lat (usec): min=6081, max=64320, avg=15786.57, stdev=3673.65 00:44:03.089 clat percentiles (usec): 00:44:03.089 | 1.00th=[ 8848], 5.00th=[13304], 10.00th=[13960], 20.00th=[14615], 00:44:03.089 | 30.00th=[15008], 40.00th=[15401], 50.00th=[15664], 60.00th=[16057], 00:44:03.089 | 70.00th=[16319], 80.00th=[16712], 90.00th=[17433], 95.00th=[17695], 00:44:03.089 | 99.00th=[19268], 99.50th=[46924], 99.90th=[64226], 99.95th=[64226], 00:44:03.089 | 99.99th=[64226] 00:44:03.089 bw ( KiB/s): min=21760, max=25856, per=33.95%, avg=24268.80, stdev=1091.47, samples=10 00:44:03.089 iops : min= 170, max= 202, avg=189.60, stdev= 8.53, samples=10 00:44:03.089 lat (msec) : 10=1.68%, 20=97.58%, 50=0.42%, 100=0.32% 00:44:03.089 cpu : usr=92.47%, sys=6.95%, ctx=9, majf=0, minf=1634 00:44:03.089 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:03.089 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:03.089 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:03.089 issued rwts: total=951,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:03.089 latency : target=0, window=0, percentile=100.00%, depth=3 00:44:03.089 filename0: (groupid=0, jobs=1): err= 0: pid=3211843: Mon Nov 18 12:12:28 2024 00:44:03.089 read: IOPS=196, BW=24.5MiB/s (25.7MB/s)(123MiB/5008msec) 00:44:03.089 slat (nsec): min=5214, max=44644, avg=19032.38, stdev=4022.02 00:44:03.089 clat (usec): min=6109, max=57733, avg=15257.87, stdev=3809.74 00:44:03.089 lat (usec): min=6128, max=57767, avg=15276.90, stdev=3809.82 00:44:03.089 clat percentiles (usec): 00:44:03.089 | 
1.00th=[ 9503], 5.00th=[12911], 10.00th=[13435], 20.00th=[13960], 00:44:03.089 | 30.00th=[14353], 40.00th=[14746], 50.00th=[15008], 60.00th=[15401], 00:44:03.089 | 70.00th=[15664], 80.00th=[16057], 90.00th=[16450], 95.00th=[17171], 00:44:03.089 | 99.00th=[19006], 99.50th=[50070], 99.90th=[57934], 99.95th=[57934], 00:44:03.089 | 99.99th=[57934] 00:44:03.089 bw ( KiB/s): min=22060, max=26880, per=35.10%, avg=25092.40, stdev=1697.90, samples=10 00:44:03.089 iops : min= 172, max= 210, avg=196.00, stdev=13.33, samples=10 00:44:03.089 lat (msec) : 10=1.02%, 20=98.07%, 50=0.51%, 100=0.41% 00:44:03.089 cpu : usr=92.27%, sys=7.11%, ctx=7, majf=0, minf=1634 00:44:03.089 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:03.089 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:03.089 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:03.089 issued rwts: total=983,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:03.089 latency : target=0, window=0, percentile=100.00%, depth=3 00:44:03.089 filename0: (groupid=0, jobs=1): err= 0: pid=3211844: Mon Nov 18 12:12:28 2024 00:44:03.089 read: IOPS=172, BW=21.5MiB/s (22.6MB/s)(108MiB/5008msec) 00:44:03.089 slat (nsec): min=6388, max=45550, avg=19322.20, stdev=4437.01 00:44:03.089 clat (usec): min=8715, max=54945, avg=17382.79, stdev=4175.74 00:44:03.089 lat (usec): min=8726, max=54963, avg=17402.11, stdev=4175.66 00:44:03.089 clat percentiles (usec): 00:44:03.089 | 1.00th=[10552], 5.00th=[13435], 10.00th=[14222], 20.00th=[15008], 00:44:03.089 | 30.00th=[15664], 40.00th=[16188], 50.00th=[16909], 60.00th=[17957], 00:44:03.089 | 70.00th=[18744], 80.00th=[19268], 90.00th=[20317], 95.00th=[20841], 00:44:03.089 | 99.00th=[46924], 99.50th=[50594], 99.90th=[54789], 99.95th=[54789], 00:44:03.089 | 99.99th=[54789] 00:44:03.089 bw ( KiB/s): min=19456, max=24576, per=30.80%, avg=22016.00, stdev=1502.45, samples=10 00:44:03.089 iops : min= 152, max= 192, avg=172.00, 
stdev=11.74, samples=10 00:44:03.089 lat (msec) : 10=0.81%, 20=86.91%, 50=11.59%, 100=0.70% 00:44:03.089 cpu : usr=92.77%, sys=6.65%, ctx=11, majf=0, minf=1634 00:44:03.089 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:03.089 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:03.089 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:03.089 issued rwts: total=863,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:03.089 latency : target=0, window=0, percentile=100.00%, depth=3 00:44:03.089 00:44:03.089 Run status group 0 (all jobs): 00:44:03.089 READ: bw=69.8MiB/s (73.2MB/s), 21.5MiB/s-24.5MiB/s (22.6MB/s-25.7MB/s), io=350MiB (367MB), run=5007-5008msec 00:44:03.655 ----------------------------------------------------- 00:44:03.655 Suppressions used: 00:44:03.655 count bytes template 00:44:03.655 5 44 /usr/src/fio/parse.c 00:44:03.655 1 8 libtcmalloc_minimal.so 00:44:03.655 1 904 libcrypto.so 00:44:03.655 ----------------------------------------------------- 00:44:03.655 00:44:03.655 12:12:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:44:03.655 12:12:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:44:03.655 12:12:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:44:03.655 12:12:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:44:03.655 12:12:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:44:03.655 12:12:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:44:03.655 12:12:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:03.655 12:12:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:03.655 12:12:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:03.655 12:12:29 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:44:03.655 12:12:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:03.655 12:12:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:03.655 12:12:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:03.655 12:12:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:44:03.655 12:12:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:44:03.655 12:12:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:44:03.655 12:12:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:44:03.655 12:12:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:44:03.655 12:12:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:44:03.655 12:12:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:44:03.655 12:12:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:44:03.655 12:12:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:44:03.655 12:12:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:44:03.655 12:12:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:44:03.655 12:12:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:44:03.655 12:12:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:03.655 12:12:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:03.655 bdev_null0 00:44:03.655 12:12:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:03.656 12:12:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 
53313233-0 --allow-any-host 00:44:03.656 12:12:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:03.656 12:12:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:03.656 12:12:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:03.656 12:12:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:44:03.656 12:12:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:03.656 12:12:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:03.656 12:12:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:03.656 12:12:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:44:03.656 12:12:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:03.656 12:12:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:03.656 [2024-11-18 12:12:29.524443] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:03.656 12:12:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:03.656 12:12:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:44:03.656 12:12:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:44:03.656 12:12:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:44:03.656 12:12:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:44:03.656 12:12:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:03.656 12:12:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:44:03.656 bdev_null1 00:44:03.656 12:12:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:03.656 12:12:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:44:03.656 12:12:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:03.656 12:12:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:03.915 12:12:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:03.915 12:12:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:44:03.915 12:12:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:03.915 12:12:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:03.915 12:12:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:03.915 12:12:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:44:03.915 12:12:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:03.915 12:12:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:03.915 12:12:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:03.915 12:12:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:44:03.915 12:12:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:44:03.915 12:12:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:44:03.915 12:12:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:44:03.915 12:12:29 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:44:03.915 12:12:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:03.915 bdev_null2 00:44:03.915 12:12:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:03.915 12:12:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:44:03.915 12:12:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:03.915 12:12:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:03.915 12:12:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:03.915 12:12:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:44:03.915 12:12:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:03.915 12:12:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:03.915 12:12:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:03.915 12:12:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:44:03.915 12:12:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:03.915 12:12:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:03.915 12:12:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:03.915 12:12:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:44:03.915 12:12:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:44:03.915 12:12:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:44:03.915 12:12:29 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:44:03.915 12:12:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:44:03.915 12:12:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:44:03.915 12:12:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:44:03.915 { 00:44:03.915 "params": { 00:44:03.915 "name": "Nvme$subsystem", 00:44:03.915 "trtype": "$TEST_TRANSPORT", 00:44:03.915 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:03.915 "adrfam": "ipv4", 00:44:03.915 "trsvcid": "$NVMF_PORT", 00:44:03.915 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:03.915 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:03.915 "hdgst": ${hdgst:-false}, 00:44:03.915 "ddgst": ${ddgst:-false} 00:44:03.915 }, 00:44:03.915 "method": "bdev_nvme_attach_controller" 00:44:03.915 } 00:44:03.915 EOF 00:44:03.915 )") 00:44:03.915 12:12:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:03.915 12:12:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:03.915 12:12:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:44:03.915 12:12:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:44:03.915 12:12:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:44:03.915 12:12:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:44:03.915 12:12:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:44:03.915 12:12:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:03.915 12:12:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:44:03.915 12:12:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:44:03.915 12:12:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:44:03.915 12:12:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:44:03.915 12:12:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:44:03.915 12:12:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:03.915 12:12:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:44:03.915 12:12:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:44:03.915 12:12:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:44:03.915 12:12:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:44:03.915 12:12:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:44:03.915 12:12:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:44:03.915 12:12:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:44:03.915 { 00:44:03.915 "params": { 00:44:03.915 "name": "Nvme$subsystem", 00:44:03.915 "trtype": "$TEST_TRANSPORT", 00:44:03.915 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:03.915 "adrfam": "ipv4", 00:44:03.915 "trsvcid": "$NVMF_PORT", 00:44:03.915 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:03.915 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:03.915 "hdgst": ${hdgst:-false}, 00:44:03.915 "ddgst": ${ddgst:-false} 00:44:03.915 }, 00:44:03.915 "method": "bdev_nvme_attach_controller" 00:44:03.915 } 00:44:03.915 EOF 00:44:03.915 )") 00:44:03.915 12:12:29 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:44:03.915 12:12:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:44:03.915 12:12:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:44:03.915 12:12:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:44:03.915 12:12:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:44:03.915 12:12:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:44:03.915 { 00:44:03.915 "params": { 00:44:03.915 "name": "Nvme$subsystem", 00:44:03.915 "trtype": "$TEST_TRANSPORT", 00:44:03.915 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:03.915 "adrfam": "ipv4", 00:44:03.915 "trsvcid": "$NVMF_PORT", 00:44:03.915 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:03.915 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:03.915 "hdgst": ${hdgst:-false}, 00:44:03.915 "ddgst": ${ddgst:-false} 00:44:03.915 }, 00:44:03.915 "method": "bdev_nvme_attach_controller" 00:44:03.915 } 00:44:03.915 EOF 00:44:03.915 )") 00:44:03.915 12:12:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:44:03.915 12:12:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:44:03.915 12:12:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:44:03.915 12:12:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:44:03.915 12:12:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:44:03.915 12:12:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:44:03.915 "params": { 00:44:03.915 "name": "Nvme0", 00:44:03.915 "trtype": "tcp", 00:44:03.916 "traddr": "10.0.0.2", 00:44:03.916 "adrfam": "ipv4", 00:44:03.916 "trsvcid": "4420", 00:44:03.916 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:03.916 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:03.916 "hdgst": false, 00:44:03.916 "ddgst": false 00:44:03.916 }, 00:44:03.916 "method": "bdev_nvme_attach_controller" 00:44:03.916 },{ 00:44:03.916 "params": { 00:44:03.916 "name": "Nvme1", 00:44:03.916 "trtype": "tcp", 00:44:03.916 "traddr": "10.0.0.2", 00:44:03.916 "adrfam": "ipv4", 00:44:03.916 "trsvcid": "4420", 00:44:03.916 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:44:03.916 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:44:03.916 "hdgst": false, 00:44:03.916 "ddgst": false 00:44:03.916 }, 00:44:03.916 "method": "bdev_nvme_attach_controller" 00:44:03.916 },{ 00:44:03.916 "params": { 00:44:03.916 "name": "Nvme2", 00:44:03.916 "trtype": "tcp", 00:44:03.916 "traddr": "10.0.0.2", 00:44:03.916 "adrfam": "ipv4", 00:44:03.916 "trsvcid": "4420", 00:44:03.916 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:44:03.916 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:44:03.916 "hdgst": false, 00:44:03.916 "ddgst": false 00:44:03.916 }, 00:44:03.916 "method": "bdev_nvme_attach_controller" 00:44:03.916 }' 00:44:03.916 12:12:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:44:03.916 12:12:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:44:03.916 12:12:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # break 00:44:03.916 12:12:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:44:03.916 12:12:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:04.174 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:44:04.174 ... 00:44:04.174 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:44:04.174 ... 00:44:04.174 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:44:04.174 ... 00:44:04.174 fio-3.35 00:44:04.174 Starting 24 threads 00:44:16.378 00:44:16.378 filename0: (groupid=0, jobs=1): err= 0: pid=3212829: Mon Nov 18 12:12:41 2024 00:44:16.378 read: IOPS=324, BW=1297KiB/s (1328kB/s)(12.7MiB/10020msec) 00:44:16.378 slat (nsec): min=13104, max=79031, avg=36341.18, stdev=10776.42 00:44:16.378 clat (msec): min=28, max=106, avg=49.04, stdev= 8.75 00:44:16.378 lat (msec): min=28, max=106, avg=49.08, stdev= 8.75 00:44:16.378 clat percentiles (msec): 00:44:16.378 | 1.00th=[ 43], 5.00th=[ 44], 10.00th=[ 45], 20.00th=[ 45], 00:44:16.378 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 46], 00:44:16.378 | 70.00th=[ 47], 80.00th=[ 56], 90.00th=[ 65], 95.00th=[ 65], 00:44:16.378 | 99.00th=[ 66], 99.50th=[ 67], 99.90th=[ 107], 99.95th=[ 107], 00:44:16.378 | 99.99th=[ 107] 00:44:16.378 bw ( KiB/s): min= 896, max= 1536, per=4.10%, avg=1286.84, stdev=192.98, samples=19 00:44:16.378 iops : min= 224, max= 384, avg=321.68, stdev=48.26, samples=19 00:44:16.378 lat (msec) : 50=79.19%, 100=20.32%, 250=0.49% 00:44:16.378 cpu : usr=95.69%, sys=2.63%, ctx=235, majf=0, minf=1636 00:44:16.378 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:44:16.378 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:16.378 complete : 0=0.0%, 4=94.1%, 
8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:16.378 issued rwts: total=3248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:16.378 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:16.378 filename0: (groupid=0, jobs=1): err= 0: pid=3212830: Mon Nov 18 12:12:41 2024 00:44:16.378 read: IOPS=345, BW=1383KiB/s (1416kB/s)(13.5MiB/10012msec) 00:44:16.378 slat (nsec): min=11647, max=92233, avg=32403.01, stdev=12758.07 00:44:16.378 clat (msec): min=19, max=155, avg=45.99, stdev=12.20 00:44:16.378 lat (msec): min=19, max=155, avg=46.02, stdev=12.20 00:44:16.378 clat percentiles (msec): 00:44:16.378 | 1.00th=[ 29], 5.00th=[ 31], 10.00th=[ 31], 20.00th=[ 37], 00:44:16.378 | 30.00th=[ 44], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 45], 00:44:16.378 | 70.00th=[ 46], 80.00th=[ 55], 90.00th=[ 65], 95.00th=[ 65], 00:44:16.378 | 99.00th=[ 68], 99.50th=[ 75], 99.90th=[ 129], 99.95th=[ 157], 00:44:16.378 | 99.99th=[ 157] 00:44:16.378 bw ( KiB/s): min= 1024, max= 1792, per=4.39%, avg=1376.68, stdev=254.83, samples=19 00:44:16.378 iops : min= 256, max= 448, avg=344.16, stdev=63.71, samples=19 00:44:16.378 lat (msec) : 20=0.06%, 50=78.54%, 100=20.94%, 250=0.46% 00:44:16.378 cpu : usr=98.09%, sys=1.34%, ctx=31, majf=0, minf=1633 00:44:16.378 IO depths : 1=3.6%, 2=7.8%, 4=18.6%, 8=60.9%, 16=9.2%, 32=0.0%, >=64=0.0% 00:44:16.378 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:16.378 complete : 0=0.0%, 4=92.3%, 8=2.2%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:16.378 issued rwts: total=3462,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:16.378 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:16.378 filename0: (groupid=0, jobs=1): err= 0: pid=3212831: Mon Nov 18 12:12:41 2024 00:44:16.378 read: IOPS=324, BW=1297KiB/s (1328kB/s)(12.7MiB/10020msec) 00:44:16.379 slat (nsec): min=8620, max=75879, avg=36019.93, stdev=11338.56 00:44:16.379 clat (msec): min=25, max=114, avg=49.05, stdev= 9.03 00:44:16.379 lat (msec): min=25, max=114, avg=49.08, 
stdev= 9.03 00:44:16.379 clat percentiles (msec): 00:44:16.379 | 1.00th=[ 43], 5.00th=[ 44], 10.00th=[ 45], 20.00th=[ 45], 00:44:16.379 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 46], 00:44:16.379 | 70.00th=[ 47], 80.00th=[ 51], 90.00th=[ 65], 95.00th=[ 65], 00:44:16.379 | 99.00th=[ 67], 99.50th=[ 88], 99.90th=[ 107], 99.95th=[ 115], 00:44:16.379 | 99.99th=[ 115] 00:44:16.379 bw ( KiB/s): min= 896, max= 1536, per=4.10%, avg=1286.74, stdev=193.06, samples=19 00:44:16.379 iops : min= 224, max= 384, avg=321.68, stdev=48.26, samples=19 00:44:16.379 lat (msec) : 50=79.83%, 100=19.67%, 250=0.49% 00:44:16.379 cpu : usr=98.36%, sys=1.13%, ctx=18, majf=0, minf=1634 00:44:16.379 IO depths : 1=5.9%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.6%, 32=0.0%, >=64=0.0% 00:44:16.379 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:16.379 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:16.379 issued rwts: total=3248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:16.379 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:16.379 filename0: (groupid=0, jobs=1): err= 0: pid=3212832: Mon Nov 18 12:12:41 2024 00:44:16.379 read: IOPS=324, BW=1297KiB/s (1329kB/s)(12.7MiB/10014msec) 00:44:16.379 slat (usec): min=10, max=118, avg=40.33, stdev=11.55 00:44:16.379 clat (msec): min=27, max=124, avg=48.97, stdev= 8.82 00:44:16.379 lat (msec): min=27, max=124, avg=49.01, stdev= 8.82 00:44:16.379 clat percentiles (msec): 00:44:16.379 | 1.00th=[ 42], 5.00th=[ 44], 10.00th=[ 44], 20.00th=[ 45], 00:44:16.379 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 46], 00:44:16.379 | 70.00th=[ 47], 80.00th=[ 57], 90.00th=[ 65], 95.00th=[ 65], 00:44:16.379 | 99.00th=[ 66], 99.50th=[ 66], 99.90th=[ 106], 99.95th=[ 125], 00:44:16.379 | 99.99th=[ 125] 00:44:16.379 bw ( KiB/s): min= 896, max= 1408, per=4.10%, avg=1286.74, stdev=178.35, samples=19 00:44:16.379 iops : min= 224, max= 352, avg=321.68, stdev=44.59, samples=19 00:44:16.379 lat 
(msec) : 50=79.68%, 100=19.83%, 250=0.49% 00:44:16.379 cpu : usr=98.14%, sys=1.35%, ctx=26, majf=0, minf=1633 00:44:16.379 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:44:16.379 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:16.379 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:16.379 issued rwts: total=3248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:16.379 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:16.379 filename0: (groupid=0, jobs=1): err= 0: pid=3212833: Mon Nov 18 12:12:41 2024 00:44:16.379 read: IOPS=324, BW=1298KiB/s (1329kB/s)(12.7MiB/10013msec) 00:44:16.379 slat (nsec): min=14614, max=92320, avg=45729.89, stdev=13471.73 00:44:16.379 clat (msec): min=16, max=129, avg=48.98, stdev= 9.32 00:44:16.379 lat (msec): min=16, max=129, avg=49.02, stdev= 9.32 00:44:16.379 clat percentiles (msec): 00:44:16.379 | 1.00th=[ 37], 5.00th=[ 44], 10.00th=[ 44], 20.00th=[ 45], 00:44:16.379 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 46], 00:44:16.379 | 70.00th=[ 47], 80.00th=[ 55], 90.00th=[ 65], 95.00th=[ 65], 00:44:16.379 | 99.00th=[ 66], 99.50th=[ 67], 99.90th=[ 120], 99.95th=[ 129], 00:44:16.379 | 99.99th=[ 130] 00:44:16.379 bw ( KiB/s): min= 1008, max= 1536, per=4.10%, avg=1285.89, stdev=180.49, samples=19 00:44:16.379 iops : min= 252, max= 384, avg=321.47, stdev=45.12, samples=19 00:44:16.379 lat (msec) : 20=0.06%, 50=78.57%, 100=20.87%, 250=0.49% 00:44:16.379 cpu : usr=98.30%, sys=1.13%, ctx=17, majf=0, minf=1633 00:44:16.379 IO depths : 1=0.8%, 2=7.1%, 4=25.0%, 8=55.4%, 16=11.7%, 32=0.0%, >=64=0.0% 00:44:16.379 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:16.379 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:16.379 issued rwts: total=3248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:16.379 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:16.379 filename0: 
(groupid=0, jobs=1): err= 0: pid=3212834: Mon Nov 18 12:12:41 2024 00:44:16.379 read: IOPS=325, BW=1303KiB/s (1335kB/s)(12.8MiB/10018msec) 00:44:16.379 slat (nsec): min=12909, max=93266, avg=42323.30, stdev=12067.24 00:44:16.379 clat (usec): min=33073, max=87869, avg=48724.15, stdev=8089.17 00:44:16.379 lat (usec): min=33117, max=87886, avg=48766.48, stdev=8087.89 00:44:16.379 clat percentiles (usec): 00:44:16.379 | 1.00th=[42206], 5.00th=[43779], 10.00th=[43779], 20.00th=[44303], 00:44:16.379 | 30.00th=[44303], 40.00th=[44827], 50.00th=[44827], 60.00th=[45351], 00:44:16.379 | 70.00th=[45876], 80.00th=[50070], 90.00th=[64226], 95.00th=[64750], 00:44:16.379 | 99.00th=[65274], 99.50th=[86508], 99.90th=[87557], 99.95th=[87557], 00:44:16.379 | 99.99th=[87557] 00:44:16.379 bw ( KiB/s): min= 896, max= 1536, per=4.13%, avg=1293.47, stdev=190.31, samples=19 00:44:16.379 iops : min= 224, max= 384, avg=323.37, stdev=47.58, samples=19 00:44:16.379 lat (msec) : 50=79.96%, 100=20.04% 00:44:16.379 cpu : usr=97.43%, sys=1.72%, ctx=88, majf=0, minf=1631 00:44:16.379 IO depths : 1=6.0%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:44:16.379 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:16.379 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:16.379 issued rwts: total=3264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:16.379 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:16.379 filename0: (groupid=0, jobs=1): err= 0: pid=3212835: Mon Nov 18 12:12:41 2024 00:44:16.379 read: IOPS=325, BW=1302KiB/s (1333kB/s)(12.7MiB/10011msec) 00:44:16.379 slat (nsec): min=9757, max=80718, avg=35126.93, stdev=11878.23 00:44:16.379 clat (msec): min=24, max=155, avg=48.89, stdev=10.14 00:44:16.379 lat (msec): min=24, max=155, avg=48.92, stdev=10.13 00:44:16.379 clat percentiles (msec): 00:44:16.379 | 1.00th=[ 32], 5.00th=[ 44], 10.00th=[ 44], 20.00th=[ 45], 00:44:16.379 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 
60.00th=[ 46], 00:44:16.379 | 70.00th=[ 47], 80.00th=[ 55], 90.00th=[ 65], 95.00th=[ 65], 00:44:16.379 | 99.00th=[ 68], 99.50th=[ 79], 99.90th=[ 129], 99.95th=[ 155], 00:44:16.379 | 99.99th=[ 155] 00:44:16.379 bw ( KiB/s): min= 912, max= 1520, per=4.12%, avg=1291.05, stdev=203.78, samples=19 00:44:16.379 iops : min= 228, max= 380, avg=322.74, stdev=50.98, samples=19 00:44:16.379 lat (msec) : 50=78.51%, 100=20.99%, 250=0.49% 00:44:16.379 cpu : usr=96.27%, sys=2.16%, ctx=307, majf=0, minf=1633 00:44:16.379 IO depths : 1=0.4%, 2=6.4%, 4=24.2%, 8=56.8%, 16=12.1%, 32=0.0%, >=64=0.0% 00:44:16.379 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:16.379 complete : 0=0.0%, 4=94.2%, 8=0.3%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:16.379 issued rwts: total=3258,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:16.379 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:16.379 filename0: (groupid=0, jobs=1): err= 0: pid=3212836: Mon Nov 18 12:12:41 2024 00:44:16.379 read: IOPS=324, BW=1298KiB/s (1329kB/s)(12.7MiB/10012msec) 00:44:16.379 slat (nsec): min=12086, max=74908, avg=37043.54, stdev=10939.69 00:44:16.379 clat (usec): min=30365, max=97240, avg=48969.51, stdev=8765.34 00:44:16.379 lat (usec): min=30383, max=97280, avg=49006.55, stdev=8764.55 00:44:16.379 clat percentiles (usec): 00:44:16.379 | 1.00th=[41681], 5.00th=[43779], 10.00th=[43779], 20.00th=[44303], 00:44:16.379 | 30.00th=[44303], 40.00th=[44827], 50.00th=[44827], 60.00th=[45351], 00:44:16.379 | 70.00th=[46400], 80.00th=[50594], 90.00th=[64226], 95.00th=[64750], 00:44:16.379 | 99.00th=[86508], 99.50th=[87557], 99.90th=[96994], 99.95th=[96994], 00:44:16.379 | 99.99th=[96994] 00:44:16.379 bw ( KiB/s): min= 896, max= 1536, per=4.10%, avg=1286.74, stdev=193.06, samples=19 00:44:16.379 iops : min= 224, max= 384, avg=321.68, stdev=48.26, samples=19 00:44:16.379 lat (msec) : 50=79.74%, 100=20.26% 00:44:16.379 cpu : usr=97.56%, sys=1.65%, ctx=41, majf=0, minf=1631 00:44:16.379 IO 
depths : 1=5.9%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.6%, 32=0.0%, >=64=0.0% 00:44:16.379 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:16.379 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:16.379 issued rwts: total=3248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:16.379 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:16.379 filename1: (groupid=0, jobs=1): err= 0: pid=3212837: Mon Nov 18 12:12:41 2024 00:44:16.379 read: IOPS=324, BW=1298KiB/s (1329kB/s)(12.7MiB/10011msec) 00:44:16.379 slat (nsec): min=12493, max=84998, avg=35451.25, stdev=9626.26 00:44:16.379 clat (msec): min=24, max=137, avg=48.99, stdev= 9.30 00:44:16.379 lat (msec): min=24, max=137, avg=49.03, stdev= 9.29 00:44:16.379 clat percentiles (msec): 00:44:16.379 | 1.00th=[ 38], 5.00th=[ 44], 10.00th=[ 45], 20.00th=[ 45], 00:44:16.379 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 46], 00:44:16.379 | 70.00th=[ 47], 80.00th=[ 55], 90.00th=[ 65], 95.00th=[ 65], 00:44:16.379 | 99.00th=[ 66], 99.50th=[ 66], 99.90th=[ 120], 99.95th=[ 138], 00:44:16.379 | 99.99th=[ 138] 00:44:16.379 bw ( KiB/s): min= 1024, max= 1536, per=4.10%, avg=1286.84, stdev=178.98, samples=19 00:44:16.379 iops : min= 256, max= 384, avg=321.68, stdev=44.77, samples=19 00:44:16.379 lat (msec) : 50=78.97%, 100=20.54%, 250=0.49% 00:44:16.379 cpu : usr=97.56%, sys=1.56%, ctx=119, majf=0, minf=1631 00:44:16.379 IO depths : 1=5.9%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.6%, 32=0.0%, >=64=0.0% 00:44:16.379 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:16.379 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:16.379 issued rwts: total=3248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:16.379 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:16.379 filename1: (groupid=0, jobs=1): err= 0: pid=3212838: Mon Nov 18 12:12:41 2024 00:44:16.379 read: IOPS=325, BW=1303KiB/s (1334kB/s)(12.8MiB/10023msec) 
00:44:16.379 slat (nsec): min=10337, max=77025, avg=38635.96, stdev=10347.86 00:44:16.379 clat (usec): min=30359, max=88280, avg=48773.50, stdev=8359.02 00:44:16.379 lat (usec): min=30376, max=88306, avg=48812.14, stdev=8358.25 00:44:16.379 clat percentiles (usec): 00:44:16.379 | 1.00th=[34341], 5.00th=[43779], 10.00th=[43779], 20.00th=[44303], 00:44:16.379 | 30.00th=[44303], 40.00th=[44827], 50.00th=[44827], 60.00th=[45351], 00:44:16.379 | 70.00th=[45876], 80.00th=[50070], 90.00th=[64226], 95.00th=[64750], 00:44:16.379 | 99.00th=[74974], 99.50th=[86508], 99.90th=[87557], 99.95th=[88605], 00:44:16.379 | 99.99th=[88605] 00:44:16.379 bw ( KiB/s): min= 896, max= 1408, per=4.13%, avg=1293.47, stdev=175.37, samples=19 00:44:16.380 iops : min= 224, max= 352, avg=323.37, stdev=43.84, samples=19 00:44:16.380 lat (msec) : 50=79.84%, 100=20.16% 00:44:16.380 cpu : usr=97.18%, sys=1.82%, ctx=118, majf=0, minf=1631 00:44:16.380 IO depths : 1=5.9%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.6%, 32=0.0%, >=64=0.0% 00:44:16.380 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:16.380 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:16.380 issued rwts: total=3264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:16.380 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:16.380 filename1: (groupid=0, jobs=1): err= 0: pid=3212839: Mon Nov 18 12:12:41 2024 00:44:16.380 read: IOPS=325, BW=1301KiB/s (1332kB/s)(12.7MiB/10033msec) 00:44:16.380 slat (nsec): min=7067, max=73253, avg=29737.67, stdev=11196.81 00:44:16.380 clat (usec): min=27478, max=78803, avg=48949.16, stdev=8026.54 00:44:16.380 lat (usec): min=27525, max=78834, avg=48978.90, stdev=8025.10 00:44:16.380 clat percentiles (usec): 00:44:16.380 | 1.00th=[41157], 5.00th=[43779], 10.00th=[44303], 20.00th=[44303], 00:44:16.380 | 30.00th=[44303], 40.00th=[44827], 50.00th=[45351], 60.00th=[45351], 00:44:16.380 | 70.00th=[46400], 80.00th=[56361], 90.00th=[64226], 95.00th=[64750], 
00:44:16.380 | 99.00th=[65274], 99.50th=[65799], 99.90th=[79168], 99.95th=[79168], 00:44:16.380 | 99.99th=[79168] 00:44:16.380 bw ( KiB/s): min= 896, max= 1536, per=4.13%, avg=1293.47, stdev=194.52, samples=19 00:44:16.380 iops : min= 224, max= 384, avg=323.37, stdev=48.63, samples=19 00:44:16.380 lat (msec) : 50=79.64%, 100=20.36% 00:44:16.380 cpu : usr=98.37%, sys=1.13%, ctx=20, majf=0, minf=1633 00:44:16.380 IO depths : 1=6.0%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:44:16.380 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:16.380 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:16.380 issued rwts: total=3262,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:16.380 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:16.380 filename1: (groupid=0, jobs=1): err= 0: pid=3212840: Mon Nov 18 12:12:41 2024 00:44:16.380 read: IOPS=333, BW=1336KiB/s (1368kB/s)(13.1MiB/10012msec) 00:44:16.380 slat (nsec): min=8294, max=96464, avg=21557.40, stdev=12603.50 00:44:16.380 clat (usec): min=6201, max=65602, avg=47701.50, stdev=10117.45 00:44:16.380 lat (usec): min=6228, max=65626, avg=47723.06, stdev=10115.20 00:44:16.380 clat percentiles (usec): 00:44:16.380 | 1.00th=[ 7832], 5.00th=[43779], 10.00th=[44303], 20.00th=[44303], 00:44:16.380 | 30.00th=[44827], 40.00th=[44827], 50.00th=[45351], 60.00th=[45351], 00:44:16.380 | 70.00th=[45876], 80.00th=[49021], 90.00th=[64226], 95.00th=[64750], 00:44:16.380 | 99.00th=[65274], 99.50th=[65274], 99.90th=[65799], 99.95th=[65799], 00:44:16.380 | 99.99th=[65799] 00:44:16.380 bw ( KiB/s): min= 896, max= 1920, per=4.24%, avg=1327.16, stdev=245.88, samples=19 00:44:16.380 iops : min= 224, max= 480, avg=331.79, stdev=61.47, samples=19 00:44:16.380 lat (msec) : 10=1.44%, 20=1.44%, 50=77.45%, 100=19.68% 00:44:16.380 cpu : usr=98.20%, sys=1.29%, ctx=18, majf=0, minf=1632 00:44:16.380 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 
00:44:16.380 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:16.380 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:16.380 issued rwts: total=3344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:16.380 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:16.380 filename1: (groupid=0, jobs=1): err= 0: pid=3212841: Mon Nov 18 12:12:41 2024 00:44:16.380 read: IOPS=325, BW=1302KiB/s (1333kB/s)(12.8MiB/10027msec) 00:44:16.380 slat (usec): min=8, max=106, avg=43.51, stdev=13.15 00:44:16.380 clat (usec): min=25761, max=88101, avg=48768.22, stdev=7929.96 00:44:16.380 lat (usec): min=25790, max=88132, avg=48811.73, stdev=7927.23 00:44:16.380 clat percentiles (usec): 00:44:16.380 | 1.00th=[42730], 5.00th=[43779], 10.00th=[43779], 20.00th=[44303], 00:44:16.380 | 30.00th=[44303], 40.00th=[44827], 50.00th=[44827], 60.00th=[45351], 00:44:16.380 | 70.00th=[46400], 80.00th=[54264], 90.00th=[64226], 95.00th=[64750], 00:44:16.380 | 99.00th=[65274], 99.50th=[66323], 99.90th=[87557], 99.95th=[87557], 00:44:16.380 | 99.99th=[88605] 00:44:16.380 bw ( KiB/s): min= 896, max= 1536, per=4.13%, avg=1293.47, stdev=190.31, samples=19 00:44:16.380 iops : min= 224, max= 384, avg=323.37, stdev=47.58, samples=19 00:44:16.380 lat (msec) : 50=79.23%, 100=20.77% 00:44:16.380 cpu : usr=98.15%, sys=1.30%, ctx=28, majf=0, minf=1634 00:44:16.380 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:44:16.380 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:16.380 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:16.380 issued rwts: total=3264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:16.380 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:16.380 filename1: (groupid=0, jobs=1): err= 0: pid=3212842: Mon Nov 18 12:12:41 2024 00:44:16.380 read: IOPS=332, BW=1328KiB/s (1360kB/s)(13.0MiB/10024msec) 00:44:16.380 slat (nsec): min=9764, max=94385, 
avg=44303.12, stdev=12423.88 00:44:16.380 clat (usec): min=8014, max=79988, avg=47777.15, stdev=9521.53 00:44:16.380 lat (usec): min=8034, max=80018, avg=47821.46, stdev=9522.64 00:44:16.380 clat percentiles (usec): 00:44:16.380 | 1.00th=[10814], 5.00th=[43779], 10.00th=[43779], 20.00th=[44303], 00:44:16.380 | 30.00th=[44303], 40.00th=[44827], 50.00th=[44827], 60.00th=[45351], 00:44:16.380 | 70.00th=[45876], 80.00th=[51119], 90.00th=[64226], 95.00th=[64226], 00:44:16.380 | 99.00th=[64750], 99.50th=[65274], 99.90th=[65274], 99.95th=[80217], 00:44:16.380 | 99.99th=[80217] 00:44:16.380 bw ( KiB/s): min= 896, max= 1788, per=4.21%, avg=1320.21, stdev=221.45, samples=19 00:44:16.380 iops : min= 224, max= 447, avg=330.05, stdev=55.36, samples=19 00:44:16.380 lat (msec) : 10=0.96%, 20=1.44%, 50=77.01%, 100=20.58% 00:44:16.380 cpu : usr=98.13%, sys=1.31%, ctx=15, majf=0, minf=1634 00:44:16.380 IO depths : 1=6.0%, 2=12.2%, 4=24.8%, 8=50.5%, 16=6.5%, 32=0.0%, >=64=0.0% 00:44:16.380 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:16.380 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:16.380 issued rwts: total=3328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:16.380 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:16.380 filename1: (groupid=0, jobs=1): err= 0: pid=3212843: Mon Nov 18 12:12:41 2024 00:44:16.380 read: IOPS=324, BW=1298KiB/s (1329kB/s)(12.7MiB/10012msec) 00:44:16.380 slat (nsec): min=14363, max=65872, avg=33701.06, stdev=8160.69 00:44:16.380 clat (msec): min=24, max=119, avg=49.02, stdev= 9.26 00:44:16.380 lat (msec): min=24, max=119, avg=49.06, stdev= 9.26 00:44:16.380 clat percentiles (msec): 00:44:16.380 | 1.00th=[ 37], 5.00th=[ 44], 10.00th=[ 45], 20.00th=[ 45], 00:44:16.380 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 46], 60.00th=[ 46], 00:44:16.380 | 70.00th=[ 47], 80.00th=[ 55], 90.00th=[ 65], 95.00th=[ 65], 00:44:16.380 | 99.00th=[ 66], 99.50th=[ 82], 99.90th=[ 120], 99.95th=[ 120], 
00:44:16.380 | 99.99th=[ 120] 00:44:16.380 bw ( KiB/s): min= 1024, max= 1536, per=4.10%, avg=1286.74, stdev=179.07, samples=19 00:44:16.380 iops : min= 256, max= 384, avg=321.68, stdev=44.77, samples=19 00:44:16.380 lat (msec) : 50=78.66%, 100=20.84%, 250=0.49% 00:44:16.380 cpu : usr=96.04%, sys=2.24%, ctx=232, majf=0, minf=1631 00:44:16.380 IO depths : 1=5.9%, 2=12.1%, 4=25.0%, 8=50.4%, 16=6.6%, 32=0.0%, >=64=0.0% 00:44:16.380 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:16.380 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:16.380 issued rwts: total=3248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:16.380 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:16.380 filename1: (groupid=0, jobs=1): err= 0: pid=3212844: Mon Nov 18 12:12:41 2024 00:44:16.380 read: IOPS=324, BW=1297KiB/s (1329kB/s)(12.7MiB/10014msec) 00:44:16.380 slat (nsec): min=13090, max=97788, avg=38053.05, stdev=10816.10 00:44:16.380 clat (msec): min=27, max=124, avg=48.99, stdev= 8.83 00:44:16.380 lat (msec): min=27, max=124, avg=49.02, stdev= 8.82 00:44:16.380 clat percentiles (msec): 00:44:16.381 | 1.00th=[ 42], 5.00th=[ 44], 10.00th=[ 44], 20.00th=[ 45], 00:44:16.381 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 46], 00:44:16.381 | 70.00th=[ 47], 80.00th=[ 57], 90.00th=[ 65], 95.00th=[ 65], 00:44:16.381 | 99.00th=[ 66], 99.50th=[ 87], 99.90th=[ 106], 99.95th=[ 125], 00:44:16.381 | 99.99th=[ 125] 00:44:16.381 bw ( KiB/s): min= 896, max= 1408, per=4.10%, avg=1286.74, stdev=178.35, samples=19 00:44:16.381 iops : min= 224, max= 352, avg=321.68, stdev=44.59, samples=19 00:44:16.381 lat (msec) : 50=79.86%, 100=19.64%, 250=0.49% 00:44:16.381 cpu : usr=98.10%, sys=1.40%, ctx=29, majf=0, minf=1633 00:44:16.381 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:44:16.381 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:16.381 complete : 0=0.0%, 4=94.1%, 8=0.0%, 
16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:16.381 issued rwts: total=3248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:16.381 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:16.381 filename2: (groupid=0, jobs=1): err= 0: pid=3212845: Mon Nov 18 12:12:41 2024 00:44:16.381 read: IOPS=324, BW=1298KiB/s (1329kB/s)(12.7MiB/10012msec) 00:44:16.381 slat (usec): min=11, max=102, avg=33.95, stdev=11.14 00:44:16.381 clat (msec): min=24, max=129, avg=49.01, stdev= 9.34 00:44:16.381 lat (msec): min=24, max=129, avg=49.05, stdev= 9.33 00:44:16.381 clat percentiles (msec): 00:44:16.381 | 1.00th=[ 37], 5.00th=[ 44], 10.00th=[ 45], 20.00th=[ 45], 00:44:16.381 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 46], 00:44:16.381 | 70.00th=[ 47], 80.00th=[ 55], 90.00th=[ 65], 95.00th=[ 65], 00:44:16.381 | 99.00th=[ 66], 99.50th=[ 82], 99.90th=[ 121], 99.95th=[ 130], 00:44:16.381 | 99.99th=[ 130] 00:44:16.381 bw ( KiB/s): min= 1024, max= 1536, per=4.10%, avg=1286.74, stdev=178.43, samples=19 00:44:16.381 iops : min= 256, max= 384, avg=321.68, stdev=44.61, samples=19 00:44:16.381 lat (msec) : 50=78.85%, 100=20.66%, 250=0.49% 00:44:16.381 cpu : usr=98.31%, sys=1.19%, ctx=12, majf=0, minf=1635 00:44:16.381 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:44:16.381 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:16.381 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:16.381 issued rwts: total=3248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:16.381 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:16.381 filename2: (groupid=0, jobs=1): err= 0: pid=3212846: Mon Nov 18 12:12:41 2024 00:44:16.381 read: IOPS=325, BW=1302KiB/s (1334kB/s)(12.8MiB/10024msec) 00:44:16.381 slat (nsec): min=11306, max=73801, avg=35100.30, stdev=9813.59 00:44:16.381 clat (usec): min=30159, max=87142, avg=48835.14, stdev=7753.41 00:44:16.381 lat (usec): min=30200, max=87178, avg=48870.24, 
stdev=7753.41 00:44:16.381 clat percentiles (usec): 00:44:16.381 | 1.00th=[42730], 5.00th=[43779], 10.00th=[44303], 20.00th=[44303], 00:44:16.381 | 30.00th=[44303], 40.00th=[44827], 50.00th=[45351], 60.00th=[45351], 00:44:16.381 | 70.00th=[46400], 80.00th=[55313], 90.00th=[64226], 95.00th=[64750], 00:44:16.381 | 99.00th=[65274], 99.50th=[66323], 99.90th=[66847], 99.95th=[87557], 00:44:16.381 | 99.99th=[87557] 00:44:16.381 bw ( KiB/s): min= 896, max= 1536, per=4.13%, avg=1293.47, stdev=190.31, samples=19 00:44:16.381 iops : min= 224, max= 384, avg=323.37, stdev=47.58, samples=19 00:44:16.381 lat (msec) : 50=78.98%, 100=21.02% 00:44:16.381 cpu : usr=98.34%, sys=1.17%, ctx=19, majf=0, minf=1632 00:44:16.381 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:44:16.381 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:16.381 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:16.381 issued rwts: total=3264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:16.381 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:16.381 filename2: (groupid=0, jobs=1): err= 0: pid=3212847: Mon Nov 18 12:12:41 2024 00:44:16.381 read: IOPS=325, BW=1303KiB/s (1334kB/s)(12.8MiB/10023msec) 00:44:16.381 slat (nsec): min=8639, max=81378, avg=37625.64, stdev=9577.79 00:44:16.381 clat (usec): min=30483, max=88219, avg=48785.97, stdev=8357.88 00:44:16.381 lat (usec): min=30499, max=88243, avg=48823.60, stdev=8357.18 00:44:16.381 clat percentiles (usec): 00:44:16.381 | 1.00th=[34341], 5.00th=[43779], 10.00th=[43779], 20.00th=[44303], 00:44:16.381 | 30.00th=[44303], 40.00th=[44827], 50.00th=[44827], 60.00th=[45351], 00:44:16.381 | 70.00th=[46400], 80.00th=[50070], 90.00th=[64226], 95.00th=[64750], 00:44:16.381 | 99.00th=[74974], 99.50th=[86508], 99.90th=[87557], 99.95th=[88605], 00:44:16.381 | 99.99th=[88605] 00:44:16.381 bw ( KiB/s): min= 896, max= 1424, per=4.13%, avg=1293.47, stdev=177.39, samples=19 
00:44:16.381 iops : min= 224, max= 356, avg=323.37, stdev=44.35, samples=19 00:44:16.381 lat (msec) : 50=79.96%, 100=20.04% 00:44:16.381 cpu : usr=98.41%, sys=1.10%, ctx=14, majf=0, minf=1635 00:44:16.381 IO depths : 1=5.9%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.6%, 32=0.0%, >=64=0.0% 00:44:16.381 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:16.381 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:16.381 issued rwts: total=3264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:16.381 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:16.381 filename2: (groupid=0, jobs=1): err= 0: pid=3212848: Mon Nov 18 12:12:41 2024 00:44:16.381 read: IOPS=325, BW=1302KiB/s (1333kB/s)(12.8MiB/10027msec) 00:44:16.381 slat (nsec): min=12474, max=72376, avg=35046.10, stdev=9321.13 00:44:16.381 clat (usec): min=29891, max=78181, avg=48840.20, stdev=8022.26 00:44:16.381 lat (usec): min=29909, max=78225, avg=48875.24, stdev=8022.45 00:44:16.381 clat percentiles (usec): 00:44:16.381 | 1.00th=[41157], 5.00th=[43779], 10.00th=[43779], 20.00th=[44303], 00:44:16.381 | 30.00th=[44303], 40.00th=[44827], 50.00th=[44827], 60.00th=[45351], 00:44:16.381 | 70.00th=[46400], 80.00th=[56361], 90.00th=[64226], 95.00th=[64750], 00:44:16.381 | 99.00th=[65274], 99.50th=[76022], 99.90th=[77071], 99.95th=[78119], 00:44:16.381 | 99.99th=[78119] 00:44:16.381 bw ( KiB/s): min= 896, max= 1536, per=4.12%, avg=1292.63, stdev=194.54, samples=19 00:44:16.381 iops : min= 224, max= 384, avg=323.16, stdev=48.64, samples=19 00:44:16.381 lat (msec) : 50=79.78%, 100=20.22% 00:44:16.381 cpu : usr=97.14%, sys=1.87%, ctx=92, majf=0, minf=1633 00:44:16.381 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:44:16.381 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:16.381 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:16.381 issued rwts: total=3264,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:44:16.381 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:16.381 filename2: (groupid=0, jobs=1): err= 0: pid=3212849: Mon Nov 18 12:12:41 2024 00:44:16.381 read: IOPS=324, BW=1296KiB/s (1328kB/s)(12.7MiB/10021msec) 00:44:16.381 slat (nsec): min=11943, max=62509, avg=33661.86, stdev=9191.25 00:44:16.381 clat (msec): min=25, max=109, avg=49.05, stdev= 9.05 00:44:16.381 lat (msec): min=25, max=109, avg=49.08, stdev= 9.05 00:44:16.381 clat percentiles (msec): 00:44:16.381 | 1.00th=[ 33], 5.00th=[ 44], 10.00th=[ 44], 20.00th=[ 45], 00:44:16.381 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 46], 60.00th=[ 46], 00:44:16.381 | 70.00th=[ 47], 80.00th=[ 59], 90.00th=[ 65], 95.00th=[ 65], 00:44:16.381 | 99.00th=[ 66], 99.50th=[ 68], 99.90th=[ 110], 99.95th=[ 110], 00:44:16.381 | 99.99th=[ 110] 00:44:16.381 bw ( KiB/s): min= 896, max= 1424, per=4.10%, avg=1286.74, stdev=178.43, samples=19 00:44:16.381 iops : min= 224, max= 356, avg=321.68, stdev=44.61, samples=19 00:44:16.381 lat (msec) : 50=78.94%, 100=20.57%, 250=0.49% 00:44:16.381 cpu : usr=97.20%, sys=1.69%, ctx=128, majf=0, minf=1633 00:44:16.381 IO depths : 1=5.4%, 2=11.7%, 4=25.0%, 8=50.8%, 16=7.1%, 32=0.0%, >=64=0.0% 00:44:16.381 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:16.381 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:16.381 issued rwts: total=3248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:16.381 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:16.381 filename2: (groupid=0, jobs=1): err= 0: pid=3212850: Mon Nov 18 12:12:41 2024 00:44:16.381 read: IOPS=328, BW=1316KiB/s (1347kB/s)(12.9MiB/10020msec) 00:44:16.381 slat (nsec): min=7859, max=97809, avg=20925.38, stdev=14177.38 00:44:16.381 clat (usec): min=8310, max=65476, avg=48431.89, stdev=8576.65 00:44:16.381 lat (usec): min=8335, max=65505, avg=48452.81, stdev=8575.01 00:44:16.381 clat percentiles (usec): 00:44:16.381 | 1.00th=[28443], 
5.00th=[43779], 10.00th=[44303], 20.00th=[44303], 00:44:16.381 | 30.00th=[44827], 40.00th=[44827], 50.00th=[45351], 60.00th=[45351], 00:44:16.381 | 70.00th=[46400], 80.00th=[53740], 90.00th=[64226], 95.00th=[64750], 00:44:16.381 | 99.00th=[65274], 99.50th=[65274], 99.90th=[65274], 99.95th=[65274], 00:44:16.381 | 99.99th=[65274] 00:44:16.381 bw ( KiB/s): min= 896, max= 1536, per=4.17%, avg=1306.95, stdev=198.20, samples=19 00:44:16.381 iops : min= 224, max= 384, avg=326.74, stdev=49.55, samples=19 00:44:16.381 lat (msec) : 10=0.49%, 20=0.49%, 50=77.85%, 100=21.18% 00:44:16.381 cpu : usr=98.19%, sys=1.30%, ctx=13, majf=0, minf=1634 00:44:16.381 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:44:16.381 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:16.381 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:16.381 issued rwts: total=3296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:16.381 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:16.381 filename2: (groupid=0, jobs=1): err= 0: pid=3212851: Mon Nov 18 12:12:41 2024 00:44:16.381 read: IOPS=324, BW=1297KiB/s (1328kB/s)(12.7MiB/10016msec) 00:44:16.381 slat (nsec): min=11742, max=97980, avg=34144.58, stdev=16641.26 00:44:16.381 clat (msec): min=24, max=123, avg=49.05, stdev= 9.46 00:44:16.381 lat (msec): min=24, max=123, avg=49.09, stdev= 9.45 00:44:16.381 clat percentiles (msec): 00:44:16.381 | 1.00th=[ 37], 5.00th=[ 44], 10.00th=[ 44], 20.00th=[ 45], 00:44:16.381 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 46], 00:44:16.381 | 70.00th=[ 47], 80.00th=[ 55], 90.00th=[ 65], 95.00th=[ 65], 00:44:16.381 | 99.00th=[ 66], 99.50th=[ 66], 99.90th=[ 124], 99.95th=[ 124], 00:44:16.382 | 99.99th=[ 124] 00:44:16.382 bw ( KiB/s): min= 896, max= 1536, per=4.10%, avg=1286.74, stdev=197.72, samples=19 00:44:16.382 iops : min= 224, max= 384, avg=321.68, stdev=49.43, samples=19 00:44:16.382 lat (msec) : 50=78.54%, 
100=20.97%, 250=0.49% 00:44:16.382 cpu : usr=98.08%, sys=1.42%, ctx=13, majf=0, minf=1633 00:44:16.382 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:44:16.382 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:16.382 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:16.382 issued rwts: total=3248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:16.382 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:16.382 filename2: (groupid=0, jobs=1): err= 0: pid=3212852: Mon Nov 18 12:12:41 2024 00:44:16.382 read: IOPS=332, BW=1330KiB/s (1362kB/s)(13.0MiB/10008msec) 00:44:16.382 slat (nsec): min=8093, max=97974, avg=31853.52, stdev=16935.39 00:44:16.382 clat (usec): min=6059, max=65436, avg=47838.51, stdev=9563.14 00:44:16.382 lat (usec): min=6085, max=65489, avg=47870.36, stdev=9558.50 00:44:16.382 clat percentiles (usec): 00:44:16.382 | 1.00th=[10290], 5.00th=[43254], 10.00th=[43779], 20.00th=[44303], 00:44:16.382 | 30.00th=[44303], 40.00th=[44827], 50.00th=[44827], 60.00th=[45351], 00:44:16.382 | 70.00th=[45876], 80.00th=[49021], 90.00th=[64226], 95.00th=[64750], 00:44:16.382 | 99.00th=[65274], 99.50th=[65274], 99.90th=[65274], 99.95th=[65274], 00:44:16.382 | 99.99th=[65274] 00:44:16.382 bw ( KiB/s): min= 896, max= 1792, per=4.21%, avg=1320.42, stdev=217.78, samples=19 00:44:16.382 iops : min= 224, max= 448, avg=330.11, stdev=54.44, samples=19 00:44:16.382 lat (msec) : 10=0.96%, 20=1.44%, 50=78.22%, 100=19.38% 00:44:16.382 cpu : usr=98.15%, sys=1.33%, ctx=17, majf=0, minf=1634 00:44:16.382 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:44:16.382 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:16.382 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:16.382 issued rwts: total=3328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:16.382 latency : target=0, window=0, percentile=100.00%, depth=16 
00:44:16.382 00:44:16.382 Run status group 0 (all jobs): 00:44:16.382 READ: bw=30.6MiB/s (32.1MB/s), 1296KiB/s-1383KiB/s (1328kB/s-1416kB/s), io=307MiB (322MB), run=10008-10033msec 00:44:16.640 ----------------------------------------------------- 00:44:16.640 Suppressions used: 00:44:16.640 count bytes template 00:44:16.640 45 402 /usr/src/fio/parse.c 00:44:16.640 1 8 libtcmalloc_minimal.so 00:44:16.640 1 904 libcrypto.so 00:44:16.640 ----------------------------------------------------- 00:44:16.640 00:44:16.899 12:12:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:44:16.899 12:12:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:44:16.899 12:12:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:44:16.899 12:12:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:44:16.899 12:12:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:44:16.899 12:12:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:44:16.899 12:12:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:16.899 12:12:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:16.899 12:12:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:16.899 12:12:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:44:16.899 12:12:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:16.899 12:12:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:16.899 12:12:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:16.899 12:12:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:44:16.899 12:12:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # 
destroy_subsystem 1 00:44:16.899 12:12:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:44:16.899 12:12:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:44:16.899 12:12:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:16.899 12:12:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:16.900 12:12:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:16.900 12:12:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:44:16.900 12:12:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:16.900 12:12:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:16.900 12:12:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:16.900 12:12:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:44:16.900 12:12:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:44:16.900 12:12:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:44:16.900 12:12:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:44:16.900 12:12:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:16.900 12:12:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:16.900 12:12:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:16.900 12:12:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:44:16.900 12:12:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:16.900 12:12:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:16.900 
12:12:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:16.900 12:12:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:44:16.900 12:12:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:44:16.900 12:12:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:44:16.900 12:12:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:44:16.900 12:12:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:44:16.900 12:12:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:44:16.900 12:12:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:44:16.900 12:12:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:44:16.900 12:12:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:44:16.900 12:12:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:44:16.900 12:12:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:44:16.900 12:12:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:44:16.900 12:12:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:16.900 12:12:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:16.900 bdev_null0 00:44:16.900 12:12:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:16.900 12:12:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:44:16.900 12:12:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:16.900 12:12:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:16.900 12:12:42 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:16.900 12:12:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:44:16.900 12:12:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:16.900 12:12:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:16.900 12:12:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:16.900 12:12:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:44:16.900 12:12:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:16.900 12:12:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:16.900 [2024-11-18 12:12:42.623332] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:16.900 12:12:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:16.900 12:12:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:44:16.900 12:12:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:44:16.900 12:12:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:44:16.900 12:12:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:44:16.900 12:12:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:16.900 12:12:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:16.900 bdev_null1 00:44:16.900 12:12:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:16.900 12:12:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 
53313233-1 --allow-any-host 00:44:16.900 12:12:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:16.900 12:12:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:16.900 12:12:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:16.900 12:12:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:44:16.900 12:12:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:16.900 12:12:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:16.900 12:12:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:16.900 12:12:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:44:16.900 12:12:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:16.900 12:12:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:16.900 12:12:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:16.900 12:12:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:44:16.900 12:12:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:44:16.900 12:12:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:44:16.900 12:12:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:44:16.900 12:12:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:44:16.900 12:12:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:44:16.900 12:12:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:16.900 12:12:42 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:44:16.900 { 00:44:16.900 "params": { 00:44:16.900 "name": "Nvme$subsystem", 00:44:16.900 "trtype": "$TEST_TRANSPORT", 00:44:16.900 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:16.900 "adrfam": "ipv4", 00:44:16.900 "trsvcid": "$NVMF_PORT", 00:44:16.900 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:16.900 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:16.900 "hdgst": ${hdgst:-false}, 00:44:16.900 "ddgst": ${ddgst:-false} 00:44:16.900 }, 00:44:16.900 "method": "bdev_nvme_attach_controller" 00:44:16.900 } 00:44:16.900 EOF 00:44:16.900 )") 00:44:16.900 12:12:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:44:16.900 12:12:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:16.900 12:12:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:44:16.900 12:12:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:44:16.900 12:12:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:44:16.900 12:12:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:44:16.900 12:12:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:44:16.900 12:12:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:16.900 12:12:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:44:16.900 12:12:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:44:16.900 12:12:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:44:16.900 
12:12:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:44:16.900 12:12:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:44:16.900 12:12:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:16.900 12:12:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:44:16.900 12:12:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:44:16.900 12:12:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:44:16.900 12:12:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:44:16.900 12:12:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:44:16.900 12:12:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:44:16.900 { 00:44:16.900 "params": { 00:44:16.900 "name": "Nvme$subsystem", 00:44:16.900 "trtype": "$TEST_TRANSPORT", 00:44:16.900 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:16.900 "adrfam": "ipv4", 00:44:16.900 "trsvcid": "$NVMF_PORT", 00:44:16.900 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:16.900 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:16.900 "hdgst": ${hdgst:-false}, 00:44:16.900 "ddgst": ${ddgst:-false} 00:44:16.900 }, 00:44:16.900 "method": "bdev_nvme_attach_controller" 00:44:16.900 } 00:44:16.900 EOF 00:44:16.900 )") 00:44:16.900 12:12:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:44:16.900 12:12:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:44:16.900 12:12:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:44:16.900 12:12:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:44:16.900 12:12:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:44:16.900 12:12:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:44:16.900 "params": { 00:44:16.900 "name": "Nvme0", 00:44:16.900 "trtype": "tcp", 00:44:16.900 "traddr": "10.0.0.2", 00:44:16.900 "adrfam": "ipv4", 00:44:16.900 "trsvcid": "4420", 00:44:16.900 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:16.901 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:16.901 "hdgst": false, 00:44:16.901 "ddgst": false 00:44:16.901 }, 00:44:16.901 "method": "bdev_nvme_attach_controller" 00:44:16.901 },{ 00:44:16.901 "params": { 00:44:16.901 "name": "Nvme1", 00:44:16.901 "trtype": "tcp", 00:44:16.901 "traddr": "10.0.0.2", 00:44:16.901 "adrfam": "ipv4", 00:44:16.901 "trsvcid": "4420", 00:44:16.901 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:44:16.901 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:44:16.901 "hdgst": false, 00:44:16.901 "ddgst": false 00:44:16.901 }, 00:44:16.901 "method": "bdev_nvme_attach_controller" 00:44:16.901 }' 00:44:16.901 12:12:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:44:16.901 12:12:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:44:16.901 12:12:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # break 00:44:16.901 12:12:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:44:16.901 12:12:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:17.159 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:44:17.159 ... 
00:44:17.159 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:44:17.159 ... 00:44:17.159 fio-3.35 00:44:17.159 Starting 4 threads 00:44:23.715 00:44:23.715 filename0: (groupid=0, jobs=1): err= 0: pid=3214342: Mon Nov 18 12:12:49 2024 00:44:23.715 read: IOPS=1495, BW=11.7MiB/s (12.2MB/s)(58.4MiB/5002msec) 00:44:23.715 slat (nsec): min=4784, max=41958, avg=17525.90, stdev=3629.10 00:44:23.715 clat (usec): min=981, max=14477, avg=5281.57, stdev=722.98 00:44:23.715 lat (usec): min=1000, max=14492, avg=5299.10, stdev=722.89 00:44:23.715 clat percentiles (usec): 00:44:23.715 | 1.00th=[ 2057], 5.00th=[ 5014], 10.00th=[ 5080], 20.00th=[ 5145], 00:44:23.715 | 30.00th=[ 5211], 40.00th=[ 5211], 50.00th=[ 5276], 60.00th=[ 5276], 00:44:23.715 | 70.00th=[ 5342], 80.00th=[ 5342], 90.00th=[ 5407], 95.00th=[ 5538], 00:44:23.715 | 99.00th=[ 8717], 99.50th=[ 9241], 99.90th=[14353], 99.95th=[14484], 00:44:23.715 | 99.99th=[14484] 00:44:23.715 bw ( KiB/s): min=11760, max=12064, per=24.96%, avg=11971.56, stdev=97.91, samples=9 00:44:23.715 iops : min= 1470, max= 1508, avg=1496.44, stdev=12.24, samples=9 00:44:23.715 lat (usec) : 1000=0.07% 00:44:23.715 lat (msec) : 2=0.86%, 4=0.60%, 10=98.37%, 20=0.11% 00:44:23.715 cpu : usr=94.48%, sys=4.94%, ctx=15, majf=0, minf=1636 00:44:23.715 IO depths : 1=1.1%, 2=20.4%, 4=54.1%, 8=24.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:23.715 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:23.715 complete : 0=0.0%, 4=90.3%, 8=9.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:23.715 issued rwts: total=7478,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:23.715 latency : target=0, window=0, percentile=100.00%, depth=8 00:44:23.715 filename0: (groupid=0, jobs=1): err= 0: pid=3214343: Mon Nov 18 12:12:49 2024 00:44:23.715 read: IOPS=1501, BW=11.7MiB/s (12.3MB/s)(58.7MiB/5003msec) 00:44:23.715 slat (nsec): min=5563, max=55603, avg=17672.58, stdev=3882.54 00:44:23.715 
clat (usec): min=1237, max=9489, avg=5256.08, stdev=336.54 00:44:23.715 lat (usec): min=1256, max=9509, avg=5273.75, stdev=336.61 00:44:23.715 clat percentiles (usec): 00:44:23.715 | 1.00th=[ 4621], 5.00th=[ 5014], 10.00th=[ 5080], 20.00th=[ 5145], 00:44:23.715 | 30.00th=[ 5211], 40.00th=[ 5211], 50.00th=[ 5276], 60.00th=[ 5276], 00:44:23.715 | 70.00th=[ 5342], 80.00th=[ 5342], 90.00th=[ 5407], 95.00th=[ 5473], 00:44:23.715 | 99.00th=[ 6128], 99.50th=[ 6718], 99.90th=[ 8717], 99.95th=[ 9241], 00:44:23.715 | 99.99th=[ 9503] 00:44:23.715 bw ( KiB/s): min=11904, max=12160, per=25.04%, avg=12008.70, stdev=98.61, samples=10 00:44:23.715 iops : min= 1488, max= 1520, avg=1501.00, stdev=12.41, samples=10 00:44:23.715 lat (msec) : 2=0.20%, 4=0.19%, 10=99.61% 00:44:23.715 cpu : usr=94.48%, sys=4.96%, ctx=7, majf=0, minf=1634 00:44:23.715 IO depths : 1=2.7%, 2=21.3%, 4=53.2%, 8=22.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:23.715 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:23.715 complete : 0=0.0%, 4=90.2%, 8=9.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:23.715 issued rwts: total=7512,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:23.715 latency : target=0, window=0, percentile=100.00%, depth=8 00:44:23.715 filename1: (groupid=0, jobs=1): err= 0: pid=3214344: Mon Nov 18 12:12:49 2024 00:44:23.715 read: IOPS=1500, BW=11.7MiB/s (12.3MB/s)(58.6MiB/5001msec) 00:44:23.715 slat (nsec): min=4796, max=49558, avg=18711.21, stdev=4542.46 00:44:23.715 clat (usec): min=1030, max=15240, avg=5253.16, stdev=595.37 00:44:23.715 lat (usec): min=1051, max=15256, avg=5271.87, stdev=595.27 00:44:23.715 clat percentiles (usec): 00:44:23.715 | 1.00th=[ 2671], 5.00th=[ 5014], 10.00th=[ 5080], 20.00th=[ 5145], 00:44:23.715 | 30.00th=[ 5211], 40.00th=[ 5211], 50.00th=[ 5276], 60.00th=[ 5276], 00:44:23.715 | 70.00th=[ 5276], 80.00th=[ 5342], 90.00th=[ 5407], 95.00th=[ 5473], 00:44:23.715 | 99.00th=[ 7963], 99.50th=[ 8586], 99.90th=[13566], 99.95th=[13566], 00:44:23.715 | 
99.99th=[15270] 00:44:23.715 bw ( KiB/s): min=11783, max=12144, per=25.06%, avg=12020.33, stdev=111.20, samples=9 00:44:23.715 iops : min= 1472, max= 1518, avg=1502.44, stdev=14.13, samples=9 00:44:23.715 lat (msec) : 2=0.59%, 4=0.75%, 10=98.56%, 20=0.11% 00:44:23.715 cpu : usr=93.76%, sys=5.28%, ctx=93, majf=0, minf=1635 00:44:23.715 IO depths : 1=2.1%, 2=23.7%, 4=50.9%, 8=23.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:23.715 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:23.715 complete : 0=0.0%, 4=90.1%, 8=9.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:23.715 issued rwts: total=7504,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:23.715 latency : target=0, window=0, percentile=100.00%, depth=8 00:44:23.715 filename1: (groupid=0, jobs=1): err= 0: pid=3214345: Mon Nov 18 12:12:49 2024 00:44:23.715 read: IOPS=1500, BW=11.7MiB/s (12.3MB/s)(58.6MiB/5004msec) 00:44:23.715 slat (nsec): min=4803, max=50756, avg=18722.30, stdev=4983.92 00:44:23.715 clat (usec): min=1202, max=9433, avg=5267.38, stdev=319.31 00:44:23.715 lat (usec): min=1222, max=9453, avg=5286.10, stdev=319.42 00:44:23.715 clat percentiles (usec): 00:44:23.715 | 1.00th=[ 4555], 5.00th=[ 5014], 10.00th=[ 5080], 20.00th=[ 5145], 00:44:23.715 | 30.00th=[ 5211], 40.00th=[ 5211], 50.00th=[ 5276], 60.00th=[ 5276], 00:44:23.715 | 70.00th=[ 5342], 80.00th=[ 5342], 90.00th=[ 5473], 95.00th=[ 5604], 00:44:23.715 | 99.00th=[ 6128], 99.50th=[ 6652], 99.90th=[ 8455], 99.95th=[ 8717], 00:44:23.715 | 99.99th=[ 9372] 00:44:23.715 bw ( KiB/s): min=11904, max=12160, per=25.01%, avg=11998.40, stdev=82.09, samples=10 00:44:23.715 iops : min= 1488, max= 1520, avg=1499.80, stdev=10.26, samples=10 00:44:23.715 lat (msec) : 2=0.07%, 4=0.43%, 10=99.51% 00:44:23.715 cpu : usr=93.64%, sys=5.38%, ctx=121, majf=0, minf=1634 00:44:23.715 IO depths : 1=0.7%, 2=15.7%, 4=56.9%, 8=26.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:23.715 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:23.715 
complete : 0=0.0%, 4=91.9%, 8=8.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:23.715 issued rwts: total=7507,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:23.715 latency : target=0, window=0, percentile=100.00%, depth=8 00:44:23.715 00:44:23.715 Run status group 0 (all jobs): 00:44:23.715 READ: bw=46.8MiB/s (49.1MB/s), 11.7MiB/s-11.7MiB/s (12.2MB/s-12.3MB/s), io=234MiB (246MB), run=5001-5004msec 00:44:24.650 ----------------------------------------------------- 00:44:24.650 Suppressions used: 00:44:24.650 count bytes template 00:44:24.650 6 52 /usr/src/fio/parse.c 00:44:24.650 1 8 libtcmalloc_minimal.so 00:44:24.650 1 904 libcrypto.so 00:44:24.650 ----------------------------------------------------- 00:44:24.650 00:44:24.650 12:12:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:44:24.650 12:12:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:44:24.650 12:12:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:44:24.650 12:12:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:44:24.650 12:12:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:44:24.650 12:12:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:44:24.650 12:12:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:24.650 12:12:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:24.651 12:12:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:24.651 12:12:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:44:24.651 12:12:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:24.651 12:12:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:24.651 12:12:50 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:24.651 12:12:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:44:24.651 12:12:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:44:24.651 12:12:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:44:24.651 12:12:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:44:24.651 12:12:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:24.651 12:12:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:24.651 12:12:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:24.651 12:12:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:44:24.651 12:12:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:24.651 12:12:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:24.651 12:12:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:24.651 00:44:24.651 real 0m28.252s 00:44:24.651 user 4m36.398s 00:44:24.651 sys 0m7.218s 00:44:24.651 12:12:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:24.651 12:12:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:24.651 ************************************ 00:44:24.651 END TEST fio_dif_rand_params 00:44:24.651 ************************************ 00:44:24.651 12:12:50 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:44:24.651 12:12:50 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:44:24.651 12:12:50 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:24.651 12:12:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:44:24.651 
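Before each fio run above, the harness probes the SPDK fio plugin with `ldd`, greps for `libasan`, and prepends any hit to `LD_PRELOAD` so the sanitizer runtime is loaded first. A hedged sketch of that detection step, using `/bin/sh` as a stand-in for the real plugin path (`.../spdk/build/fio/spdk_bdev`):

```shell
#!/usr/bin/env bash
# Sketch of the sanitizer-preload probe from autotest_common.sh:
# if the plugin links libasan, LD_PRELOAD must list the ASan runtime
# before the plugin itself. /bin/sh is an illustrative stand-in binary.
plugin=/bin/sh
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
if [ -n "$asan_lib" ]; then
  # e.g. LD_PRELOAD='/usr/lib64/libasan.so.8 /path/to/spdk_bdev'
  LD_PRELOAD="$asan_lib $plugin"
  export LD_PRELOAD
fi
printf 'asan_lib=%s\n' "$asan_lib"
```

On a non-sanitized binary the probe yields an empty string and `LD_PRELOAD` is left untouched, which matches the `[[ -n $asan_lib ]] && break` flow in the trace.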
************************************ 00:44:24.651 START TEST fio_dif_digest 00:44:24.651 ************************************ 00:44:24.651 12:12:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:44:24.651 12:12:50 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:44:24.651 12:12:50 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:44:24.651 12:12:50 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:44:24.651 12:12:50 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:44:24.651 12:12:50 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:44:24.651 12:12:50 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:44:24.651 12:12:50 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:44:24.651 12:12:50 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:44:24.651 12:12:50 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:44:24.651 12:12:50 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:44:24.651 12:12:50 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:44:24.651 12:12:50 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:44:24.651 12:12:50 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:44:24.651 12:12:50 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:44:24.651 12:12:50 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:44:24.651 12:12:50 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:44:24.651 12:12:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:24.651 12:12:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:44:24.651 bdev_null0 00:44:24.651 12:12:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:24.651 
12:12:50 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:44:24.651 12:12:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:24.651 12:12:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:44:24.651 12:12:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:24.651 12:12:50 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:44:24.651 12:12:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:24.651 12:12:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:44:24.651 12:12:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:24.651 12:12:50 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:44:24.651 12:12:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:24.651 12:12:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:44:24.651 [2024-11-18 12:12:50.402585] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:24.651 12:12:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:24.651 12:12:50 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:44:24.651 12:12:50 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:44:24.651 12:12:50 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:44:24.651 12:12:50 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:24.651 12:12:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:44:24.651 12:12:50 nvmf_dif.fio_dif_digest 
-- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:24.651 12:12:50 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:44:24.651 12:12:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:44:24.651 12:12:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:44:24.651 12:12:50 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:44:24.651 12:12:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:44:24.651 12:12:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:44:24.651 12:12:50 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:44:24.651 12:12:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:44:24.651 { 00:44:24.651 "params": { 00:44:24.651 "name": "Nvme$subsystem", 00:44:24.651 "trtype": "$TEST_TRANSPORT", 00:44:24.651 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:24.651 "adrfam": "ipv4", 00:44:24.651 "trsvcid": "$NVMF_PORT", 00:44:24.651 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:24.651 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:24.651 "hdgst": ${hdgst:-false}, 00:44:24.651 "ddgst": ${ddgst:-false} 00:44:24.651 }, 00:44:24.651 "method": "bdev_nvme_attach_controller" 00:44:24.651 } 00:44:24.651 EOF 00:44:24.651 )") 00:44:24.651 12:12:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:44:24.651 12:12:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:24.651 12:12:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:44:24.651 12:12:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:44:24.651 12:12:50 
nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:44:24.651 12:12:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:44:24.651 12:12:50 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:44:24.651 12:12:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:24.651 12:12:50 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:44:24.651 12:12:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:44:24.651 12:12:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:44:24.651 12:12:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 00:44:24.651 12:12:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:44:24.651 12:12:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:44:24.651 "params": { 00:44:24.651 "name": "Nvme0", 00:44:24.651 "trtype": "tcp", 00:44:24.651 "traddr": "10.0.0.2", 00:44:24.651 "adrfam": "ipv4", 00:44:24.651 "trsvcid": "4420", 00:44:24.651 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:24.651 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:24.651 "hdgst": true, 00:44:24.651 "ddgst": true 00:44:24.651 }, 00:44:24.651 "method": "bdev_nvme_attach_controller" 00:44:24.651 }' 00:44:24.651 12:12:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:44:24.652 12:12:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:44:24.652 12:12:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1351 -- # break 00:44:24.652 12:12:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:44:24.652 12:12:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # 
/usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:24.910 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:44:24.910 ... 00:44:24.910 fio-3.35 00:44:24.910 Starting 3 threads 00:44:37.123 00:44:37.123 filename0: (groupid=0, jobs=1): err= 0: pid=3215224: Mon Nov 18 12:13:01 2024 00:44:37.123 read: IOPS=177, BW=22.2MiB/s (23.3MB/s)(223MiB/10047msec) 00:44:37.123 slat (nsec): min=8156, max=61329, avg=21518.22, stdev=3055.09 00:44:37.123 clat (usec): min=13394, max=61499, avg=16845.37, stdev=2016.13 00:44:37.123 lat (usec): min=13415, max=61521, avg=16866.89, stdev=2016.28 00:44:37.123 clat percentiles (usec): 00:44:37.123 | 1.00th=[14222], 5.00th=[14877], 10.00th=[15270], 20.00th=[15795], 00:44:37.123 | 30.00th=[16188], 40.00th=[16450], 50.00th=[16712], 60.00th=[16909], 00:44:37.123 | 70.00th=[17171], 80.00th=[17433], 90.00th=[18220], 95.00th=[19268], 00:44:37.123 | 99.00th=[22414], 99.50th=[23725], 99.90th=[57934], 99.95th=[61604], 00:44:37.123 | 99.99th=[61604] 00:44:37.123 bw ( KiB/s): min=17920, max=23808, per=34.56%, avg=22807.35, stdev=1257.66, samples=20 00:44:37.123 iops : min= 140, max= 186, avg=178.15, stdev= 9.84, samples=20 00:44:37.123 lat (msec) : 20=96.41%, 50=3.48%, 100=0.11% 00:44:37.123 cpu : usr=93.75%, sys=5.64%, ctx=23, majf=0, minf=1637 00:44:37.123 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:37.123 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:37.123 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:37.123 issued rwts: total=1784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:37.123 latency : target=0, window=0, percentile=100.00%, depth=3 00:44:37.123 filename0: (groupid=0, jobs=1): err= 0: pid=3215225: Mon Nov 18 12:13:01 2024 00:44:37.123 read: IOPS=169, BW=21.2MiB/s (22.3MB/s)(213MiB/10047msec) 00:44:37.123 slat (nsec): min=5418, max=42548, 
avg=21700.85, stdev=2601.40 00:44:37.123 clat (usec): min=14106, max=54523, avg=17616.79, stdev=1860.25 00:44:37.123 lat (usec): min=14127, max=54544, avg=17638.49, stdev=1860.44 00:44:37.123 clat percentiles (usec): 00:44:37.123 | 1.00th=[15139], 5.00th=[15926], 10.00th=[16188], 20.00th=[16712], 00:44:37.123 | 30.00th=[16909], 40.00th=[17171], 50.00th=[17433], 60.00th=[17433], 00:44:37.123 | 70.00th=[17957], 80.00th=[18220], 90.00th=[19006], 95.00th=[19792], 00:44:37.123 | 99.00th=[22938], 99.50th=[23987], 99.90th=[49546], 99.95th=[54264], 00:44:37.123 | 99.99th=[54264] 00:44:37.123 bw ( KiB/s): min=17664, max=22784, per=33.05%, avg=21813.35, stdev=1081.06, samples=20 00:44:37.123 iops : min= 138, max= 178, avg=170.40, stdev= 8.45, samples=20 00:44:37.123 lat (msec) : 20=95.13%, 50=4.81%, 100=0.06% 00:44:37.123 cpu : usr=93.97%, sys=5.28%, ctx=117, majf=0, minf=1636 00:44:37.123 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:37.123 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:37.123 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:37.123 issued rwts: total=1706,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:37.123 latency : target=0, window=0, percentile=100.00%, depth=3 00:44:37.123 filename0: (groupid=0, jobs=1): err= 0: pid=3215226: Mon Nov 18 12:13:01 2024 00:44:37.123 read: IOPS=168, BW=21.0MiB/s (22.1MB/s)(211MiB/10044msec) 00:44:37.123 slat (nsec): min=4927, max=44874, avg=21746.42, stdev=2515.66 00:44:37.123 clat (usec): min=14643, max=58041, avg=17778.77, stdev=1981.21 00:44:37.123 lat (usec): min=14665, max=58066, avg=17800.52, stdev=1981.45 00:44:37.123 clat percentiles (usec): 00:44:37.123 | 1.00th=[15533], 5.00th=[16188], 10.00th=[16581], 20.00th=[16909], 00:44:37.123 | 30.00th=[17171], 40.00th=[17171], 50.00th=[17433], 60.00th=[17695], 00:44:37.123 | 70.00th=[17957], 80.00th=[18220], 90.00th=[18744], 95.00th=[20055], 00:44:37.123 | 99.00th=[24773], 
99.50th=[25297], 99.90th=[54264], 99.95th=[57934], 00:44:37.123 | 99.99th=[57934] 00:44:37.123 bw ( KiB/s): min=16384, max=22272, per=32.74%, avg=21606.40, stdev=1285.11, samples=20 00:44:37.123 iops : min= 128, max= 174, avg=168.80, stdev=10.04, samples=20 00:44:37.123 lat (msec) : 20=94.97%, 50=4.91%, 100=0.12% 00:44:37.123 cpu : usr=94.29%, sys=5.12%, ctx=13, majf=0, minf=1632 00:44:37.123 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:37.123 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:37.123 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:37.123 issued rwts: total=1690,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:37.123 latency : target=0, window=0, percentile=100.00%, depth=3 00:44:37.123 00:44:37.123 Run status group 0 (all jobs): 00:44:37.124 READ: bw=64.4MiB/s (67.6MB/s), 21.0MiB/s-22.2MiB/s (22.1MB/s-23.3MB/s), io=648MiB (679MB), run=10044-10047msec 00:44:37.124 ----------------------------------------------------- 00:44:37.124 Suppressions used: 00:44:37.124 count bytes template 00:44:37.124 5 44 /usr/src/fio/parse.c 00:44:37.124 1 8 libtcmalloc_minimal.so 00:44:37.124 1 904 libcrypto.so 00:44:37.124 ----------------------------------------------------- 00:44:37.124 00:44:37.124 12:13:02 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:44:37.124 12:13:02 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:44:37.124 12:13:02 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:44:37.124 12:13:02 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:44:37.124 12:13:02 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:44:37.124 12:13:02 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:44:37.124 12:13:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:37.124 12:13:02 
nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:44:37.124 12:13:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:37.124 12:13:02 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:44:37.124 12:13:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:37.124 12:13:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:44:37.124 12:13:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:37.124 00:44:37.124 real 0m12.315s 00:44:37.124 user 0m30.387s 00:44:37.124 sys 0m2.086s 00:44:37.124 12:13:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:37.124 12:13:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:44:37.124 ************************************ 00:44:37.124 END TEST fio_dif_digest 00:44:37.124 ************************************ 00:44:37.124 12:13:02 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:44:37.124 12:13:02 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:44:37.124 12:13:02 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:44:37.124 12:13:02 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:44:37.124 12:13:02 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:44:37.124 12:13:02 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:44:37.124 12:13:02 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:44:37.124 12:13:02 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:44:37.124 rmmod nvme_tcp 00:44:37.124 rmmod nvme_fabrics 00:44:37.124 rmmod nvme_keyring 00:44:37.124 12:13:02 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:44:37.124 12:13:02 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:44:37.124 12:13:02 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:44:37.124 12:13:02 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 3207698 ']' 00:44:37.124 12:13:02 nvmf_dif -- 
nvmf/common.sh@518 -- # killprocess 3207698 00:44:37.124 12:13:02 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 3207698 ']' 00:44:37.124 12:13:02 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 3207698 00:44:37.124 12:13:02 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:44:37.124 12:13:02 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:37.124 12:13:02 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3207698 00:44:37.124 12:13:02 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:44:37.124 12:13:02 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:44:37.124 12:13:02 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3207698' 00:44:37.124 killing process with pid 3207698 00:44:37.124 12:13:02 nvmf_dif -- common/autotest_common.sh@973 -- # kill 3207698 00:44:37.124 12:13:02 nvmf_dif -- common/autotest_common.sh@978 -- # wait 3207698 00:44:38.498 12:13:03 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:44:38.498 12:13:03 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:44:39.431 Waiting for block devices as requested 00:44:39.431 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:44:39.431 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:44:39.431 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:44:39.689 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:44:39.689 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:44:39.689 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:44:39.689 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:44:39.948 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:44:39.948 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:44:39.948 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:44:39.948 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:44:40.206 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:44:40.206 0000:80:04.4 (8086 0e24): 
vfio-pci -> ioatdma 00:44:40.206 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:44:40.465 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:44:40.465 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:44:40.465 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:44:40.724 12:13:06 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:44:40.724 12:13:06 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:44:40.724 12:13:06 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:44:40.724 12:13:06 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:44:40.724 12:13:06 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:44:40.724 12:13:06 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:44:40.724 12:13:06 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:44:40.724 12:13:06 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:44:40.724 12:13:06 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:40.724 12:13:06 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:44:40.724 12:13:06 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:42.627 12:13:08 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:44:42.627 00:44:42.627 real 1m16.526s 00:44:42.627 user 6m46.498s 00:44:42.627 sys 0m18.533s 00:44:42.627 12:13:08 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:42.627 12:13:08 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:44:42.627 ************************************ 00:44:42.627 END TEST nvmf_dif 00:44:42.627 ************************************ 00:44:42.627 12:13:08 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:44:42.627 12:13:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:44:42.627 12:13:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:42.627 12:13:08 -- 
common/autotest_common.sh@10 -- # set +x 00:44:42.627 ************************************ 00:44:42.627 START TEST nvmf_abort_qd_sizes 00:44:42.627 ************************************ 00:44:42.627 12:13:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:44:42.627 * Looking for test storage... 00:44:42.627 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:44:42.627 12:13:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:44:42.627 12:13:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:44:42.627 12:13:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:44:42.886 12:13:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:44:42.886 12:13:08 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:42.886 12:13:08 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:42.886 12:13:08 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:42.886 12:13:08 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:44:42.886 12:13:08 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:44:42.886 12:13:08 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:44:42.886 12:13:08 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:44:42.886 12:13:08 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:44:42.886 12:13:08 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:44:42.886 12:13:08 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:44:42.886 12:13:08 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:44:42.886 12:13:08 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:44:42.886 12:13:08 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 
00:44:42.886 12:13:08 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:42.886 12:13:08 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:44:42.886 12:13:08 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:44:42.886 12:13:08 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:44:42.886 12:13:08 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:42.886 12:13:08 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:44:42.886 12:13:08 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:44:42.886 12:13:08 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:44:42.886 12:13:08 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:44:42.886 12:13:08 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:42.886 12:13:08 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:44:42.886 12:13:08 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:44:42.886 12:13:08 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:42.886 12:13:08 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:42.886 12:13:08 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:44:42.886 12:13:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:42.886 12:13:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:44:42.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:42.886 --rc genhtml_branch_coverage=1 00:44:42.886 --rc genhtml_function_coverage=1 00:44:42.886 --rc genhtml_legend=1 00:44:42.886 --rc geninfo_all_blocks=1 00:44:42.886 --rc geninfo_unexecuted_blocks=1 00:44:42.886 00:44:42.886 ' 00:44:42.886 12:13:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:44:42.886 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:42.886 --rc genhtml_branch_coverage=1 00:44:42.886 --rc genhtml_function_coverage=1 00:44:42.886 --rc genhtml_legend=1 00:44:42.886 --rc geninfo_all_blocks=1 00:44:42.886 --rc geninfo_unexecuted_blocks=1 00:44:42.886 00:44:42.886 ' 00:44:42.886 12:13:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:44:42.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:42.886 --rc genhtml_branch_coverage=1 00:44:42.886 --rc genhtml_function_coverage=1 00:44:42.886 --rc genhtml_legend=1 00:44:42.886 --rc geninfo_all_blocks=1 00:44:42.886 --rc geninfo_unexecuted_blocks=1 00:44:42.886 00:44:42.886 ' 00:44:42.886 12:13:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:44:42.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:42.886 --rc genhtml_branch_coverage=1 00:44:42.886 --rc genhtml_function_coverage=1 00:44:42.886 --rc genhtml_legend=1 00:44:42.886 --rc geninfo_all_blocks=1 00:44:42.886 --rc geninfo_unexecuted_blocks=1 00:44:42.886 00:44:42.886 ' 00:44:42.886 12:13:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:44:42.886 12:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:44:42.886 12:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:42.886 12:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:42.886 12:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:42.886 12:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:42.886 12:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:44:42.886 12:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:42.886 12:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:42.887 12:13:08 
nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:42.887 12:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:42.887 12:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:44:42.887 12:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:44:42.887 12:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:44:42.887 12:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:42.887 12:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:42.887 12:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:44:42.887 12:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:42.887 12:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:42.887 12:13:08 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:44:42.887 12:13:08 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:42.887 12:13:08 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:42.887 12:13:08 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:42.887 12:13:08 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:42.887 12:13:08 nvmf_abort_qd_sizes -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:42.887 12:13:08 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:42.887 12:13:08 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:44:42.887 12:13:08 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:42.887 12:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:44:42.887 12:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:44:42.887 12:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:44:42.887 12:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:42.887 12:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:42.887 12:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:44:42.887 12:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:44:42.887 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:44:42.887 12:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:44:42.887 12:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:44:42.887 12:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:44:42.887 12:13:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:44:42.887 12:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:44:42.887 12:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:44:42.887 12:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:44:42.887 12:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:44:42.887 12:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:44:42.887 12:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:42.887 12:13:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:44:42.887 12:13:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:42.887 12:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:44:42.887 12:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:44:42.887 12:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:44:42.887 12:13:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:44:44.807 12:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:44:44.807 12:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:44:44.807 12:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:44:44.807 12:13:10 
nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:44:44.807 12:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:44:44.807 12:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:44:44.807 12:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:44:44.807 12:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:44:44.807 12:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:44:44.807 12:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:44:44.807 12:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:44:44.807 12:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:44:44.807 12:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:44:44.807 12:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:44:44.807 12:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:44:44.807 12:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:44:44.807 12:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:44:44.807 12:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:44:44.807 12:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:44:44.807 12:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:44:44.807 12:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:44:44.807 12:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:44:44.807 12:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:44:44.807 12:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 
00:44:44.807 12:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:44:44.807 12:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:44:44.807 12:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:44:44.807 12:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:44:44.807 12:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:44:44.807 12:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:44:44.807 12:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:44:44.807 12:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:44:44.807 12:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:44:44.807 12:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:44:44.807 12:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:44:44.807 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:44:44.807 12:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:44:44.807 12:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:44:44.807 12:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:44.807 12:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:44.807 12:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:44:44.807 12:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:44:44.807 12:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:44:44.807 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:44:44.807 12:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:44:44.807 12:13:10 nvmf_abort_qd_sizes -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:44:44.807 12:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:44.807 12:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:44.807 12:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:44:44.807 12:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:44:44.807 12:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:44:44.807 12:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:44:44.807 12:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:44:44.807 12:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:44.807 12:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:44:44.807 12:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:44.807 12:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:44:44.807 12:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:44:44.807 12:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:44.807 12:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:44:44.807 Found net devices under 0000:0a:00.0: cvl_0_0 00:44:44.807 12:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:44:44.807 12:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:44:44.807 12:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:44.807 12:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:44:44.807 12:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:44.807 12:13:10 nvmf_abort_qd_sizes 
-- nvmf/common.sh@418 -- # [[ up == up ]] 00:44:44.807 12:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:44:44.807 12:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:44.807 12:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:44:44.807 Found net devices under 0000:0a:00.1: cvl_0_1 00:44:44.807 12:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:44:44.807 12:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:44:44.807 12:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:44:44.807 12:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:44:44.807 12:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:44:44.807 12:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:44:44.807 12:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:44:44.807 12:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:44:44.807 12:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:44:44.807 12:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:44:44.807 12:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:44:44.807 12:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:44:44.807 12:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:44:44.807 12:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:44:44.807 12:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:44:44.807 12:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:44:44.807 12:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:44:44.807 12:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:44:44.807 12:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:44:45.065 12:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:44:45.065 12:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:44:45.065 12:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:44:45.065 12:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:44:45.065 12:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:44:45.065 12:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:44:45.065 12:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:44:45.065 12:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:44:45.065 12:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:44:45.065 12:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:44:45.065 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:44:45.065 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.240 ms 00:44:45.065 00:44:45.065 --- 10.0.0.2 ping statistics --- 00:44:45.065 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:45.065 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:44:45.065 12:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:44:45.065 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:44:45.065 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:44:45.065 00:44:45.065 --- 10.0.0.1 ping statistics --- 00:44:45.065 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:45.065 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:44:45.065 12:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:44:45.065 12:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:44:45.065 12:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:44:45.065 12:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:44:46.441 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:44:46.441 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:44:46.441 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:44:46.441 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:44:46.441 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:44:46.441 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:44:46.441 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:44:46.441 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:44:46.441 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:44:46.441 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:44:46.441 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:44:46.441 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:44:46.441 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:44:46.441 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:44:46.441 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:44:46.441 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:44:47.378 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:44:47.378 12:13:13 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:44:47.378 12:13:13 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:44:47.378 12:13:13 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:44:47.378 12:13:13 
nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:44:47.378 12:13:13 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:44:47.378 12:13:13 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:44:47.378 12:13:13 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:44:47.378 12:13:13 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:44:47.378 12:13:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:44:47.378 12:13:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:44:47.378 12:13:13 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=3220274 00:44:47.378 12:13:13 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:44:47.378 12:13:13 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 3220274 00:44:47.378 12:13:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 3220274 ']' 00:44:47.378 12:13:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:47.378 12:13:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:47.378 12:13:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:47.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:47.378 12:13:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:47.378 12:13:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:44:47.378 [2024-11-18 12:13:13.190755] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:44:47.378 [2024-11-18 12:13:13.190901] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:44:47.637 [2024-11-18 12:13:13.343938] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:44:47.637 [2024-11-18 12:13:13.488276] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:44:47.637 [2024-11-18 12:13:13.488375] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:44:47.637 [2024-11-18 12:13:13.488403] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:44:47.637 [2024-11-18 12:13:13.488428] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:44:47.637 [2024-11-18 12:13:13.488448] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:44:47.637 [2024-11-18 12:13:13.491334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:44:47.637 [2024-11-18 12:13:13.491405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:44:47.637 [2024-11-18 12:13:13.491522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:47.637 [2024-11-18 12:13:13.491532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:44:48.571 12:13:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:48.571 12:13:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:44:48.571 12:13:14 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:44:48.571 12:13:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:44:48.571 12:13:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:44:48.571 12:13:14 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:44:48.571 12:13:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:44:48.571 12:13:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:44:48.571 12:13:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:44:48.571 12:13:14 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:44:48.571 12:13:14 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:44:48.572 12:13:14 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:88:00.0 ]] 00:44:48.572 12:13:14 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:44:48.572 12:13:14 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:44:48.572 12:13:14 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 
00:44:48.572 12:13:14 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:44:48.572 12:13:14 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:44:48.572 12:13:14 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:44:48.572 12:13:14 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:44:48.572 12:13:14 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:88:00.0 00:44:48.572 12:13:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:44:48.572 12:13:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:44:48.572 12:13:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:44:48.572 12:13:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:44:48.572 12:13:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:48.572 12:13:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:44:48.572 ************************************ 00:44:48.572 START TEST spdk_target_abort 00:44:48.572 ************************************ 00:44:48.572 12:13:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:44:48.572 12:13:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:44:48.572 12:13:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:44:48.572 12:13:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:48.572 12:13:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:51.853 spdk_targetn1 00:44:51.853 12:13:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:51.853 12:13:17 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:44:51.853 12:13:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:51.853 12:13:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:51.853 [2024-11-18 12:13:17.139485] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:51.853 12:13:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:51.853 12:13:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:44:51.853 12:13:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:51.853 12:13:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:51.853 12:13:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:51.853 12:13:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:44:51.853 12:13:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:51.853 12:13:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:51.853 12:13:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:51.853 12:13:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:44:51.853 12:13:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:51.853 12:13:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:51.853 [2024-11-18 12:13:17.186276] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:51.853 12:13:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:51.853 12:13:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:44:51.853 12:13:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:44:51.853 12:13:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:44:51.853 12:13:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:44:51.853 12:13:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:44:51.853 12:13:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:44:51.853 12:13:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:44:51.853 12:13:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:44:51.853 12:13:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:44:51.853 12:13:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:51.853 12:13:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:44:51.853 12:13:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:51.853 12:13:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:44:51.853 12:13:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:51.853 12:13:17 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:44:51.853 12:13:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:51.853 12:13:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:44:51.853 12:13:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:51.853 12:13:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:51.853 12:13:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:44:51.853 12:13:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:55.133 Initializing NVMe Controllers 00:44:55.133 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:44:55.133 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:44:55.133 Initialization complete. Launching workers. 
00:44:55.133 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8760, failed: 0 00:44:55.133 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1205, failed to submit 7555 00:44:55.133 success 687, unsuccessful 518, failed 0 00:44:55.133 12:13:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:44:55.133 12:13:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:58.413 Initializing NVMe Controllers 00:44:58.413 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:44:58.413 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:44:58.413 Initialization complete. Launching workers. 00:44:58.413 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8524, failed: 0 00:44:58.413 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1284, failed to submit 7240 00:44:58.413 success 289, unsuccessful 995, failed 0 00:44:58.413 12:13:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:44:58.413 12:13:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:45:01.696 Initializing NVMe Controllers 00:45:01.696 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:45:01.696 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:45:01.696 Initialization complete. Launching workers. 
00:45:01.696 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 27478, failed: 0 00:45:01.696 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2686, failed to submit 24792 00:45:01.696 success 197, unsuccessful 2489, failed 0 00:45:01.696 12:13:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:45:01.696 12:13:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:01.696 12:13:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:45:01.696 12:13:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:01.696 12:13:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:45:01.696 12:13:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:01.696 12:13:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:45:03.071 12:13:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:03.071 12:13:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 3220274 00:45:03.071 12:13:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 3220274 ']' 00:45:03.071 12:13:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 3220274 00:45:03.071 12:13:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:45:03.071 12:13:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:45:03.071 12:13:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3220274 00:45:03.071 12:13:28 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:45:03.071 12:13:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:45:03.071 12:13:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3220274' 00:45:03.071 killing process with pid 3220274 00:45:03.071 12:13:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 3220274 00:45:03.071 12:13:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 3220274 00:45:04.006 00:45:04.006 real 0m15.414s 00:45:04.006 user 1m0.191s 00:45:04.006 sys 0m2.935s 00:45:04.006 12:13:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:45:04.006 12:13:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:45:04.006 ************************************ 00:45:04.006 END TEST spdk_target_abort 00:45:04.006 ************************************ 00:45:04.006 12:13:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:45:04.006 12:13:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:45:04.006 12:13:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:45:04.006 12:13:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:45:04.006 ************************************ 00:45:04.006 START TEST kernel_target_abort 00:45:04.006 ************************************ 00:45:04.006 12:13:29 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:45:04.006 12:13:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:45:04.006 12:13:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:45:04.006 12:13:29 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:45:04.006 12:13:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:45:04.006 12:13:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:45:04.006 12:13:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:45:04.006 12:13:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:45:04.006 12:13:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:45:04.006 12:13:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:45:04.006 12:13:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:45:04.006 12:13:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:45:04.006 12:13:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:45:04.006 12:13:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:45:04.006 12:13:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:45:04.006 12:13:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:45:04.006 12:13:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:45:04.006 12:13:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:45:04.006 12:13:29 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@667 -- # local block nvme 00:45:04.006 12:13:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:45:04.006 12:13:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:45:04.006 12:13:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:45:04.006 12:13:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:45:05.029 Waiting for block devices as requested 00:45:05.029 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:45:05.312 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:45:05.312 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:45:05.312 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:45:05.571 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:45:05.571 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:45:05.571 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:45:05.571 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:45:05.571 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:45:05.830 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:45:05.830 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:45:05.830 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:45:06.089 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:45:06.089 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:45:06.089 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:45:06.089 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:45:06.348 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:45:06.608 12:13:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:45:06.608 12:13:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:45:06.608 12:13:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:45:06.608 12:13:32 
nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:45:06.608 12:13:32 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:45:06.608 12:13:32 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:45:06.608 12:13:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:45:06.608 12:13:32 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:45:06.608 12:13:32 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:45:06.608 No valid GPT data, bailing 00:45:06.867 12:13:32 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:45:06.867 12:13:32 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:45:06.867 12:13:32 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:45:06.867 12:13:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:45:06.867 12:13:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:45:06.867 12:13:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:45:06.867 12:13:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:45:06.867 12:13:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:45:06.867 12:13:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:45:06.867 12:13:32 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@695 -- # echo 1 00:45:06.867 12:13:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:45:06.867 12:13:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:45:06.867 12:13:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:45:06.867 12:13:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:45:06.867 12:13:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:45:06.867 12:13:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:45:06.867 12:13:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:45:06.867 12:13:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:45:06.867 00:45:06.867 Discovery Log Number of Records 2, Generation counter 2 00:45:06.867 =====Discovery Log Entry 0====== 00:45:06.867 trtype: tcp 00:45:06.867 adrfam: ipv4 00:45:06.867 subtype: current discovery subsystem 00:45:06.867 treq: not specified, sq flow control disable supported 00:45:06.867 portid: 1 00:45:06.867 trsvcid: 4420 00:45:06.867 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:45:06.867 traddr: 10.0.0.1 00:45:06.867 eflags: none 00:45:06.867 sectype: none 00:45:06.867 =====Discovery Log Entry 1====== 00:45:06.867 trtype: tcp 00:45:06.867 adrfam: ipv4 00:45:06.867 subtype: nvme subsystem 00:45:06.867 treq: not specified, sq flow control disable supported 00:45:06.867 portid: 1 00:45:06.867 trsvcid: 4420 00:45:06.867 subnqn: nqn.2016-06.io.spdk:testnqn 00:45:06.867 traddr: 10.0.0.1 00:45:06.867 eflags: none 00:45:06.867 sectype: none 00:45:06.867 12:13:32 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:45:06.867 12:13:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:45:06.867 12:13:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:45:06.867 12:13:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:45:06.867 12:13:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:45:06.867 12:13:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:45:06.867 12:13:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:45:06.867 12:13:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:45:06.867 12:13:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:45:06.867 12:13:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:45:06.867 12:13:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:45:06.867 12:13:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:45:06.867 12:13:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:45:06.867 12:13:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:45:06.867 12:13:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:45:06.867 12:13:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for 
r in trtype adrfam traddr trsvcid subnqn 00:45:06.867 12:13:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:45:06.867 12:13:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:45:06.867 12:13:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:45:06.867 12:13:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:45:06.867 12:13:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:45:10.154 Initializing NVMe Controllers 00:45:10.154 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:45:10.154 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:45:10.154 Initialization complete. Launching workers. 
00:45:10.154 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 38014, failed: 0 00:45:10.154 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 38014, failed to submit 0 00:45:10.154 success 0, unsuccessful 38014, failed 0 00:45:10.154 12:13:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:45:10.154 12:13:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:45:13.438 Initializing NVMe Controllers 00:45:13.438 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:45:13.438 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:45:13.438 Initialization complete. Launching workers. 00:45:13.438 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 65919, failed: 0 00:45:13.438 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 16630, failed to submit 49289 00:45:13.438 success 0, unsuccessful 16630, failed 0 00:45:13.438 12:13:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:45:13.438 12:13:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:45:16.727 Initializing NVMe Controllers 00:45:16.727 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:45:16.727 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:45:16.727 Initialization complete. Launching workers. 
00:45:16.727 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 62025, failed: 0 00:45:16.727 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 15490, failed to submit 46535 00:45:16.727 success 0, unsuccessful 15490, failed 0 00:45:16.727 12:13:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:45:16.727 12:13:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:45:16.727 12:13:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:45:16.727 12:13:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:45:16.727 12:13:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:45:16.727 12:13:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:45:16.727 12:13:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:45:16.727 12:13:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:45:16.727 12:13:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:45:16.727 12:13:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:45:17.664 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:45:17.664 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:45:17.664 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:45:17.664 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:45:17.664 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:45:17.664 
0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:45:17.664 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:45:17.664 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:45:17.664 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:45:17.664 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:45:17.664 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:45:17.664 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:45:17.664 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:45:17.664 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:45:17.664 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:45:17.664 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:45:18.600 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:45:18.858 00:45:18.858 real 0m14.840s 00:45:18.858 user 0m7.305s 00:45:18.858 sys 0m3.404s 00:45:18.858 12:13:44 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:45:18.858 12:13:44 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:45:18.858 ************************************ 00:45:18.858 END TEST kernel_target_abort 00:45:18.858 ************************************ 00:45:18.858 12:13:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:45:18.859 12:13:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:45:18.859 12:13:44 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:45:18.859 12:13:44 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:45:18.859 12:13:44 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:45:18.859 12:13:44 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:45:18.859 12:13:44 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:45:18.859 12:13:44 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:45:18.859 rmmod nvme_tcp 00:45:18.859 rmmod nvme_fabrics 00:45:18.859 rmmod nvme_keyring 00:45:18.859 12:13:44 nvmf_abort_qd_sizes -- nvmf/common.sh@127 
-- # modprobe -v -r nvme-fabrics 00:45:18.859 12:13:44 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:45:18.859 12:13:44 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:45:18.859 12:13:44 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 3220274 ']' 00:45:18.859 12:13:44 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 3220274 00:45:18.859 12:13:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 3220274 ']' 00:45:18.859 12:13:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 3220274 00:45:18.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3220274) - No such process 00:45:18.859 12:13:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 3220274 is not found' 00:45:18.859 Process with pid 3220274 is not found 00:45:18.859 12:13:44 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:45:18.859 12:13:44 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:45:19.793 Waiting for block devices as requested 00:45:20.052 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:45:20.052 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:45:20.052 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:45:20.310 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:45:20.310 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:45:20.310 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:45:20.310 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:45:20.572 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:45:20.572 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:45:20.572 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:45:20.572 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:45:20.831 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:45:20.831 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:45:20.831 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:45:20.831 
0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:45:20.831 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:45:21.091 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:45:21.091 12:13:46 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:45:21.091 12:13:46 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:45:21.091 12:13:46 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:45:21.091 12:13:46 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:45:21.091 12:13:46 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:45:21.091 12:13:46 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:45:21.091 12:13:46 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:45:21.091 12:13:46 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:45:21.091 12:13:46 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:21.091 12:13:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:45:21.091 12:13:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:23.626 12:13:48 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:45:23.626 00:45:23.626 real 0m40.497s 00:45:23.626 user 1m9.936s 00:45:23.626 sys 0m9.838s 00:45:23.626 12:13:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:45:23.626 12:13:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:45:23.626 ************************************ 00:45:23.626 END TEST nvmf_abort_qd_sizes 00:45:23.626 ************************************ 00:45:23.626 12:13:48 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:45:23.626 12:13:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:45:23.626 12:13:48 -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:45:23.626 12:13:48 -- common/autotest_common.sh@10 -- # set +x 00:45:23.626 ************************************ 00:45:23.626 START TEST keyring_file 00:45:23.626 ************************************ 00:45:23.626 12:13:48 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:45:23.626 * Looking for test storage... 00:45:23.626 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:45:23.626 12:13:49 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:45:23.626 12:13:49 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:45:23.626 12:13:49 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:45:23.626 12:13:49 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:45:23.626 12:13:49 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:45:23.626 12:13:49 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:45:23.626 12:13:49 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:45:23.626 12:13:49 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:45:23.627 12:13:49 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:45:23.627 12:13:49 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:45:23.627 12:13:49 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:45:23.627 12:13:49 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:45:23.627 12:13:49 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:45:23.627 12:13:49 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:45:23.627 12:13:49 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:45:23.627 12:13:49 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:45:23.627 12:13:49 keyring_file -- scripts/common.sh@345 -- # : 1 00:45:23.627 12:13:49 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:45:23.627 12:13:49 keyring_file -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:45:23.627 12:13:49 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:45:23.627 12:13:49 keyring_file -- scripts/common.sh@353 -- # local d=1 00:45:23.627 12:13:49 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:45:23.627 12:13:49 keyring_file -- scripts/common.sh@355 -- # echo 1 00:45:23.627 12:13:49 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:45:23.627 12:13:49 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:45:23.627 12:13:49 keyring_file -- scripts/common.sh@353 -- # local d=2 00:45:23.627 12:13:49 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:45:23.627 12:13:49 keyring_file -- scripts/common.sh@355 -- # echo 2 00:45:23.627 12:13:49 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:45:23.627 12:13:49 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:45:23.627 12:13:49 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:45:23.627 12:13:49 keyring_file -- scripts/common.sh@368 -- # return 0 00:45:23.627 12:13:49 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:45:23.627 12:13:49 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:45:23.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:23.627 --rc genhtml_branch_coverage=1 00:45:23.627 --rc genhtml_function_coverage=1 00:45:23.627 --rc genhtml_legend=1 00:45:23.627 --rc geninfo_all_blocks=1 00:45:23.627 --rc geninfo_unexecuted_blocks=1 00:45:23.627 00:45:23.627 ' 00:45:23.627 12:13:49 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:45:23.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:23.627 --rc genhtml_branch_coverage=1 00:45:23.627 --rc genhtml_function_coverage=1 00:45:23.627 --rc genhtml_legend=1 00:45:23.627 --rc geninfo_all_blocks=1 00:45:23.627 --rc 
geninfo_unexecuted_blocks=1 00:45:23.627 00:45:23.627 ' 00:45:23.627 12:13:49 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:45:23.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:23.627 --rc genhtml_branch_coverage=1 00:45:23.627 --rc genhtml_function_coverage=1 00:45:23.627 --rc genhtml_legend=1 00:45:23.627 --rc geninfo_all_blocks=1 00:45:23.627 --rc geninfo_unexecuted_blocks=1 00:45:23.627 00:45:23.627 ' 00:45:23.627 12:13:49 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:45:23.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:23.627 --rc genhtml_branch_coverage=1 00:45:23.627 --rc genhtml_function_coverage=1 00:45:23.627 --rc genhtml_legend=1 00:45:23.627 --rc geninfo_all_blocks=1 00:45:23.627 --rc geninfo_unexecuted_blocks=1 00:45:23.627 00:45:23.627 ' 00:45:23.627 12:13:49 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:45:23.627 12:13:49 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:45:23.627 12:13:49 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:45:23.627 12:13:49 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:45:23.627 12:13:49 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:45:23.627 12:13:49 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:45:23.627 12:13:49 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:45:23.627 12:13:49 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:45:23.627 12:13:49 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:45:23.627 12:13:49 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:45:23.627 12:13:49 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:45:23.627 12:13:49 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:45:23.627 12:13:49 keyring_file -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:45:23.627 12:13:49 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:45:23.627 12:13:49 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:45:23.627 12:13:49 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:45:23.627 12:13:49 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:45:23.627 12:13:49 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:45:23.627 12:13:49 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:45:23.627 12:13:49 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:45:23.627 12:13:49 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:45:23.627 12:13:49 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:23.627 12:13:49 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:23.627 12:13:49 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:23.627 12:13:49 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:23.627 12:13:49 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:23.627 12:13:49 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:23.627 12:13:49 keyring_file -- paths/export.sh@5 -- # export PATH 00:45:23.627 12:13:49 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:23.627 12:13:49 keyring_file -- nvmf/common.sh@51 -- # : 0 00:45:23.627 12:13:49 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:45:23.627 12:13:49 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:45:23.627 12:13:49 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:45:23.627 12:13:49 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:45:23.627 12:13:49 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:45:23.627 12:13:49 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:45:23.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:45:23.627 12:13:49 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:45:23.627 12:13:49 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:45:23.627 12:13:49 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:45:23.627 12:13:49 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:45:23.627 12:13:49 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:45:23.627 12:13:49 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:45:23.627 12:13:49 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:45:23.627 12:13:49 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:45:23.627 12:13:49 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:45:23.627 12:13:49 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:45:23.627 12:13:49 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:45:23.627 12:13:49 keyring_file -- keyring/common.sh@17 -- # name=key0 00:45:23.627 12:13:49 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:45:23.627 12:13:49 keyring_file -- keyring/common.sh@17 -- # digest=0 00:45:23.627 12:13:49 keyring_file -- keyring/common.sh@18 -- # mktemp 00:45:23.627 12:13:49 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.uJkoWNvkST 00:45:23.627 12:13:49 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:45:23.627 12:13:49 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:45:23.627 12:13:49 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:45:23.627 12:13:49 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:45:23.627 12:13:49 keyring_file -- nvmf/common.sh@732 
-- # key=00112233445566778899aabbccddeeff 00:45:23.627 12:13:49 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:45:23.627 12:13:49 keyring_file -- nvmf/common.sh@733 -- # python - 00:45:23.627 12:13:49 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.uJkoWNvkST 00:45:23.627 12:13:49 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.uJkoWNvkST 00:45:23.627 12:13:49 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.uJkoWNvkST 00:45:23.627 12:13:49 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:45:23.627 12:13:49 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:45:23.627 12:13:49 keyring_file -- keyring/common.sh@17 -- # name=key1 00:45:23.627 12:13:49 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:45:23.627 12:13:49 keyring_file -- keyring/common.sh@17 -- # digest=0 00:45:23.627 12:13:49 keyring_file -- keyring/common.sh@18 -- # mktemp 00:45:23.627 12:13:49 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.ZzOtOM1jxm 00:45:23.627 12:13:49 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:45:23.627 12:13:49 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:45:23.627 12:13:49 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:45:23.627 12:13:49 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:45:23.628 12:13:49 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:45:23.628 12:13:49 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:45:23.628 12:13:49 keyring_file -- nvmf/common.sh@733 -- # python - 00:45:23.628 12:13:49 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.ZzOtOM1jxm 00:45:23.628 12:13:49 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.ZzOtOM1jxm 00:45:23.628 12:13:49 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.ZzOtOM1jxm 
00:45:23.628 12:13:49 keyring_file -- keyring/file.sh@30 -- # tgtpid=3226511 00:45:23.628 12:13:49 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:45:23.628 12:13:49 keyring_file -- keyring/file.sh@32 -- # waitforlisten 3226511 00:45:23.628 12:13:49 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 3226511 ']' 00:45:23.628 12:13:49 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:23.628 12:13:49 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:45:23.628 12:13:49 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:23.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:23.628 12:13:49 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:45:23.628 12:13:49 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:45:23.628 [2024-11-18 12:13:49.351824] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:45:23.628 [2024-11-18 12:13:49.352006] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3226511 ] 00:45:23.628 [2024-11-18 12:13:49.498915] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:23.887 [2024-11-18 12:13:49.637252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:45:24.824 12:13:50 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:45:24.824 12:13:50 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:45:24.824 12:13:50 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:45:24.824 12:13:50 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:24.824 12:13:50 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:45:24.824 [2024-11-18 12:13:50.560861] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:45:24.824 null0 00:45:24.824 [2024-11-18 12:13:50.592871] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:45:24.824 [2024-11-18 12:13:50.593440] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:45:24.824 12:13:50 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:24.824 12:13:50 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:45:24.824 12:13:50 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:45:24.824 12:13:50 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:45:24.824 12:13:50 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:45:24.824 12:13:50 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
00:45:24.824 12:13:50 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:45:24.824 12:13:50 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:45:24.824 12:13:50 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:45:24.824 12:13:50 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:24.824 12:13:50 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:45:24.824 [2024-11-18 12:13:50.620934] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:45:24.824 request: 00:45:24.824 { 00:45:24.824 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:45:24.824 "secure_channel": false, 00:45:24.824 "listen_address": { 00:45:24.824 "trtype": "tcp", 00:45:24.824 "traddr": "127.0.0.1", 00:45:24.824 "trsvcid": "4420" 00:45:24.824 }, 00:45:24.824 "method": "nvmf_subsystem_add_listener", 00:45:24.824 "req_id": 1 00:45:24.824 } 00:45:24.824 Got JSON-RPC error response 00:45:24.824 response: 00:45:24.824 { 00:45:24.824 "code": -32602, 00:45:24.824 "message": "Invalid parameters" 00:45:24.825 } 00:45:24.825 12:13:50 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:45:24.825 12:13:50 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:45:24.825 12:13:50 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:45:24.825 12:13:50 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:45:24.825 12:13:50 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:45:24.825 12:13:50 keyring_file -- keyring/file.sh@47 -- # bperfpid=3226650 00:45:24.825 12:13:50 keyring_file -- keyring/file.sh@49 -- # waitforlisten 3226650 /var/tmp/bperf.sock 00:45:24.825 12:13:50 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 3226650 ']' 00:45:24.825 12:13:50 keyring_file -- keyring/file.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:45:24.825 12:13:50 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:45:24.825 12:13:50 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:45:24.825 12:13:50 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:45:24.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:45:24.825 12:13:50 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:45:24.825 12:13:50 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:45:24.825 [2024-11-18 12:13:50.708275] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:45:24.825 [2024-11-18 12:13:50.708445] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3226650 ] 00:45:25.084 [2024-11-18 12:13:50.842197] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:25.084 [2024-11-18 12:13:50.964975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:45:26.020 12:13:51 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:45:26.020 12:13:51 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:45:26.020 12:13:51 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.uJkoWNvkST 00:45:26.020 12:13:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.uJkoWNvkST 00:45:26.278 12:13:51 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 
/tmp/tmp.ZzOtOM1jxm 00:45:26.278 12:13:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.ZzOtOM1jxm 00:45:26.536 12:13:52 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:45:26.536 12:13:52 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:45:26.536 12:13:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:26.536 12:13:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:26.536 12:13:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:26.796 12:13:52 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.uJkoWNvkST == \/\t\m\p\/\t\m\p\.\u\J\k\o\W\N\v\k\S\T ]] 00:45:26.796 12:13:52 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:45:26.796 12:13:52 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:45:26.796 12:13:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:26.796 12:13:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:26.797 12:13:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:45:27.055 12:13:52 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.ZzOtOM1jxm == \/\t\m\p\/\t\m\p\.\Z\z\O\t\O\M\1\j\x\m ]] 00:45:27.056 12:13:52 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:45:27.056 12:13:52 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:45:27.056 12:13:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:27.056 12:13:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:27.056 12:13:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:27.056 
12:13:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:27.340 12:13:53 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:45:27.340 12:13:53 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:45:27.340 12:13:53 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:45:27.340 12:13:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:27.340 12:13:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:27.340 12:13:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:27.340 12:13:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:45:27.598 12:13:53 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:45:27.598 12:13:53 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:27.598 12:13:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:27.856 [2024-11-18 12:13:53.586962] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:45:27.856 nvme0n1 00:45:27.856 12:13:53 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:45:27.856 12:13:53 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:45:27.856 12:13:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:27.856 12:13:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:27.856 12:13:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
00:45:27.856 12:13:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:28.422 12:13:54 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:45:28.422 12:13:54 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:45:28.422 12:13:54 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:45:28.422 12:13:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:28.422 12:13:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:28.422 12:13:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:28.422 12:13:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:45:28.422 12:13:54 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:45:28.422 12:13:54 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:45:28.682 Running I/O for 1 seconds... 
00:45:29.619 6398.00 IOPS, 24.99 MiB/s 00:45:29.619 Latency(us) 00:45:29.619 [2024-11-18T11:13:55.504Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:29.619 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:45:29.619 nvme0n1 : 1.01 6448.94 25.19 0.00 0.00 19752.16 8883.77 31263.10 00:45:29.619 [2024-11-18T11:13:55.504Z] =================================================================================================================== 00:45:29.619 [2024-11-18T11:13:55.504Z] Total : 6448.94 25.19 0.00 0.00 19752.16 8883.77 31263.10 00:45:29.619 { 00:45:29.619 "results": [ 00:45:29.619 { 00:45:29.619 "job": "nvme0n1", 00:45:29.619 "core_mask": "0x2", 00:45:29.619 "workload": "randrw", 00:45:29.619 "percentage": 50, 00:45:29.619 "status": "finished", 00:45:29.619 "queue_depth": 128, 00:45:29.619 "io_size": 4096, 00:45:29.619 "runtime": 1.012104, 00:45:29.619 "iops": 6448.9420059598615, 00:45:29.619 "mibps": 25.19117971078071, 00:45:29.619 "io_failed": 0, 00:45:29.619 "io_timeout": 0, 00:45:29.619 "avg_latency_us": 19752.15598431586, 00:45:29.619 "min_latency_us": 8883.76888888889, 00:45:29.619 "max_latency_us": 31263.09925925926 00:45:29.619 } 00:45:29.619 ], 00:45:29.619 "core_count": 1 00:45:29.619 } 00:45:29.619 12:13:55 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:45:29.619 12:13:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:45:29.878 12:13:55 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:45:29.878 12:13:55 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:45:29.878 12:13:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:29.878 12:13:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:29.878 12:13:55 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:29.878 12:13:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:30.136 12:13:55 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:45:30.136 12:13:55 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:45:30.136 12:13:55 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:45:30.136 12:13:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:30.136 12:13:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:30.136 12:13:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:30.136 12:13:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:45:30.394 12:13:56 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:45:30.394 12:13:56 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:45:30.394 12:13:56 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:45:30.394 12:13:56 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:45:30.394 12:13:56 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:45:30.394 12:13:56 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:45:30.394 12:13:56 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:45:30.394 12:13:56 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:45:30.394 12:13:56 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:45:30.394 12:13:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:45:30.653 [2024-11-18 12:13:56.519250] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:45:30.653 [2024-11-18 12:13:56.519770] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7780 (107): Transport endpoint is not connected 00:45:30.653 [2024-11-18 12:13:56.520745] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7780 (9): Bad file descriptor 00:45:30.653 [2024-11-18 12:13:56.521739] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:45:30.653 [2024-11-18 12:13:56.521771] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:45:30.653 [2024-11-18 12:13:56.521809] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:45:30.653 [2024-11-18 12:13:56.521837] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:45:30.653 request: 00:45:30.653 { 00:45:30.653 "name": "nvme0", 00:45:30.653 "trtype": "tcp", 00:45:30.653 "traddr": "127.0.0.1", 00:45:30.653 "adrfam": "ipv4", 00:45:30.653 "trsvcid": "4420", 00:45:30.653 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:30.653 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:30.653 "prchk_reftag": false, 00:45:30.653 "prchk_guard": false, 00:45:30.653 "hdgst": false, 00:45:30.653 "ddgst": false, 00:45:30.653 "psk": "key1", 00:45:30.653 "allow_unrecognized_csi": false, 00:45:30.653 "method": "bdev_nvme_attach_controller", 00:45:30.653 "req_id": 1 00:45:30.653 } 00:45:30.653 Got JSON-RPC error response 00:45:30.653 response: 00:45:30.653 { 00:45:30.653 "code": -5, 00:45:30.653 "message": "Input/output error" 00:45:30.653 } 00:45:30.914 12:13:56 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:45:30.914 12:13:56 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:45:30.914 12:13:56 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:45:30.914 12:13:56 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:45:30.914 12:13:56 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:45:30.914 12:13:56 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:45:30.914 12:13:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:30.914 12:13:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:30.914 12:13:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:30.914 12:13:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:31.173 12:13:56 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:45:31.173 12:13:56 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:45:31.173 12:13:56 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:45:31.173 12:13:56 keyring_file -- keyring/common.sh@12 -- # jq -r 
.refcnt 00:45:31.173 12:13:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:31.173 12:13:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:45:31.173 12:13:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:31.431 12:13:57 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:45:31.431 12:13:57 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:45:31.431 12:13:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:45:31.689 12:13:57 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:45:31.689 12:13:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:45:31.947 12:13:57 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:45:31.947 12:13:57 keyring_file -- keyring/file.sh@78 -- # jq length 00:45:31.947 12:13:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:32.205 12:13:57 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:45:32.205 12:13:57 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.uJkoWNvkST 00:45:32.205 12:13:57 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.uJkoWNvkST 00:45:32.205 12:13:57 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:45:32.205 12:13:57 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.uJkoWNvkST 00:45:32.205 12:13:57 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:45:32.205 12:13:57 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:45:32.205 12:13:57 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:45:32.205 12:13:57 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:45:32.205 12:13:57 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.uJkoWNvkST 00:45:32.205 12:13:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.uJkoWNvkST 00:45:32.464 [2024-11-18 12:13:58.163674] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.uJkoWNvkST': 0100660 00:45:32.464 [2024-11-18 12:13:58.163730] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:45:32.464 request: 00:45:32.464 { 00:45:32.464 "name": "key0", 00:45:32.464 "path": "/tmp/tmp.uJkoWNvkST", 00:45:32.464 "method": "keyring_file_add_key", 00:45:32.464 "req_id": 1 00:45:32.464 } 00:45:32.464 Got JSON-RPC error response 00:45:32.464 response: 00:45:32.464 { 00:45:32.464 "code": -1, 00:45:32.464 "message": "Operation not permitted" 00:45:32.464 } 00:45:32.464 12:13:58 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:45:32.464 12:13:58 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:45:32.464 12:13:58 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:45:32.464 12:13:58 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:45:32.464 12:13:58 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.uJkoWNvkST 00:45:32.464 12:13:58 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.uJkoWNvkST 00:45:32.464 12:13:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.uJkoWNvkST 00:45:32.721 12:13:58 
keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.uJkoWNvkST 00:45:32.721 12:13:58 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:45:32.721 12:13:58 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:45:32.721 12:13:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:32.721 12:13:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:32.721 12:13:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:32.721 12:13:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:32.981 12:13:58 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:45:32.981 12:13:58 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:32.981 12:13:58 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:45:32.981 12:13:58 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:32.981 12:13:58 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:45:32.981 12:13:58 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:45:32.981 12:13:58 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:45:32.981 12:13:58 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:45:32.981 12:13:58 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:32.981 12:13:58 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:33.259 [2024-11-18 12:13:58.990024] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.uJkoWNvkST': No such file or directory 00:45:33.259 [2024-11-18 12:13:58.990082] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:45:33.259 [2024-11-18 12:13:58.990132] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:45:33.259 [2024-11-18 12:13:58.990156] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:45:33.259 [2024-11-18 12:13:58.990179] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:45:33.259 [2024-11-18 12:13:58.990201] bdev_nvme.c:6669:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:45:33.259 request: 00:45:33.259 { 00:45:33.259 "name": "nvme0", 00:45:33.259 "trtype": "tcp", 00:45:33.259 "traddr": "127.0.0.1", 00:45:33.259 "adrfam": "ipv4", 00:45:33.259 "trsvcid": "4420", 00:45:33.259 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:33.259 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:33.259 "prchk_reftag": false, 00:45:33.259 "prchk_guard": false, 00:45:33.259 "hdgst": false, 00:45:33.259 "ddgst": false, 00:45:33.259 "psk": "key0", 00:45:33.259 "allow_unrecognized_csi": false, 00:45:33.259 "method": "bdev_nvme_attach_controller", 00:45:33.259 "req_id": 1 00:45:33.259 } 00:45:33.259 Got JSON-RPC error response 00:45:33.259 response: 00:45:33.259 { 00:45:33.259 "code": -19, 00:45:33.259 "message": "No such device" 00:45:33.259 } 00:45:33.259 12:13:59 keyring_file -- common/autotest_common.sh@655 
-- # es=1 00:45:33.259 12:13:59 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:45:33.259 12:13:59 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:45:33.259 12:13:59 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:45:33.259 12:13:59 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:45:33.259 12:13:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:45:33.577 12:13:59 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:45:33.577 12:13:59 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:45:33.577 12:13:59 keyring_file -- keyring/common.sh@17 -- # name=key0 00:45:33.577 12:13:59 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:45:33.577 12:13:59 keyring_file -- keyring/common.sh@17 -- # digest=0 00:45:33.577 12:13:59 keyring_file -- keyring/common.sh@18 -- # mktemp 00:45:33.577 12:13:59 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.OtLP2N3U5g 00:45:33.577 12:13:59 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:45:33.577 12:13:59 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:45:33.577 12:13:59 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:45:33.577 12:13:59 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:45:33.577 12:13:59 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:45:33.577 12:13:59 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:45:33.577 12:13:59 keyring_file -- nvmf/common.sh@733 -- # python - 00:45:33.577 12:13:59 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.OtLP2N3U5g 00:45:33.577 12:13:59 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.OtLP2N3U5g 
00:45:33.577 12:13:59 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.OtLP2N3U5g 00:45:33.577 12:13:59 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.OtLP2N3U5g 00:45:33.577 12:13:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.OtLP2N3U5g 00:45:33.835 12:13:59 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:33.835 12:13:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:34.092 nvme0n1 00:45:34.092 12:13:59 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:45:34.092 12:13:59 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:45:34.092 12:13:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:34.092 12:13:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:34.092 12:13:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:34.092 12:13:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:34.351 12:14:00 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:45:34.351 12:14:00 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:45:34.351 12:14:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:45:34.917 12:14:00 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:45:34.917 12:14:00 keyring_file -- 
keyring/file.sh@102 -- # jq -r .removed 00:45:34.917 12:14:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:34.917 12:14:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:34.917 12:14:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:34.917 12:14:00 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:45:34.917 12:14:00 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:45:34.917 12:14:00 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:45:34.917 12:14:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:34.917 12:14:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:34.917 12:14:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:34.917 12:14:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:35.175 12:14:01 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:45:35.175 12:14:01 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:45:35.175 12:14:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:45:35.740 12:14:01 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:45:35.740 12:14:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:35.740 12:14:01 keyring_file -- keyring/file.sh@105 -- # jq length 00:45:35.740 12:14:01 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:45:35.741 12:14:01 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.OtLP2N3U5g 00:45:35.741 12:14:01 
keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.OtLP2N3U5g 00:45:35.998 12:14:01 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.ZzOtOM1jxm 00:45:35.998 12:14:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.ZzOtOM1jxm 00:45:36.568 12:14:02 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:36.568 12:14:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:36.827 nvme0n1 00:45:36.827 12:14:02 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:45:36.827 12:14:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:45:37.087 12:14:02 keyring_file -- keyring/file.sh@113 -- # config='{ 00:45:37.087 "subsystems": [ 00:45:37.087 { 00:45:37.087 "subsystem": "keyring", 00:45:37.087 "config": [ 00:45:37.087 { 00:45:37.087 "method": "keyring_file_add_key", 00:45:37.087 "params": { 00:45:37.087 "name": "key0", 00:45:37.087 "path": "/tmp/tmp.OtLP2N3U5g" 00:45:37.087 } 00:45:37.087 }, 00:45:37.087 { 00:45:37.087 "method": "keyring_file_add_key", 00:45:37.087 "params": { 00:45:37.087 "name": "key1", 00:45:37.087 "path": "/tmp/tmp.ZzOtOM1jxm" 00:45:37.087 } 00:45:37.087 } 00:45:37.087 ] 00:45:37.087 }, 00:45:37.087 { 00:45:37.087 "subsystem": "iobuf", 00:45:37.087 "config": [ 00:45:37.087 { 00:45:37.087 "method": "iobuf_set_options", 
00:45:37.087 "params": { 00:45:37.087 "small_pool_count": 8192, 00:45:37.087 "large_pool_count": 1024, 00:45:37.087 "small_bufsize": 8192, 00:45:37.087 "large_bufsize": 135168, 00:45:37.087 "enable_numa": false 00:45:37.087 } 00:45:37.087 } 00:45:37.087 ] 00:45:37.087 }, 00:45:37.087 { 00:45:37.087 "subsystem": "sock", 00:45:37.087 "config": [ 00:45:37.087 { 00:45:37.087 "method": "sock_set_default_impl", 00:45:37.087 "params": { 00:45:37.087 "impl_name": "posix" 00:45:37.087 } 00:45:37.087 }, 00:45:37.087 { 00:45:37.087 "method": "sock_impl_set_options", 00:45:37.087 "params": { 00:45:37.087 "impl_name": "ssl", 00:45:37.087 "recv_buf_size": 4096, 00:45:37.087 "send_buf_size": 4096, 00:45:37.087 "enable_recv_pipe": true, 00:45:37.087 "enable_quickack": false, 00:45:37.087 "enable_placement_id": 0, 00:45:37.087 "enable_zerocopy_send_server": true, 00:45:37.087 "enable_zerocopy_send_client": false, 00:45:37.087 "zerocopy_threshold": 0, 00:45:37.087 "tls_version": 0, 00:45:37.087 "enable_ktls": false 00:45:37.087 } 00:45:37.087 }, 00:45:37.087 { 00:45:37.087 "method": "sock_impl_set_options", 00:45:37.087 "params": { 00:45:37.087 "impl_name": "posix", 00:45:37.087 "recv_buf_size": 2097152, 00:45:37.087 "send_buf_size": 2097152, 00:45:37.087 "enable_recv_pipe": true, 00:45:37.087 "enable_quickack": false, 00:45:37.087 "enable_placement_id": 0, 00:45:37.087 "enable_zerocopy_send_server": true, 00:45:37.087 "enable_zerocopy_send_client": false, 00:45:37.087 "zerocopy_threshold": 0, 00:45:37.087 "tls_version": 0, 00:45:37.087 "enable_ktls": false 00:45:37.087 } 00:45:37.087 } 00:45:37.087 ] 00:45:37.087 }, 00:45:37.087 { 00:45:37.087 "subsystem": "vmd", 00:45:37.087 "config": [] 00:45:37.087 }, 00:45:37.087 { 00:45:37.087 "subsystem": "accel", 00:45:37.087 "config": [ 00:45:37.087 { 00:45:37.087 "method": "accel_set_options", 00:45:37.087 "params": { 00:45:37.087 "small_cache_size": 128, 00:45:37.087 "large_cache_size": 16, 00:45:37.087 "task_count": 2048, 00:45:37.087 
"sequence_count": 2048, 00:45:37.087 "buf_count": 2048 00:45:37.087 } 00:45:37.087 } 00:45:37.087 ] 00:45:37.087 }, 00:45:37.087 { 00:45:37.087 "subsystem": "bdev", 00:45:37.087 "config": [ 00:45:37.087 { 00:45:37.087 "method": "bdev_set_options", 00:45:37.087 "params": { 00:45:37.087 "bdev_io_pool_size": 65535, 00:45:37.087 "bdev_io_cache_size": 256, 00:45:37.087 "bdev_auto_examine": true, 00:45:37.087 "iobuf_small_cache_size": 128, 00:45:37.087 "iobuf_large_cache_size": 16 00:45:37.087 } 00:45:37.087 }, 00:45:37.087 { 00:45:37.087 "method": "bdev_raid_set_options", 00:45:37.087 "params": { 00:45:37.087 "process_window_size_kb": 1024, 00:45:37.087 "process_max_bandwidth_mb_sec": 0 00:45:37.087 } 00:45:37.087 }, 00:45:37.087 { 00:45:37.087 "method": "bdev_iscsi_set_options", 00:45:37.087 "params": { 00:45:37.087 "timeout_sec": 30 00:45:37.087 } 00:45:37.087 }, 00:45:37.087 { 00:45:37.087 "method": "bdev_nvme_set_options", 00:45:37.087 "params": { 00:45:37.087 "action_on_timeout": "none", 00:45:37.087 "timeout_us": 0, 00:45:37.087 "timeout_admin_us": 0, 00:45:37.087 "keep_alive_timeout_ms": 10000, 00:45:37.087 "arbitration_burst": 0, 00:45:37.087 "low_priority_weight": 0, 00:45:37.087 "medium_priority_weight": 0, 00:45:37.087 "high_priority_weight": 0, 00:45:37.087 "nvme_adminq_poll_period_us": 10000, 00:45:37.087 "nvme_ioq_poll_period_us": 0, 00:45:37.087 "io_queue_requests": 512, 00:45:37.087 "delay_cmd_submit": true, 00:45:37.087 "transport_retry_count": 4, 00:45:37.087 "bdev_retry_count": 3, 00:45:37.087 "transport_ack_timeout": 0, 00:45:37.087 "ctrlr_loss_timeout_sec": 0, 00:45:37.087 "reconnect_delay_sec": 0, 00:45:37.087 "fast_io_fail_timeout_sec": 0, 00:45:37.087 "disable_auto_failback": false, 00:45:37.087 "generate_uuids": false, 00:45:37.087 "transport_tos": 0, 00:45:37.087 "nvme_error_stat": false, 00:45:37.087 "rdma_srq_size": 0, 00:45:37.087 "io_path_stat": false, 00:45:37.087 "allow_accel_sequence": false, 00:45:37.088 "rdma_max_cq_size": 0, 
00:45:37.088 "rdma_cm_event_timeout_ms": 0, 00:45:37.088 "dhchap_digests": [ 00:45:37.088 "sha256", 00:45:37.088 "sha384", 00:45:37.088 "sha512" 00:45:37.088 ], 00:45:37.088 "dhchap_dhgroups": [ 00:45:37.088 "null", 00:45:37.088 "ffdhe2048", 00:45:37.088 "ffdhe3072", 00:45:37.088 "ffdhe4096", 00:45:37.088 "ffdhe6144", 00:45:37.088 "ffdhe8192" 00:45:37.088 ] 00:45:37.088 } 00:45:37.088 }, 00:45:37.088 { 00:45:37.088 "method": "bdev_nvme_attach_controller", 00:45:37.088 "params": { 00:45:37.088 "name": "nvme0", 00:45:37.088 "trtype": "TCP", 00:45:37.088 "adrfam": "IPv4", 00:45:37.088 "traddr": "127.0.0.1", 00:45:37.088 "trsvcid": "4420", 00:45:37.088 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:37.088 "prchk_reftag": false, 00:45:37.088 "prchk_guard": false, 00:45:37.088 "ctrlr_loss_timeout_sec": 0, 00:45:37.088 "reconnect_delay_sec": 0, 00:45:37.088 "fast_io_fail_timeout_sec": 0, 00:45:37.088 "psk": "key0", 00:45:37.088 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:37.088 "hdgst": false, 00:45:37.088 "ddgst": false, 00:45:37.088 "multipath": "multipath" 00:45:37.088 } 00:45:37.088 }, 00:45:37.088 { 00:45:37.088 "method": "bdev_nvme_set_hotplug", 00:45:37.088 "params": { 00:45:37.088 "period_us": 100000, 00:45:37.088 "enable": false 00:45:37.088 } 00:45:37.088 }, 00:45:37.088 { 00:45:37.088 "method": "bdev_wait_for_examine" 00:45:37.088 } 00:45:37.088 ] 00:45:37.088 }, 00:45:37.088 { 00:45:37.088 "subsystem": "nbd", 00:45:37.088 "config": [] 00:45:37.088 } 00:45:37.088 ] 00:45:37.088 }' 00:45:37.088 12:14:02 keyring_file -- keyring/file.sh@115 -- # killprocess 3226650 00:45:37.088 12:14:02 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 3226650 ']' 00:45:37.088 12:14:02 keyring_file -- common/autotest_common.sh@958 -- # kill -0 3226650 00:45:37.088 12:14:02 keyring_file -- common/autotest_common.sh@959 -- # uname 00:45:37.088 12:14:02 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:45:37.088 12:14:02 keyring_file -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3226650 00:45:37.088 12:14:02 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:45:37.088 12:14:02 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:45:37.088 12:14:02 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3226650' 00:45:37.088 killing process with pid 3226650 00:45:37.088 12:14:02 keyring_file -- common/autotest_common.sh@973 -- # kill 3226650 00:45:37.088 Received shutdown signal, test time was about 1.000000 seconds 00:45:37.088 00:45:37.088 Latency(us) 00:45:37.088 [2024-11-18T11:14:02.973Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:37.088 [2024-11-18T11:14:02.973Z] =================================================================================================================== 00:45:37.088 [2024-11-18T11:14:02.973Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:45:37.088 12:14:02 keyring_file -- common/autotest_common.sh@978 -- # wait 3226650 00:45:38.028 12:14:03 keyring_file -- keyring/file.sh@118 -- # bperfpid=3228251 00:45:38.028 12:14:03 keyring_file -- keyring/file.sh@120 -- # waitforlisten 3228251 /var/tmp/bperf.sock 00:45:38.028 12:14:03 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 3228251 ']' 00:45:38.028 12:14:03 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:45:38.028 12:14:03 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:45:38.028 12:14:03 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:45:38.028 12:14:03 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:45:38.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:45:38.028 12:14:03 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:45:38.028 "subsystems": [ 00:45:38.028 { 00:45:38.028 "subsystem": "keyring", 00:45:38.028 "config": [ 00:45:38.028 { 00:45:38.028 "method": "keyring_file_add_key", 00:45:38.028 "params": { 00:45:38.028 "name": "key0", 00:45:38.028 "path": "/tmp/tmp.OtLP2N3U5g" 00:45:38.028 } 00:45:38.028 }, 00:45:38.028 { 00:45:38.028 "method": "keyring_file_add_key", 00:45:38.028 "params": { 00:45:38.028 "name": "key1", 00:45:38.028 "path": "/tmp/tmp.ZzOtOM1jxm" 00:45:38.028 } 00:45:38.028 } 00:45:38.028 ] 00:45:38.028 }, 00:45:38.028 { 00:45:38.028 "subsystem": "iobuf", 00:45:38.028 "config": [ 00:45:38.028 { 00:45:38.028 "method": "iobuf_set_options", 00:45:38.028 "params": { 00:45:38.028 "small_pool_count": 8192, 00:45:38.028 "large_pool_count": 1024, 00:45:38.028 "small_bufsize": 8192, 00:45:38.028 "large_bufsize": 135168, 00:45:38.028 "enable_numa": false 00:45:38.028 } 00:45:38.028 } 00:45:38.028 ] 00:45:38.028 }, 00:45:38.028 { 00:45:38.028 "subsystem": "sock", 00:45:38.028 "config": [ 00:45:38.028 { 00:45:38.028 "method": "sock_set_default_impl", 00:45:38.028 "params": { 00:45:38.028 "impl_name": "posix" 00:45:38.028 } 00:45:38.028 }, 00:45:38.028 { 00:45:38.028 "method": "sock_impl_set_options", 00:45:38.028 "params": { 00:45:38.028 "impl_name": "ssl", 00:45:38.028 "recv_buf_size": 4096, 00:45:38.028 "send_buf_size": 4096, 00:45:38.028 "enable_recv_pipe": true, 00:45:38.028 "enable_quickack": false, 00:45:38.028 "enable_placement_id": 0, 00:45:38.028 "enable_zerocopy_send_server": true, 00:45:38.028 "enable_zerocopy_send_client": false, 00:45:38.028 "zerocopy_threshold": 0, 00:45:38.028 "tls_version": 0, 00:45:38.028 "enable_ktls": false 00:45:38.028 } 00:45:38.028 }, 00:45:38.028 { 00:45:38.028 "method": "sock_impl_set_options", 00:45:38.028 "params": { 00:45:38.028 "impl_name": "posix", 
00:45:38.028 "recv_buf_size": 2097152, 00:45:38.028 "send_buf_size": 2097152, 00:45:38.028 "enable_recv_pipe": true, 00:45:38.028 "enable_quickack": false, 00:45:38.028 "enable_placement_id": 0, 00:45:38.028 "enable_zerocopy_send_server": true, 00:45:38.028 "enable_zerocopy_send_client": false, 00:45:38.028 "zerocopy_threshold": 0, 00:45:38.028 "tls_version": 0, 00:45:38.028 "enable_ktls": false 00:45:38.028 } 00:45:38.028 } 00:45:38.028 ] 00:45:38.028 }, 00:45:38.028 { 00:45:38.028 "subsystem": "vmd", 00:45:38.028 "config": [] 00:45:38.028 }, 00:45:38.028 { 00:45:38.028 "subsystem": "accel", 00:45:38.028 "config": [ 00:45:38.028 { 00:45:38.028 "method": "accel_set_options", 00:45:38.028 "params": { 00:45:38.028 "small_cache_size": 128, 00:45:38.028 "large_cache_size": 16, 00:45:38.028 "task_count": 2048, 00:45:38.028 "sequence_count": 2048, 00:45:38.028 "buf_count": 2048 00:45:38.028 } 00:45:38.028 } 00:45:38.028 ] 00:45:38.028 }, 00:45:38.028 { 00:45:38.028 "subsystem": "bdev", 00:45:38.028 "config": [ 00:45:38.028 { 00:45:38.028 "method": "bdev_set_options", 00:45:38.028 "params": { 00:45:38.028 "bdev_io_pool_size": 65535, 00:45:38.028 "bdev_io_cache_size": 256, 00:45:38.028 "bdev_auto_examine": true, 00:45:38.028 "iobuf_small_cache_size": 128, 00:45:38.028 "iobuf_large_cache_size": 16 00:45:38.028 } 00:45:38.028 }, 00:45:38.028 { 00:45:38.028 "method": "bdev_raid_set_options", 00:45:38.028 "params": { 00:45:38.028 "process_window_size_kb": 1024, 00:45:38.028 "process_max_bandwidth_mb_sec": 0 00:45:38.028 } 00:45:38.028 }, 00:45:38.028 { 00:45:38.028 "method": "bdev_iscsi_set_options", 00:45:38.028 "params": { 00:45:38.028 "timeout_sec": 30 00:45:38.028 } 00:45:38.028 }, 00:45:38.028 { 00:45:38.028 "method": "bdev_nvme_set_options", 00:45:38.028 "params": { 00:45:38.028 "action_on_timeout": "none", 00:45:38.028 "timeout_us": 0, 00:45:38.028 "timeout_admin_us": 0, 00:45:38.028 "keep_alive_timeout_ms": 10000, 00:45:38.028 "arbitration_burst": 0, 00:45:38.028 
"low_priority_weight": 0, 00:45:38.028 "medium_priority_weight": 0, 00:45:38.028 "high_priority_weight": 0, 00:45:38.028 "nvme_adminq_poll_period_us": 10000, 00:45:38.028 "nvme_ioq_poll_period_us": 0, 00:45:38.028 "io_queue_requests": 512, 00:45:38.028 "delay_cmd_submit": true, 00:45:38.028 "transport_retry_count": 4, 00:45:38.028 "bdev_retry_count": 3, 00:45:38.028 "transport_ack_timeout": 0, 00:45:38.028 "ctrlr_loss_timeout_sec": 0, 00:45:38.028 "reconnect_delay_sec": 0, 00:45:38.028 "fast_io_fail_timeout_sec": 0, 00:45:38.028 "disable_auto_failback": false, 00:45:38.028 "generate_uuids": false, 00:45:38.028 "transport_tos": 0, 00:45:38.028 "nvme_error_stat": false, 00:45:38.028 "rdma_srq_size": 0, 00:45:38.028 "io_path_stat": false, 00:45:38.028 "allow_accel_sequence": false, 00:45:38.028 "rdma_max_cq_size": 0, 00:45:38.028 "rdma_cm_event_timeout_ms": 0, 00:45:38.028 "dhchap_digests": [ 00:45:38.028 "sha256", 00:45:38.028 "sha384", 00:45:38.028 "sha512" 00:45:38.028 ], 00:45:38.028 "dhchap_dhgroups": [ 00:45:38.028 "null", 00:45:38.028 "ffdhe2048", 00:45:38.028 "ffdhe3072", 00:45:38.028 "ffdhe4096", 00:45:38.028 "ffdhe6144", 00:45:38.028 "ffdhe8192" 00:45:38.028 ] 00:45:38.028 } 00:45:38.028 }, 00:45:38.028 { 00:45:38.028 "method": "bdev_nvme_attach_controller", 00:45:38.028 "params": { 00:45:38.028 "name": "nvme0", 00:45:38.028 "trtype": "TCP", 00:45:38.028 "adrfam": "IPv4", 00:45:38.028 "traddr": "127.0.0.1", 00:45:38.028 "trsvcid": "4420", 00:45:38.028 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:38.028 "prchk_reftag": false, 00:45:38.028 "prchk_guard": false, 00:45:38.028 "ctrlr_loss_timeout_sec": 0, 00:45:38.028 "reconnect_delay_sec": 0, 00:45:38.028 "fast_io_fail_timeout_sec": 0, 00:45:38.028 "psk": "key0", 00:45:38.028 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:38.028 "hdgst": false, 00:45:38.028 "ddgst": false, 00:45:38.028 "multipath": "multipath" 00:45:38.028 } 00:45:38.028 }, 00:45:38.028 { 00:45:38.028 "method": "bdev_nvme_set_hotplug", 
00:45:38.028 "params": { 00:45:38.028 "period_us": 100000, 00:45:38.028 "enable": false 00:45:38.028 } 00:45:38.028 }, 00:45:38.028 { 00:45:38.028 "method": "bdev_wait_for_examine" 00:45:38.028 } 00:45:38.028 ] 00:45:38.028 }, 00:45:38.028 { 00:45:38.028 "subsystem": "nbd", 00:45:38.028 "config": [] 00:45:38.028 } 00:45:38.028 ] 00:45:38.028 }' 00:45:38.029 12:14:03 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:45:38.029 12:14:03 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:45:38.029 [2024-11-18 12:14:03.752731] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:45:38.029 [2024-11-18 12:14:03.752905] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3228251 ] 00:45:38.029 [2024-11-18 12:14:03.893078] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:38.289 [2024-11-18 12:14:04.015846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:45:38.548 [2024-11-18 12:14:04.433625] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:45:39.115 12:14:04 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:45:39.115 12:14:04 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:45:39.115 12:14:04 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:45:39.115 12:14:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:39.115 12:14:04 keyring_file -- keyring/file.sh@121 -- # jq length 00:45:39.115 12:14:04 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:45:39.115 12:14:04 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:45:39.115 12:14:04 keyring_file -- keyring/common.sh@12 
-- # get_key key0 00:45:39.115 12:14:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:39.115 12:14:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:39.115 12:14:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:39.115 12:14:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:39.683 12:14:05 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:45:39.683 12:14:05 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:45:39.683 12:14:05 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:45:39.683 12:14:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:39.683 12:14:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:39.683 12:14:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:39.683 12:14:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:45:39.683 12:14:05 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:45:39.683 12:14:05 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:45:39.683 12:14:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:45:39.683 12:14:05 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:45:39.942 12:14:05 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:45:39.942 12:14:05 keyring_file -- keyring/file.sh@1 -- # cleanup 00:45:39.942 12:14:05 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.OtLP2N3U5g /tmp/tmp.ZzOtOM1jxm 00:45:39.942 12:14:05 keyring_file -- keyring/file.sh@20 -- # killprocess 3228251 00:45:39.942 12:14:05 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 3228251 ']' 
00:45:39.942 12:14:05 keyring_file -- common/autotest_common.sh@958 -- # kill -0 3228251 00:45:39.942 12:14:05 keyring_file -- common/autotest_common.sh@959 -- # uname 00:45:39.942 12:14:05 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:45:39.942 12:14:05 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3228251 00:45:40.200 12:14:05 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:45:40.200 12:14:05 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:45:40.200 12:14:05 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3228251' 00:45:40.200 killing process with pid 3228251 00:45:40.200 12:14:05 keyring_file -- common/autotest_common.sh@973 -- # kill 3228251 00:45:40.200 Received shutdown signal, test time was about 1.000000 seconds 00:45:40.200 00:45:40.200 Latency(us) 00:45:40.200 [2024-11-18T11:14:06.085Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:40.200 [2024-11-18T11:14:06.085Z] =================================================================================================================== 00:45:40.200 [2024-11-18T11:14:06.085Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:45:40.200 12:14:05 keyring_file -- common/autotest_common.sh@978 -- # wait 3228251 00:45:41.135 12:14:06 keyring_file -- keyring/file.sh@21 -- # killprocess 3226511 00:45:41.135 12:14:06 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 3226511 ']' 00:45:41.135 12:14:06 keyring_file -- common/autotest_common.sh@958 -- # kill -0 3226511 00:45:41.135 12:14:06 keyring_file -- common/autotest_common.sh@959 -- # uname 00:45:41.135 12:14:06 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:45:41.135 12:14:06 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3226511 00:45:41.135 12:14:06 keyring_file -- common/autotest_common.sh@960 -- # 
process_name=reactor_0 00:45:41.135 12:14:06 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:45:41.135 12:14:06 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3226511' 00:45:41.135 killing process with pid 3226511 00:45:41.135 12:14:06 keyring_file -- common/autotest_common.sh@973 -- # kill 3226511 00:45:41.135 12:14:06 keyring_file -- common/autotest_common.sh@978 -- # wait 3226511 00:45:43.670 00:45:43.670 real 0m20.056s 00:45:43.670 user 0m45.689s 00:45:43.670 sys 0m3.677s 00:45:43.670 12:14:09 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:45:43.670 12:14:09 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:45:43.670 ************************************ 00:45:43.670 END TEST keyring_file 00:45:43.670 ************************************ 00:45:43.670 12:14:09 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:45:43.670 12:14:09 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:45:43.670 12:14:09 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:45:43.670 12:14:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:45:43.670 12:14:09 -- common/autotest_common.sh@10 -- # set +x 00:45:43.670 ************************************ 00:45:43.670 START TEST keyring_linux 00:45:43.670 ************************************ 00:45:43.670 12:14:09 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:45:43.670 Joined session keyring: 168073177 00:45:43.670 * Looking for test storage... 
00:45:43.670 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:45:43.670 12:14:09 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:45:43.670 12:14:09 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:45:43.670 12:14:09 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:45:43.670 12:14:09 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:45:43.670 12:14:09 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:45:43.670 12:14:09 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:45:43.670 12:14:09 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:45:43.670 12:14:09 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:45:43.670 12:14:09 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:45:43.670 12:14:09 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:45:43.670 12:14:09 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:45:43.670 12:14:09 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:45:43.670 12:14:09 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:45:43.670 12:14:09 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:45:43.670 12:14:09 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:45:43.670 12:14:09 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:45:43.670 12:14:09 keyring_linux -- scripts/common.sh@345 -- # : 1 00:45:43.670 12:14:09 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:45:43.670 12:14:09 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:45:43.670 12:14:09 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:45:43.670 12:14:09 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:45:43.670 12:14:09 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:45:43.670 12:14:09 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:45:43.670 12:14:09 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:45:43.670 12:14:09 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:45:43.670 12:14:09 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:45:43.670 12:14:09 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:45:43.670 12:14:09 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:45:43.670 12:14:09 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:45:43.670 12:14:09 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:45:43.670 12:14:09 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:45:43.670 12:14:09 keyring_linux -- scripts/common.sh@368 -- # return 0 00:45:43.670 12:14:09 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:45:43.670 12:14:09 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:45:43.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:43.670 --rc genhtml_branch_coverage=1 00:45:43.670 --rc genhtml_function_coverage=1 00:45:43.670 --rc genhtml_legend=1 00:45:43.670 --rc geninfo_all_blocks=1 00:45:43.670 --rc geninfo_unexecuted_blocks=1 00:45:43.670 00:45:43.670 ' 00:45:43.670 12:14:09 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:45:43.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:43.670 --rc genhtml_branch_coverage=1 00:45:43.670 --rc genhtml_function_coverage=1 00:45:43.670 --rc genhtml_legend=1 00:45:43.670 --rc geninfo_all_blocks=1 00:45:43.670 --rc geninfo_unexecuted_blocks=1 00:45:43.670 00:45:43.670 ' 
00:45:43.670 12:14:09 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:45:43.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:43.670 --rc genhtml_branch_coverage=1 00:45:43.670 --rc genhtml_function_coverage=1 00:45:43.670 --rc genhtml_legend=1 00:45:43.670 --rc geninfo_all_blocks=1 00:45:43.670 --rc geninfo_unexecuted_blocks=1 00:45:43.670 00:45:43.670 ' 00:45:43.670 12:14:09 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:45:43.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:43.670 --rc genhtml_branch_coverage=1 00:45:43.670 --rc genhtml_function_coverage=1 00:45:43.670 --rc genhtml_legend=1 00:45:43.670 --rc geninfo_all_blocks=1 00:45:43.670 --rc geninfo_unexecuted_blocks=1 00:45:43.670 00:45:43.670 ' 00:45:43.670 12:14:09 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:45:43.670 12:14:09 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:45:43.670 12:14:09 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:45:43.670 12:14:09 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:45:43.670 12:14:09 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:45:43.670 12:14:09 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:45:43.670 12:14:09 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:45:43.670 12:14:09 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:45:43.670 12:14:09 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:45:43.670 12:14:09 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:45:43.670 12:14:09 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:45:43.670 12:14:09 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:45:43.670 12:14:09 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:45:43.670 12:14:09 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:45:43.670 12:14:09 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:45:43.670 12:14:09 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:45:43.670 12:14:09 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:45:43.670 12:14:09 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:45:43.670 12:14:09 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:45:43.670 12:14:09 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:45:43.670 12:14:09 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:45:43.670 12:14:09 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:43.670 12:14:09 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:43.670 12:14:09 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:43.670 12:14:09 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:43.670 12:14:09 keyring_linux -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:43.671 12:14:09 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:43.671 12:14:09 keyring_linux -- paths/export.sh@5 -- # export PATH 00:45:43.671 12:14:09 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:43.671 12:14:09 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:45:43.671 12:14:09 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:45:43.671 12:14:09 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:45:43.671 12:14:09 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:45:43.671 12:14:09 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:45:43.671 12:14:09 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:45:43.671 12:14:09 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:45:43.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:45:43.671 12:14:09 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:45:43.671 12:14:09 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:45:43.671 12:14:09 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:45:43.671 12:14:09 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:45:43.671 12:14:09 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:45:43.671 12:14:09 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:45:43.671 12:14:09 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:45:43.671 12:14:09 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:45:43.671 12:14:09 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:45:43.671 12:14:09 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:45:43.671 12:14:09 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:45:43.671 12:14:09 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:45:43.671 12:14:09 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:45:43.671 12:14:09 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:45:43.671 12:14:09 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:45:43.671 12:14:09 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:45:43.671 12:14:09 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:45:43.671 12:14:09 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:45:43.671 12:14:09 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:45:43.671 12:14:09 keyring_linux -- nvmf/common.sh@732 -- # 
key=00112233445566778899aabbccddeeff 00:45:43.671 12:14:09 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:45:43.671 12:14:09 keyring_linux -- nvmf/common.sh@733 -- # python - 00:45:43.671 12:14:09 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:45:43.671 12:14:09 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:45:43.671 /tmp/:spdk-test:key0 00:45:43.671 12:14:09 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:45:43.671 12:14:09 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:45:43.671 12:14:09 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:45:43.671 12:14:09 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:45:43.671 12:14:09 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:45:43.671 12:14:09 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:45:43.671 12:14:09 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:45:43.671 12:14:09 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:45:43.671 12:14:09 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:45:43.671 12:14:09 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:45:43.671 12:14:09 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:45:43.671 12:14:09 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:45:43.671 12:14:09 keyring_linux -- nvmf/common.sh@733 -- # python - 00:45:43.671 12:14:09 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:45:43.671 12:14:09 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:45:43.671 /tmp/:spdk-test:key1 00:45:43.671 12:14:09 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=3229008 00:45:43.671 12:14:09 keyring_linux -- keyring/linux.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:45:43.671 12:14:09 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 3229008 00:45:43.671 12:14:09 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 3229008 ']' 00:45:43.671 12:14:09 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:43.671 12:14:09 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:45:43.671 12:14:09 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:43.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:43.671 12:14:09 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:45:43.671 12:14:09 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:45:43.671 [2024-11-18 12:14:09.416681] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:45:43.671 [2024-11-18 12:14:09.416837] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3229008 ] 00:45:43.930 [2024-11-18 12:14:09.562565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:43.930 [2024-11-18 12:14:09.702808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:45:44.866 12:14:10 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:45:44.866 12:14:10 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:45:44.866 12:14:10 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:45:44.866 12:14:10 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:44.866 12:14:10 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:45:44.866 [2024-11-18 12:14:10.678329] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:45:44.866 null0 00:45:44.866 [2024-11-18 12:14:10.710375] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:45:44.866 [2024-11-18 12:14:10.711040] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:45:44.866 12:14:10 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:44.866 12:14:10 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:45:44.866 674272206 00:45:44.866 12:14:10 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:45:44.866 999194992 00:45:44.866 12:14:10 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=3229218 00:45:44.866 12:14:10 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w 
randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:45:44.866 12:14:10 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 3229218 /var/tmp/bperf.sock 00:45:44.866 12:14:10 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 3229218 ']' 00:45:44.866 12:14:10 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:45:44.866 12:14:10 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:45:44.866 12:14:10 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:45:44.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:45:44.866 12:14:10 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:45:44.866 12:14:10 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:45:45.124 [2024-11-18 12:14:10.826578] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:45:45.124 [2024-11-18 12:14:10.826718] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3229218 ] 00:45:45.124 [2024-11-18 12:14:10.981500] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:45.382 [2024-11-18 12:14:11.117514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:45:45.948 12:14:11 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:45:45.948 12:14:11 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:45:45.948 12:14:11 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:45:45.948 12:14:11 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:45:46.206 12:14:12 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:45:46.206 12:14:12 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:45:47.144 12:14:12 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:45:47.144 12:14:12 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:45:47.144 [2024-11-18 12:14:12.925745] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:45:47.144 nvme0n1 00:45:47.402 12:14:13 keyring_linux -- keyring/linux.sh@77 
-- # check_keys 1 :spdk-test:key0 00:45:47.402 12:14:13 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:45:47.402 12:14:13 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:45:47.402 12:14:13 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:45:47.402 12:14:13 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:45:47.402 12:14:13 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:47.661 12:14:13 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:45:47.661 12:14:13 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:45:47.661 12:14:13 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:45:47.661 12:14:13 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:45:47.661 12:14:13 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:47.661 12:14:13 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:47.661 12:14:13 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:45:47.920 12:14:13 keyring_linux -- keyring/linux.sh@25 -- # sn=674272206 00:45:47.920 12:14:13 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:45:47.920 12:14:13 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:45:47.920 12:14:13 keyring_linux -- keyring/linux.sh@26 -- # [[ 674272206 == \6\7\4\2\7\2\2\0\6 ]] 00:45:47.920 12:14:13 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 674272206 00:45:47.920 12:14:13 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:45:47.920 12:14:13 keyring_linux 
-- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:45:47.920 Running I/O for 1 seconds... 00:45:48.859 7765.00 IOPS, 30.33 MiB/s 00:45:48.859 Latency(us) 00:45:48.859 [2024-11-18T11:14:14.744Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:48.859 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:45:48.859 nvme0n1 : 1.02 7776.33 30.38 0.00 0.00 16323.82 4951.61 21554.06 00:45:48.859 [2024-11-18T11:14:14.744Z] =================================================================================================================== 00:45:48.859 [2024-11-18T11:14:14.744Z] Total : 7776.33 30.38 0.00 0.00 16323.82 4951.61 21554.06 00:45:48.859 { 00:45:48.859 "results": [ 00:45:48.859 { 00:45:48.859 "job": "nvme0n1", 00:45:48.859 "core_mask": "0x2", 00:45:48.859 "workload": "randread", 00:45:48.859 "status": "finished", 00:45:48.859 "queue_depth": 128, 00:45:48.859 "io_size": 4096, 00:45:48.859 "runtime": 1.015132, 00:45:48.859 "iops": 7776.328595690019, 00:45:48.859 "mibps": 30.376283576914137, 00:45:48.859 "io_failed": 0, 00:45:48.859 "io_timeout": 0, 00:45:48.859 "avg_latency_us": 16323.822788240483, 00:45:48.859 "min_latency_us": 4951.608888888889, 00:45:48.859 "max_latency_us": 21554.062222222223 00:45:48.859 } 00:45:48.859 ], 00:45:48.859 "core_count": 1 00:45:48.859 } 00:45:48.859 12:14:14 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:45:48.859 12:14:14 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:45:49.427 12:14:15 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:45:49.427 12:14:15 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:45:49.427 12:14:15 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:45:49.427 12:14:15 
keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:45:49.427 12:14:15 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:49.427 12:14:15 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:45:49.427 12:14:15 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:45:49.427 12:14:15 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:45:49.427 12:14:15 keyring_linux -- keyring/linux.sh@23 -- # return 00:45:49.427 12:14:15 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:45:49.427 12:14:15 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:45:49.427 12:14:15 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:45:49.427 12:14:15 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:45:49.427 12:14:15 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:45:49.427 12:14:15 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:45:49.427 12:14:15 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:45:49.427 12:14:15 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:45:49.428 12:14:15 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 
-q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:45:49.687 [2024-11-18 12:14:15.552261] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:45:49.687 [2024-11-18 12:14:15.552992] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7780 (107): Transport endpoint is not connected 00:45:49.687 [2024-11-18 12:14:15.553967] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7780 (9): Bad file descriptor 00:45:49.687 [2024-11-18 12:14:15.554960] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:45:49.687 [2024-11-18 12:14:15.554997] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:45:49.687 [2024-11-18 12:14:15.555023] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:45:49.687 [2024-11-18 12:14:15.555056] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:45:49.687 request: 00:45:49.687 { 00:45:49.687 "name": "nvme0", 00:45:49.687 "trtype": "tcp", 00:45:49.687 "traddr": "127.0.0.1", 00:45:49.687 "adrfam": "ipv4", 00:45:49.687 "trsvcid": "4420", 00:45:49.687 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:49.687 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:49.687 "prchk_reftag": false, 00:45:49.687 "prchk_guard": false, 00:45:49.687 "hdgst": false, 00:45:49.687 "ddgst": false, 00:45:49.687 "psk": ":spdk-test:key1", 00:45:49.687 "allow_unrecognized_csi": false, 00:45:49.687 "method": "bdev_nvme_attach_controller", 00:45:49.687 "req_id": 1 00:45:49.687 } 00:45:49.687 Got JSON-RPC error response 00:45:49.687 response: 00:45:49.687 { 00:45:49.687 "code": -5, 00:45:49.687 "message": "Input/output error" 00:45:49.687 } 00:45:49.948 12:14:15 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:45:49.948 12:14:15 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:45:49.948 12:14:15 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:45:49.948 12:14:15 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:45:49.948 12:14:15 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:45:49.948 12:14:15 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:45:49.948 12:14:15 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:45:49.948 12:14:15 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:45:49.948 12:14:15 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:45:49.948 12:14:15 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:45:49.948 12:14:15 keyring_linux -- keyring/linux.sh@33 -- # sn=674272206 00:45:49.948 12:14:15 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 674272206 00:45:49.948 1 links removed 00:45:49.948 12:14:15 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:45:49.948 12:14:15 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:45:49.948 
12:14:15 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:45:49.948 12:14:15 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:45:49.948 12:14:15 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:45:49.948 12:14:15 keyring_linux -- keyring/linux.sh@33 -- # sn=999194992 00:45:49.948 12:14:15 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 999194992 00:45:49.948 1 links removed 00:45:49.948 12:14:15 keyring_linux -- keyring/linux.sh@41 -- # killprocess 3229218 00:45:49.948 12:14:15 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 3229218 ']' 00:45:49.948 12:14:15 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 3229218 00:45:49.948 12:14:15 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:45:49.948 12:14:15 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:45:49.948 12:14:15 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3229218 00:45:49.948 12:14:15 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:45:49.948 12:14:15 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:45:49.948 12:14:15 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3229218' 00:45:49.948 killing process with pid 3229218 00:45:49.948 12:14:15 keyring_linux -- common/autotest_common.sh@973 -- # kill 3229218 00:45:49.948 Received shutdown signal, test time was about 1.000000 seconds 00:45:49.948 00:45:49.948 Latency(us) 00:45:49.948 [2024-11-18T11:14:15.833Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:49.948 [2024-11-18T11:14:15.833Z] =================================================================================================================== 00:45:49.948 [2024-11-18T11:14:15.833Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:45:49.948 12:14:15 keyring_linux -- common/autotest_common.sh@978 -- # wait 3229218 
00:45:50.889 12:14:16 keyring_linux -- keyring/linux.sh@42 -- # killprocess 3229008 00:45:50.889 12:14:16 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 3229008 ']' 00:45:50.889 12:14:16 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 3229008 00:45:50.889 12:14:16 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:45:50.889 12:14:16 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:45:50.889 12:14:16 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3229008 00:45:50.889 12:14:16 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:45:50.889 12:14:16 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:45:50.889 12:14:16 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3229008' 00:45:50.889 killing process with pid 3229008 00:45:50.889 12:14:16 keyring_linux -- common/autotest_common.sh@973 -- # kill 3229008 00:45:50.889 12:14:16 keyring_linux -- common/autotest_common.sh@978 -- # wait 3229008 00:45:53.424 00:45:53.424 real 0m9.711s 00:45:53.424 user 0m16.757s 00:45:53.424 sys 0m1.964s 00:45:53.424 12:14:18 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:45:53.424 12:14:18 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:45:53.424 ************************************ 00:45:53.424 END TEST keyring_linux 00:45:53.424 ************************************ 00:45:53.424 12:14:18 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:45:53.424 12:14:18 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:45:53.424 12:14:18 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:45:53.424 12:14:18 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:45:53.424 12:14:18 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:45:53.424 12:14:18 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:45:53.424 12:14:18 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:45:53.424 12:14:18 -- spdk/autotest.sh@346 -- # 
'[' 0 -eq 1 ']' 00:45:53.424 12:14:18 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:45:53.424 12:14:18 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:45:53.424 12:14:18 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:45:53.424 12:14:18 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:45:53.424 12:14:18 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:45:53.424 12:14:18 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:45:53.424 12:14:18 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:45:53.424 12:14:18 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:45:53.424 12:14:18 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:45:53.424 12:14:18 -- common/autotest_common.sh@726 -- # xtrace_disable 00:45:53.424 12:14:18 -- common/autotest_common.sh@10 -- # set +x 00:45:53.424 12:14:18 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:45:53.424 12:14:18 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:45:53.424 12:14:18 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:45:53.424 12:14:18 -- common/autotest_common.sh@10 -- # set +x 00:45:54.799 INFO: APP EXITING 00:45:54.799 INFO: killing all VMs 00:45:54.799 INFO: killing vhost app 00:45:54.799 INFO: EXIT DONE 00:45:56.180 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:45:56.180 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:45:56.180 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:45:56.180 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:45:56.180 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:45:56.180 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:45:56.180 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:45:56.180 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:45:56.180 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:45:56.180 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:45:56.180 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:45:56.180 
0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:45:56.180 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:45:56.180 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:45:56.180 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:45:56.180 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:45:56.180 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:45:57.559 Cleaning 00:45:57.559 Removing: /var/run/dpdk/spdk0/config 00:45:57.559 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:45:57.559 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:45:57.559 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:45:57.559 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:45:57.559 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:45:57.559 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:45:57.559 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:45:57.559 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:45:57.559 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:45:57.559 Removing: /var/run/dpdk/spdk0/hugepage_info 00:45:57.559 Removing: /var/run/dpdk/spdk1/config 00:45:57.559 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:45:57.559 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:45:57.559 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:45:57.559 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:45:57.559 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:45:57.559 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:45:57.559 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:45:57.559 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:45:57.559 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:45:57.559 Removing: /var/run/dpdk/spdk1/hugepage_info 00:45:57.559 Removing: /var/run/dpdk/spdk2/config 00:45:57.559 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:45:57.559 
Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:45:57.559 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:45:57.559 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:45:57.559 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:45:57.559 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:45:57.560 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:45:57.560 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:45:57.560 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:45:57.560 Removing: /var/run/dpdk/spdk2/hugepage_info 00:45:57.560 Removing: /var/run/dpdk/spdk3/config 00:45:57.560 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:45:57.560 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:45:57.560 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:45:57.560 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:45:57.560 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:45:57.560 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:45:57.560 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:45:57.560 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:45:57.560 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:45:57.560 Removing: /var/run/dpdk/spdk3/hugepage_info 00:45:57.560 Removing: /var/run/dpdk/spdk4/config 00:45:57.560 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:45:57.560 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:45:57.560 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:45:57.560 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:45:57.560 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:45:57.560 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:45:57.560 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:45:57.560 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:45:57.560 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:45:57.560 Removing: /var/run/dpdk/spdk4/hugepage_info 
00:45:57.560 Removing: /dev/shm/bdev_svc_trace.1 00:45:57.560 Removing: /dev/shm/nvmf_trace.0 00:45:57.560 Removing: /dev/shm/spdk_tgt_trace.pid2815369 00:45:57.560 Removing: /var/run/dpdk/spdk0 00:45:57.560 Removing: /var/run/dpdk/spdk1 00:45:57.560 Removing: /var/run/dpdk/spdk2 00:45:57.560 Removing: /var/run/dpdk/spdk3 00:45:57.560 Removing: /var/run/dpdk/spdk4 00:45:57.560 Removing: /var/run/dpdk/spdk_pid2812478 00:45:57.560 Removing: /var/run/dpdk/spdk_pid2813617 00:45:57.560 Removing: /var/run/dpdk/spdk_pid2815369 00:45:57.560 Removing: /var/run/dpdk/spdk_pid2816097 00:45:57.560 Removing: /var/run/dpdk/spdk_pid2817332 00:45:57.560 Removing: /var/run/dpdk/spdk_pid2818086 00:45:57.560 Removing: /var/run/dpdk/spdk_pid2819073 00:45:57.560 Removing: /var/run/dpdk/spdk_pid2819214 00:45:57.560 Removing: /var/run/dpdk/spdk_pid2819854 00:45:57.560 Removing: /var/run/dpdk/spdk_pid2821326 00:45:57.560 Removing: /var/run/dpdk/spdk_pid2822487 00:45:57.560 Removing: /var/run/dpdk/spdk_pid2823110 00:45:57.560 Removing: /var/run/dpdk/spdk_pid2823695 00:45:57.560 Removing: /var/run/dpdk/spdk_pid2824183 00:45:57.560 Removing: /var/run/dpdk/spdk_pid2824785 00:45:57.560 Removing: /var/run/dpdk/spdk_pid2825021 00:45:57.560 Removing: /var/run/dpdk/spdk_pid2825227 00:45:57.560 Removing: /var/run/dpdk/spdk_pid2825539 00:45:57.560 Removing: /var/run/dpdk/spdk_pid2825864 00:45:57.560 Removing: /var/run/dpdk/spdk_pid2828632 00:45:57.560 Removing: /var/run/dpdk/spdk_pid2829188 00:45:57.560 Removing: /var/run/dpdk/spdk_pid2829741 00:45:57.560 Removing: /var/run/dpdk/spdk_pid2829884 00:45:57.560 Removing: /var/run/dpdk/spdk_pid2831115 00:45:57.560 Removing: /var/run/dpdk/spdk_pid2831259 00:45:57.560 Removing: /var/run/dpdk/spdk_pid2832609 00:45:57.560 Removing: /var/run/dpdk/spdk_pid2832761 00:45:57.560 Removing: /var/run/dpdk/spdk_pid2833191 00:45:57.819 Removing: /var/run/dpdk/spdk_pid2833330 00:45:57.819 Removing: /var/run/dpdk/spdk_pid2833750 00:45:57.819 Removing: 
/var/run/dpdk/spdk_pid2833899 00:45:57.819 Removing: /var/run/dpdk/spdk_pid2834943 00:45:57.819 Removing: /var/run/dpdk/spdk_pid2835212 00:45:57.819 Removing: /var/run/dpdk/spdk_pid2835429 00:45:57.819 Removing: /var/run/dpdk/spdk_pid2838066 00:45:57.819 Removing: /var/run/dpdk/spdk_pid2840849 00:45:57.819 Removing: /var/run/dpdk/spdk_pid2848842 00:45:57.819 Removing: /var/run/dpdk/spdk_pid2849259 00:45:57.819 Removing: /var/run/dpdk/spdk_pid2851918 00:45:57.819 Removing: /var/run/dpdk/spdk_pid2852199 00:45:57.819 Removing: /var/run/dpdk/spdk_pid2855115 00:45:57.819 Removing: /var/run/dpdk/spdk_pid2859122 00:45:57.819 Removing: /var/run/dpdk/spdk_pid2861533 00:45:57.819 Removing: /var/run/dpdk/spdk_pid2868764 00:45:57.819 Removing: /var/run/dpdk/spdk_pid2874424 00:45:57.819 Removing: /var/run/dpdk/spdk_pid2875754 00:45:57.819 Removing: /var/run/dpdk/spdk_pid2876668 00:45:57.819 Removing: /var/run/dpdk/spdk_pid2888353 00:45:57.819 Removing: /var/run/dpdk/spdk_pid2890911 00:45:57.819 Removing: /var/run/dpdk/spdk_pid2949011 00:45:57.819 Removing: /var/run/dpdk/spdk_pid2952450 00:45:57.819 Removing: /var/run/dpdk/spdk_pid2956669 00:45:57.819 Removing: /var/run/dpdk/spdk_pid2962920 00:45:57.819 Removing: /var/run/dpdk/spdk_pid2992859 00:45:57.819 Removing: /var/run/dpdk/spdk_pid2996042 00:45:57.819 Removing: /var/run/dpdk/spdk_pid2997225 00:45:57.819 Removing: /var/run/dpdk/spdk_pid2998681 00:45:57.819 Removing: /var/run/dpdk/spdk_pid2998955 00:45:57.819 Removing: /var/run/dpdk/spdk_pid2999246 00:45:57.819 Removing: /var/run/dpdk/spdk_pid2999636 00:45:57.819 Removing: /var/run/dpdk/spdk_pid3000472 00:45:57.819 Removing: /var/run/dpdk/spdk_pid3001946 00:45:57.819 Removing: /var/run/dpdk/spdk_pid3003338 00:45:57.819 Removing: /var/run/dpdk/spdk_pid3004040 00:45:57.819 Removing: /var/run/dpdk/spdk_pid3005922 00:45:57.819 Removing: /var/run/dpdk/spdk_pid3006737 00:45:57.819 Removing: /var/run/dpdk/spdk_pid3007461 00:45:57.819 Removing: /var/run/dpdk/spdk_pid3010236 
00:45:57.819 Removing: /var/run/dpdk/spdk_pid3013919 00:45:57.819 Removing: /var/run/dpdk/spdk_pid3013920 00:45:57.819 Removing: /var/run/dpdk/spdk_pid3013921 00:45:57.819 Removing: /var/run/dpdk/spdk_pid3016402 00:45:57.819 Removing: /var/run/dpdk/spdk_pid3018750 00:45:57.819 Removing: /var/run/dpdk/spdk_pid3022895 00:45:57.819 Removing: /var/run/dpdk/spdk_pid3046393 00:45:57.819 Removing: /var/run/dpdk/spdk_pid3050065 00:45:57.819 Removing: /var/run/dpdk/spdk_pid3054140 00:45:57.819 Removing: /var/run/dpdk/spdk_pid3055680 00:45:57.819 Removing: /var/run/dpdk/spdk_pid3057308 00:45:57.819 Removing: /var/run/dpdk/spdk_pid3058797 00:45:57.819 Removing: /var/run/dpdk/spdk_pid3061942 00:45:57.819 Removing: /var/run/dpdk/spdk_pid3064950 00:45:57.819 Removing: /var/run/dpdk/spdk_pid3067702 00:45:57.819 Removing: /var/run/dpdk/spdk_pid3072351 00:45:57.819 Removing: /var/run/dpdk/spdk_pid3072470 00:45:57.819 Removing: /var/run/dpdk/spdk_pid3075515 00:45:57.819 Removing: /var/run/dpdk/spdk_pid3075652 00:45:57.819 Removing: /var/run/dpdk/spdk_pid3075906 00:45:57.819 Removing: /var/run/dpdk/spdk_pid3076176 00:45:57.819 Removing: /var/run/dpdk/spdk_pid3076288 00:45:57.819 Removing: /var/run/dpdk/spdk_pid3077384 00:45:57.819 Removing: /var/run/dpdk/spdk_pid3078579 00:45:57.819 Removing: /var/run/dpdk/spdk_pid3079973 00:45:57.819 Removing: /var/run/dpdk/spdk_pid3081660 00:45:57.819 Removing: /var/run/dpdk/spdk_pid3082837 00:45:57.819 Removing: /var/run/dpdk/spdk_pid3084016 00:45:57.819 Removing: /var/run/dpdk/spdk_pid3088073 00:45:57.819 Removing: /var/run/dpdk/spdk_pid3088407 00:45:57.819 Removing: /var/run/dpdk/spdk_pid3089802 00:45:57.819 Removing: /var/run/dpdk/spdk_pid3090661 00:45:57.819 Removing: /var/run/dpdk/spdk_pid3094653 00:45:57.819 Removing: /var/run/dpdk/spdk_pid3096753 00:45:57.819 Removing: /var/run/dpdk/spdk_pid3100588 00:45:57.819 Removing: /var/run/dpdk/spdk_pid3104299 00:45:57.820 Removing: /var/run/dpdk/spdk_pid3111727 00:45:57.820 Removing: 
/var/run/dpdk/spdk_pid3116405 00:45:57.820 Removing: /var/run/dpdk/spdk_pid3116430 00:45:57.820 Removing: /var/run/dpdk/spdk_pid3129567 00:45:57.820 Removing: /var/run/dpdk/spdk_pid3130228 00:45:57.820 Removing: /var/run/dpdk/spdk_pid3130893 00:45:57.820 Removing: /var/run/dpdk/spdk_pid3131552 00:45:57.820 Removing: /var/run/dpdk/spdk_pid3132536 00:45:57.820 Removing: /var/run/dpdk/spdk_pid3133079 00:45:57.820 Removing: /var/run/dpdk/spdk_pid3133740 00:45:57.820 Removing: /var/run/dpdk/spdk_pid3134289 00:45:57.820 Removing: /var/run/dpdk/spdk_pid3137181 00:45:57.820 Removing: /var/run/dpdk/spdk_pid3137454 00:45:57.820 Removing: /var/run/dpdk/spdk_pid3141622 00:45:57.820 Removing: /var/run/dpdk/spdk_pid3142266 00:45:57.820 Removing: /var/run/dpdk/spdk_pid3145925 00:45:57.820 Removing: /var/run/dpdk/spdk_pid3148684 00:45:57.820 Removing: /var/run/dpdk/spdk_pid3155858 00:45:57.820 Removing: /var/run/dpdk/spdk_pid3156264 00:45:57.820 Removing: /var/run/dpdk/spdk_pid3158900 00:45:57.820 Removing: /var/run/dpdk/spdk_pid3159177 00:45:57.820 Removing: /var/run/dpdk/spdk_pid3162071 00:45:57.820 Removing: /var/run/dpdk/spdk_pid3166015 00:45:57.820 Removing: /var/run/dpdk/spdk_pid3168306 00:45:57.820 Removing: /var/run/dpdk/spdk_pid3175961 00:45:57.820 Removing: /var/run/dpdk/spdk_pid3181556 00:45:57.820 Removing: /var/run/dpdk/spdk_pid3182861 00:45:57.820 Removing: /var/run/dpdk/spdk_pid3183655 00:45:57.820 Removing: /var/run/dpdk/spdk_pid3194615 00:45:57.820 Removing: /var/run/dpdk/spdk_pid3197145 00:45:57.820 Removing: /var/run/dpdk/spdk_pid3199274 00:45:57.820 Removing: /var/run/dpdk/spdk_pid3204702 00:45:57.820 Removing: /var/run/dpdk/spdk_pid3204829 00:45:57.820 Removing: /var/run/dpdk/spdk_pid3207923 00:45:57.820 Removing: /var/run/dpdk/spdk_pid3210132 00:45:57.820 Removing: /var/run/dpdk/spdk_pid3211660 00:45:57.820 Removing: /var/run/dpdk/spdk_pid3212519 00:45:57.820 Removing: /var/run/dpdk/spdk_pid3214046 00:45:57.820 Removing: /var/run/dpdk/spdk_pid3215041 
00:45:57.820 Removing: /var/run/dpdk/spdk_pid3220702 00:45:58.079 Removing: /var/run/dpdk/spdk_pid3221089 00:45:58.079 Removing: /var/run/dpdk/spdk_pid3221487 00:45:58.079 Removing: /var/run/dpdk/spdk_pid3223377 00:45:58.079 Removing: /var/run/dpdk/spdk_pid3223761 00:45:58.079 Removing: /var/run/dpdk/spdk_pid3224058 00:45:58.079 Removing: /var/run/dpdk/spdk_pid3226511 00:45:58.079 Removing: /var/run/dpdk/spdk_pid3226650 00:45:58.079 Removing: /var/run/dpdk/spdk_pid3228251 00:45:58.079 Removing: /var/run/dpdk/spdk_pid3229008 00:45:58.079 Removing: /var/run/dpdk/spdk_pid3229218 00:45:58.079 Clean 00:45:58.079 12:14:23 -- common/autotest_common.sh@1453 -- # return 0 00:45:58.079 12:14:23 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:45:58.079 12:14:23 -- common/autotest_common.sh@732 -- # xtrace_disable 00:45:58.079 12:14:23 -- common/autotest_common.sh@10 -- # set +x 00:45:58.079 12:14:23 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:45:58.079 12:14:23 -- common/autotest_common.sh@732 -- # xtrace_disable 00:45:58.079 12:14:23 -- common/autotest_common.sh@10 -- # set +x 00:45:58.079 12:14:23 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:45:58.079 12:14:23 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:45:58.079 12:14:23 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:45:58.079 12:14:23 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:45:58.079 12:14:23 -- spdk/autotest.sh@398 -- # hostname 00:45:58.079 12:14:23 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:45:58.337 geninfo: WARNING: invalid characters removed from testname! 00:46:30.404 12:14:52 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:46:30.665 12:14:56 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:46:33.947 12:14:59 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:46:36.477 12:15:02 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:46:39.856 12:15:05 -- 
spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:46:42.386 12:15:07 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:46:44.918 12:15:10 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:46:44.918 12:15:10 -- spdk/autorun.sh@1 -- $ timing_finish 00:46:44.918 12:15:10 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:46:44.918 12:15:10 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:46:44.918 12:15:10 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:46:44.918 12:15:10 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:46:44.918 + [[ -n 2740118 ]] 00:46:44.918 + sudo kill 2740118 00:46:44.930 [Pipeline] } 00:46:44.942 [Pipeline] // stage 00:46:44.947 [Pipeline] } 00:46:44.958 [Pipeline] // timeout 00:46:44.963 [Pipeline] } 00:46:44.974 [Pipeline] // catchError 00:46:44.978 [Pipeline] } 00:46:44.990 [Pipeline] // wrap 00:46:44.995 [Pipeline] } 00:46:45.005 [Pipeline] // catchError 00:46:45.012 [Pipeline] stage 
00:46:45.014 [Pipeline] { (Epilogue) 00:46:45.024 [Pipeline] catchError 00:46:45.025 [Pipeline] { 00:46:45.034 [Pipeline] echo 00:46:45.035 Cleanup processes 00:46:45.039 [Pipeline] sh 00:46:45.324 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:46:45.324 3243466 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:46:45.339 [Pipeline] sh 00:46:45.629 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:46:45.629 ++ grep -v 'sudo pgrep' 00:46:45.629 ++ awk '{print $1}' 00:46:45.629 + sudo kill -9 00:46:45.629 + true 00:46:45.642 [Pipeline] sh 00:46:45.927 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:46:58.151 [Pipeline] sh 00:46:58.439 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:46:58.439 Artifacts sizes are good 00:46:58.454 [Pipeline] archiveArtifacts 00:46:58.462 Archiving artifacts 00:46:58.621 [Pipeline] sh 00:46:58.907 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:46:58.921 [Pipeline] cleanWs 00:46:58.931 [WS-CLEANUP] Deleting project workspace... 00:46:58.931 [WS-CLEANUP] Deferred wipeout is used... 00:46:58.938 [WS-CLEANUP] done 00:46:58.940 [Pipeline] } 00:46:58.957 [Pipeline] // catchError 00:46:58.969 [Pipeline] sh 00:46:59.299 + logger -p user.info -t JENKINS-CI 00:46:59.308 [Pipeline] } 00:46:59.321 [Pipeline] // stage 00:46:59.327 [Pipeline] } 00:46:59.341 [Pipeline] // node 00:46:59.347 [Pipeline] End of Pipeline 00:46:59.387 Finished: SUCCESS
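The coverage epilogue in this log merges the baseline and test lcov captures into one tracefile, then runs a series of removal passes to strip DPDK, system, and example-app paths from the combined data. A minimal dry-run sketch of that sequence is below; `output` is a placeholder for the job's `spdk/../output` directory, `run` only echoes each command, and the real autotest script also passes genhtml/geninfo `--rc` options and `--ignore-errors` on the `/usr/*` pass, which this sketch omits.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the lcov post-processing seen in this log.
# Does not require lcov to be installed: run() prints instead of executing.
set -euo pipefail

OUT=output   # placeholder for .../spdk/../output
LCOV="lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 -q"

run() { echo "$@"; }   # swap the body for: "$@"  to actually execute

# 1) merge the baseline and test captures into one tracefile (-a = add)
run $LCOV -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" \
    -o "$OUT/cov_total.info"

# 2) strip paths that should not count toward SPDK coverage (-r = remove),
#    rewriting the tracefile in place after each pass
for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' \
           '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    run $LCOV -r "$OUT/cov_total.info" "$pat" -o "$OUT/cov_total.info"
done
```

Removing the tracefile with `-r` after merging (rather than excluding at capture time) mirrors the order in the log: one `cov_total.info` is built first, then filtered repeatedly with the same output path.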